Feb 28 13:16:24 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 28 13:16:24 crc restorecon[4685]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 28 13:16:24 crc restorecon[4685]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 28 13:16:24 crc restorecon[4685]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:24 crc 
restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 13:16:24 crc restorecon[4685]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 13:16:24 crc restorecon[4685]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 13:16:24 crc restorecon[4685]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 13:16:24 crc 
restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 28 
13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 28 13:16:24 crc restorecon[4685]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 13:16:24 crc 
restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 28 13:16:24 crc restorecon[4685]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 28 13:16:24 crc restorecon[4685]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 28 13:16:24 crc restorecon[4685]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 28 13:16:24 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 28 13:16:25 crc 
restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to
system_u:object_r:container_file_t:s0:c14,c22 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 
13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 28 13:16:25 crc 
restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc 
restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 13:16:25 crc restorecon[4685]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc 
restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 
crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc 
restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc 
restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc 
restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 13:16:25 crc 
restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 13:16:25 crc restorecon[4685]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 28 13:16:25 crc restorecon[4685]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 28 13:16:25 crc restorecon[4685]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 28 13:16:26 crc kubenswrapper[4897]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 28 13:16:26 crc kubenswrapper[4897]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 28 13:16:26 crc kubenswrapper[4897]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 28 13:16:26 crc kubenswrapper[4897]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 28 13:16:26 crc kubenswrapper[4897]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 28 13:16:26 crc kubenswrapper[4897]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.196849 4897 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.206762 4897 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.206810 4897 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.206822 4897 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.206836 4897 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.206845 4897 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.206854 4897 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.206863 4897 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.206872 4897 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.206880 4897 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.206887 4897 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.206895 4897 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.206903 4897 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.206911 4897 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.206919 4897 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.206926 4897 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.206934 4897 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.206942 4897 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.206950 4897 
feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.206957 4897 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.206965 4897 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.206972 4897 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.206979 4897 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.206990 4897 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207000 4897 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207009 4897 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207044 4897 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207052 4897 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207061 4897 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207068 4897 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207076 4897 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207084 4897 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207094 
4897 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207104 4897 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207113 4897 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207121 4897 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207131 4897 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207139 4897 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207146 4897 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207156 4897 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207164 4897 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207172 4897 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207179 4897 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207187 4897 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207195 4897 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207202 4897 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207210 4897 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure 
Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207217 4897 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207226 4897 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207234 4897 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207242 4897 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207249 4897 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207260 4897 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207268 4897 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207277 4897 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207284 4897 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207292 4897 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207299 4897 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207330 4897 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207338 4897 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207349 4897 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 28 13:16:26 crc kubenswrapper[4897]: 
W0228 13:16:26.207357 4897 feature_gate.go:330] unrecognized feature gate: Example Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207365 4897 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207373 4897 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207380 4897 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207389 4897 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207396 4897 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207404 4897 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207412 4897 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207419 4897 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207427 4897 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.207437 4897 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208388 4897 flags.go:64] FLAG: --address="0.0.0.0" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208415 4897 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208432 4897 flags.go:64] FLAG: --anonymous-auth="true" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208444 4897 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 
13:16:26.208456 4897 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208466 4897 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208478 4897 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208500 4897 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208510 4897 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208519 4897 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208529 4897 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208540 4897 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208549 4897 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208558 4897 flags.go:64] FLAG: --cgroup-root="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208567 4897 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208576 4897 flags.go:64] FLAG: --client-ca-file="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208612 4897 flags.go:64] FLAG: --cloud-config="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208621 4897 flags.go:64] FLAG: --cloud-provider="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208630 4897 flags.go:64] FLAG: --cluster-dns="[]" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208641 4897 flags.go:64] FLAG: --cluster-domain="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208650 4897 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" 
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208659 4897 flags.go:64] FLAG: --config-dir="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208668 4897 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208678 4897 flags.go:64] FLAG: --container-log-max-files="5" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208689 4897 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208698 4897 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208707 4897 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208717 4897 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208726 4897 flags.go:64] FLAG: --contention-profiling="false" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208734 4897 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208744 4897 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208754 4897 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208763 4897 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208774 4897 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208783 4897 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208792 4897 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208801 4897 flags.go:64] FLAG: --enable-load-reader="false" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 
13:16:26.208809 4897 flags.go:64] FLAG: --enable-server="true" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208818 4897 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208831 4897 flags.go:64] FLAG: --event-burst="100" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208841 4897 flags.go:64] FLAG: --event-qps="50" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208850 4897 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208859 4897 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208867 4897 flags.go:64] FLAG: --eviction-hard="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208878 4897 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208887 4897 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208896 4897 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208907 4897 flags.go:64] FLAG: --eviction-soft="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208916 4897 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208924 4897 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208934 4897 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208942 4897 flags.go:64] FLAG: --experimental-mounter-path="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208951 4897 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208960 4897 flags.go:64] FLAG: --fail-swap-on="true" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 
13:16:26.208969 4897 flags.go:64] FLAG: --feature-gates="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208979 4897 flags.go:64] FLAG: --file-check-frequency="20s" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208989 4897 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.208998 4897 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209007 4897 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209017 4897 flags.go:64] FLAG: --healthz-port="10248" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209026 4897 flags.go:64] FLAG: --help="false" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209035 4897 flags.go:64] FLAG: --hostname-override="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209044 4897 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209055 4897 flags.go:64] FLAG: --http-check-frequency="20s" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209065 4897 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209074 4897 flags.go:64] FLAG: --image-credential-provider-config="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209083 4897 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209093 4897 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209102 4897 flags.go:64] FLAG: --image-service-endpoint="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209110 4897 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209119 4897 flags.go:64] FLAG: --kube-api-burst="100" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209129 
4897 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209138 4897 flags.go:64] FLAG: --kube-api-qps="50" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209147 4897 flags.go:64] FLAG: --kube-reserved="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209156 4897 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209164 4897 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209173 4897 flags.go:64] FLAG: --kubelet-cgroups="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209182 4897 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209191 4897 flags.go:64] FLAG: --lock-file="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209199 4897 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209208 4897 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209217 4897 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209243 4897 flags.go:64] FLAG: --log-json-split-stream="false" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209253 4897 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209263 4897 flags.go:64] FLAG: --log-text-split-stream="false" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209272 4897 flags.go:64] FLAG: --logging-format="text" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209281 4897 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209292 4897 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 28 13:16:26 crc 
kubenswrapper[4897]: I0228 13:16:26.209300 4897 flags.go:64] FLAG: --manifest-url="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209341 4897 flags.go:64] FLAG: --manifest-url-header="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209354 4897 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209363 4897 flags.go:64] FLAG: --max-open-files="1000000" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209374 4897 flags.go:64] FLAG: --max-pods="110" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209383 4897 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209393 4897 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209403 4897 flags.go:64] FLAG: --memory-manager-policy="None" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209412 4897 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209421 4897 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209430 4897 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209439 4897 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209459 4897 flags.go:64] FLAG: --node-status-max-images="50" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209468 4897 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209478 4897 flags.go:64] FLAG: --oom-score-adj="-999" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209487 4897 flags.go:64] FLAG: --pod-cidr="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209496 4897 
flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209509 4897 flags.go:64] FLAG: --pod-manifest-path="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209517 4897 flags.go:64] FLAG: --pod-max-pids="-1" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209527 4897 flags.go:64] FLAG: --pods-per-core="0" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209536 4897 flags.go:64] FLAG: --port="10250" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209545 4897 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209554 4897 flags.go:64] FLAG: --provider-id="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209562 4897 flags.go:64] FLAG: --qos-reserved="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209572 4897 flags.go:64] FLAG: --read-only-port="10255" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209581 4897 flags.go:64] FLAG: --register-node="true" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209589 4897 flags.go:64] FLAG: --register-schedulable="true" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209598 4897 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209613 4897 flags.go:64] FLAG: --registry-burst="10" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209621 4897 flags.go:64] FLAG: --registry-qps="5" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209630 4897 flags.go:64] FLAG: --reserved-cpus="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209640 4897 flags.go:64] FLAG: --reserved-memory="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209651 4897 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 28 13:16:26 crc 
kubenswrapper[4897]: I0228 13:16:26.209660 4897 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209669 4897 flags.go:64] FLAG: --rotate-certificates="false" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209678 4897 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209687 4897 flags.go:64] FLAG: --runonce="false" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209696 4897 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209705 4897 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209715 4897 flags.go:64] FLAG: --seccomp-default="false" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209725 4897 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209734 4897 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209743 4897 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209752 4897 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209761 4897 flags.go:64] FLAG: --storage-driver-password="root" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209770 4897 flags.go:64] FLAG: --storage-driver-secure="false" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209779 4897 flags.go:64] FLAG: --storage-driver-table="stats" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209787 4897 flags.go:64] FLAG: --storage-driver-user="root" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209796 4897 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209805 4897 flags.go:64] FLAG: 
--sync-frequency="1m0s" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209814 4897 flags.go:64] FLAG: --system-cgroups="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209824 4897 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209838 4897 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209847 4897 flags.go:64] FLAG: --tls-cert-file="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209856 4897 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209867 4897 flags.go:64] FLAG: --tls-min-version="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209876 4897 flags.go:64] FLAG: --tls-private-key-file="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209891 4897 flags.go:64] FLAG: --topology-manager-policy="none" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209900 4897 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209909 4897 flags.go:64] FLAG: --topology-manager-scope="container" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209918 4897 flags.go:64] FLAG: --v="2" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209930 4897 flags.go:64] FLAG: --version="false" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209941 4897 flags.go:64] FLAG: --vmodule="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209952 4897 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.209962 4897 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210197 4897 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210211 4897 feature_gate.go:330] 
unrecognized feature gate: PinnedImages Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210222 4897 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210231 4897 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210240 4897 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210249 4897 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210258 4897 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210267 4897 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210275 4897 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210283 4897 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210290 4897 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210298 4897 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210333 4897 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210342 4897 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210350 4897 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210358 4897 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 28 
13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210366 4897 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210374 4897 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210382 4897 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210390 4897 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210398 4897 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210406 4897 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210414 4897 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210422 4897 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210433 4897 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210441 4897 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210448 4897 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210456 4897 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210467 4897 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210476 4897 feature_gate.go:330] unrecognized feature gate: Example Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210485 4897 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210494 4897 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210502 4897 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210510 4897 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210518 4897 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210528 4897 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210539 4897 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210548 4897 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210562 4897 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210571 4897 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210579 4897 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210586 4897 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210594 4897 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210602 4897 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210610 4897 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210618 4897 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210626 4897 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210634 4897 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210642 4897 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210650 4897 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210658 4897 
feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210666 4897 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210674 4897 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210682 4897 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210690 4897 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210698 4897 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210708 4897 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210716 4897 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210724 4897 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210734 4897 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210743 4897 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210751 4897 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210760 4897 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210769 4897 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210778 4897 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210786 4897 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210794 4897 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210801 4897 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210809 4897 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210820 4897 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.210830 4897 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.210844 4897 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.224126 4897 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.224186 4897 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224400 4897 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224428 4897 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224438 4897 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224448 4897 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224457 4897 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224467 4897 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224475 4897 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224485 4897 feature_gate.go:330] unrecognized feature 
gate: NutanixMultiSubnets Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224493 4897 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224505 4897 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224517 4897 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224526 4897 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224536 4897 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224545 4897 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224554 4897 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224562 4897 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224570 4897 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224618 4897 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224628 4897 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224637 4897 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224645 4897 feature_gate.go:330] unrecognized feature gate: Example Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224653 4897 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 28 13:16:26 
crc kubenswrapper[4897]: W0228 13:16:26.224662 4897 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224670 4897 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224678 4897 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224687 4897 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224695 4897 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224707 4897 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224720 4897 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224730 4897 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224741 4897 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224751 4897 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224760 4897 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224769 4897 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224782 4897 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224792 4897 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224803 4897 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224812 4897 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224822 4897 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224831 4897 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224839 4897 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224848 4897 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224856 4897 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224865 4897 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224874 4897 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224882 4897 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224890 4897 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224902 4897 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224913 4897 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224923 4897 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224932 4897 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224941 4897 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224950 4897 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224958 4897 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224968 4897 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224977 4897 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224986 4897 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.224995 4897 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225004 4897 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225012 4897 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225021 4897 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225029 4897 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225038 4897 feature_gate.go:330] 
unrecognized feature gate: PlatformOperators Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225047 4897 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225056 4897 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225064 4897 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225073 4897 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225081 4897 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225090 4897 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225098 4897 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225108 4897 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.225122 4897 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225423 4897 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225446 4897 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes 
Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225456 4897 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225466 4897 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225475 4897 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225485 4897 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225494 4897 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225504 4897 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225513 4897 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225522 4897 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225531 4897 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225541 4897 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225550 4897 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225559 4897 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225573 4897 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225584 4897 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225594 4897 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225605 4897 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225617 4897 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225628 4897 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225639 4897 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225648 4897 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225657 4897 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225667 4897 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225676 4897 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225684 4897 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225693 4897 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225702 4897 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225712 4897 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 28 13:16:26 crc 
kubenswrapper[4897]: W0228 13:16:26.225721 4897 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225729 4897 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225737 4897 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225746 4897 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225755 4897 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225764 4897 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225774 4897 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225782 4897 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225790 4897 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225799 4897 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225807 4897 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225816 4897 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225825 4897 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225833 4897 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225841 4897 feature_gate.go:330] 
unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225850 4897 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225858 4897 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225866 4897 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225875 4897 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225883 4897 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225892 4897 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225904 4897 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225914 4897 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225923 4897 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225932 4897 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225941 4897 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225951 4897 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225959 4897 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225968 4897 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225976 4897 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225985 4897 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.225993 4897 feature_gate.go:330] unrecognized feature gate: Example Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.226002 4897 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.226010 4897 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.226018 4897 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.226026 4897 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.226035 4897 
feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.226043 4897 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.226051 4897 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.226062 4897 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.226071 4897 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.226081 4897 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.226095 4897 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.226479 4897 server.go:940] "Client rotation is on, will bootstrap in background" Feb 28 13:16:26 crc kubenswrapper[4897]: E0228 13:16:26.231414 4897 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.236370 4897 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 
13:16:26.236519 4897 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.238223 4897 server.go:997] "Starting client certificate rotation" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.238278 4897 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.238484 4897 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.267284 4897 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.271677 4897 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 28 13:16:26 crc kubenswrapper[4897]: E0228 13:16:26.272292 4897 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.290618 4897 log.go:25] "Validated CRI v1 runtime API" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.328174 4897 log.go:25] "Validated CRI v1 image API" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.330491 4897 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.338339 4897 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-28-13-11-29-00:/dev/sr0 7B77-95E7:/dev/vda2 
de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.338387 4897 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.367861 4897 manager.go:217] Machine: {Timestamp:2026-02-28 13:16:26.363459594 +0000 UTC m=+0.605780301 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:9a2b8aa6-89dd-4912-990f-d37ff5df66a2 BootID:d2fd8fce-c625-452e-ac59-c8b16ad2bd1e Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 
HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:bf:dd:bc Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:bf:dd:bc Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:9c:4d:61 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:3e:ce:04 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:3d:23:7c Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:6c:27:da Speed:-1 Mtu:1496} {Name:eth10 MacAddress:76:50:28:66:ec:92 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:92:3e:69:fb:15:76 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 
BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.368387 4897 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.368617 4897 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.370387 4897 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.370716 4897 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.370769 4897 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.371109 4897 topology_manager.go:138] "Creating topology manager with none policy"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.371128 4897 container_manager_linux.go:303] "Creating device plugin manager"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.371936 4897 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.371987 4897 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.372220 4897 state_mem.go:36] "Initialized new in-memory state store"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.372371 4897 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.377413 4897 kubelet.go:418] "Attempting to sync node with API server"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.377450 4897 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.377489 4897 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.377509 4897 kubelet.go:324] "Adding apiserver pod source"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.377527 4897 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.383249 4897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused
Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.383373 4897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused
Feb 28 13:16:26 crc kubenswrapper[4897]: E0228 13:16:26.383427 4897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError"
Feb 28 13:16:26 crc kubenswrapper[4897]: E0228 13:16:26.383575 4897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.383996 4897 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.385410 4897 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.386732 4897 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.388497 4897 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.388585 4897 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.388621 4897 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.388642 4897 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.388672 4897 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.388690 4897 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.388707 4897 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.388730 4897 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.388744 4897 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.388757 4897 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.388801 4897 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.388814 4897 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.391372 4897 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.392022 4897 server.go:1280] "Started kubelet"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.392600 4897 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.392487 4897 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 28 13:16:26 crc systemd[1]: Started Kubernetes Kubelet.
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.398661 4897 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.399608 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.400482 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.400566 4897 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.401184 4897 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.401257 4897 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.401629 4897 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 28 13:16:26 crc kubenswrapper[4897]: E0228 13:16:26.401922 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 13:16:26 crc kubenswrapper[4897]: E0228 13:16:26.402305 4897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="200ms"
Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.403057 4897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused
Feb 28 13:16:26 crc kubenswrapper[4897]: E0228 13:16:26.403113 4897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.403862 4897 factory.go:55] Registering systemd factory
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.403884 4897 factory.go:221] Registration of the systemd container factory successfully
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.409866 4897 factory.go:153] Registering CRI-O factory
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.409904 4897 factory.go:221] Registration of the crio container factory successfully
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.410019 4897 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.410075 4897 factory.go:103] Registering Raw factory
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.410101 4897 manager.go:1196] Started watching for new ooms in manager
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.410600 4897 server.go:460] "Adding debug handlers to kubelet server"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.411194 4897 manager.go:319] Starting recovery of all containers
Feb 28 13:16:26 crc kubenswrapper[4897]: E0228 13:16:26.410208 4897 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.164:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18986b70fccb584c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:26.391984204 +0000 UTC m=+0.634304901,LastTimestamp:2026-02-28 13:16:26.391984204 +0000 UTC m=+0.634304901,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.417631 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.417688 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.417707 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.417724 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.417741 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.417758 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.417775 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.417794 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.417813 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.417831 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.417851 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.417870 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.417889 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.417908 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.417925 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.417941 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.417957 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.417974 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.417990 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.418009 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.418024 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.418039 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.418053 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.418069 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.418084 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.418101 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.418120 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.418138 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.418155 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.418170 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.418184 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.418201 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.418215 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.418229 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.418244 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.418259 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.418274 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.418291 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.418328 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.418348 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.418365 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.418381 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.418397 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.418415 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.418430 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.418446 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.418463 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.418479 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.418496 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.418512 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.418526 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.418541 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.418564 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.420007 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.420028 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.421173 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.421238 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.421292 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.421351 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.421393 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.421410 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.421461 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.421478 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.421510 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.421558 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.421591 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.421622 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.421639 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.421657 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.421706 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.421739 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.421756 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.421791 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.421823 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.421854 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.421927 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.421947 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.421981 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.422017 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.422048 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.422098 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.422128 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.422167 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert"
seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.422186 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.427817 4897 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.427873 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.427892 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.427915 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.427930 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.427951 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.427967 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.427982 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.428002 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.428018 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.428031 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" 
seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.428051 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.428063 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.428080 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.429688 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.429814 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.429890 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 
13:16:26.429948 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.429983 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.430011 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.430048 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.430109 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.430153 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.430197 4897 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.430240 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.430279 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.430368 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.430418 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.430451 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.430534 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" 
volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.430582 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.430627 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.430662 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.430712 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.430754 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.430797 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" 
seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.430828 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.430863 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.430905 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.430937 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.430977 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.431010 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.431040 4897 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.431080 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.431110 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.431140 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.431236 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.431273 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.431306 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.431377 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.431408 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.432155 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.432192 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.432223 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.432258 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.432287 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.432355 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.432386 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.432413 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.432440 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.432468 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.432497 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.432555 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.432579 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.432601 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.432623 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.432646 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" 
seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.432666 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.432685 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.432704 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.432726 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.432746 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.432768 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 
13:16:26.432791 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.432810 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.432859 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.432879 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.432899 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.432920 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.432941 4897 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.432965 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.432987 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.433009 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.433029 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.433050 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.433069 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.433088 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.433110 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.433130 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.433149 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.433168 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.433187 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.433209 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.433229 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.433249 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.433269 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.433289 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.433338 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.433360 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.433382 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.433401 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.433418 4897 manager.go:324] Recovery completed
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.433424 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.435741 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.435771 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.435782 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.435793 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.435814 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.435825 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.435836 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.435846 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.435856 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.435867 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.435878 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.435888 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.435899 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.435909 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.435919 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.435928 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.435938 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.435948 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.435958 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.435967 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.435976 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.435988 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.435996 4897 reconstruct.go:97] "Volume reconstruction finished"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.436003 4897 reconciler.go:26] "Reconciler: start to sync state"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.443674 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.444982 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.445029 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.445048 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.445899 4897 cpu_manager.go:225] "Starting CPU manager" policy="none"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.445925 4897 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.445956 4897 state_mem.go:36] 
"Initialized new in-memory state store"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.453117 4897 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.454924 4897 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.454987 4897 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.455025 4897 kubelet.go:2335] "Starting kubelet main sync loop"
Feb 28 13:16:26 crc kubenswrapper[4897]: E0228 13:16:26.455248 4897 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.455671 4897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused
Feb 28 13:16:26 crc kubenswrapper[4897]: E0228 13:16:26.455810 4897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.466936 4897 policy_none.go:49] "None policy: Start"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.468178 4897 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.468218 4897 state_mem.go:35] "Initializing new in-memory state store"
Feb 28 13:16:26 crc kubenswrapper[4897]: E0228 13:16:26.502046 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.520567 4897 manager.go:334] "Starting Device Plugin manager"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.520716 4897 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.520729 4897 server.go:79] "Starting device plugin registration server"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.521097 4897 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.521113 4897 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.521269 4897 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.521432 4897 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.521445 4897 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 28 13:16:26 crc kubenswrapper[4897]: E0228 13:16:26.528112 4897 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.556437 4897 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.556567 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.558336 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.558375 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.558401 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.558541 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.558812 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.558870 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.560111 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.560144 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.560156 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.560197 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.560230 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.560246 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.560323 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.560922 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.560959 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.562061 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.562149 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.562172 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.562410 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.562657 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.562754 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.562976 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.563013 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.563025 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.564114 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.564169 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.564194 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.564204 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.564239 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.564248 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.564386 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.564558 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.564609 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.565277 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.565344 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.565366 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.565566 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.565602 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.565686 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.565724 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.565740 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.566623 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.566657 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.566668 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 13:16:26 crc kubenswrapper[4897]: E0228 13:16:26.603499 4897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="400ms"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.622222 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.623720 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.623784 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.623797 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.623846 4897 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 28 13:16:26 crc kubenswrapper[4897]: E0228 13:16:26.624379 4897 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.164:6443: connect: connection refused" node="crc"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.638157 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.638385 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.638561 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.638634 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.638682 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.638723 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 28 
13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.638756 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.638801 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.638830 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.638863 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.638909 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.638953 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.638991 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.639022 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.639069 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.740727 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.740803 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.740849 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.740891 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.740969 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.741003 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.740978 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.741049 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.741053 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.741102 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.740998 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.741145 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 28 
13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.741179 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.741206 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.741233 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.741261 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.741290 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.741352 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod 
\"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.741381 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.741411 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.741818 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.741876 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.741922 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.741998 4897 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.742040 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.742083 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.742126 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.742169 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.742216 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.742220 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.825339 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.827482 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.827551 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.827569 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.827604 4897 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 13:16:26 crc kubenswrapper[4897]: E0228 13:16:26.828215 4897 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.164:6443: connect: connection refused" node="crc" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.896684 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.925239 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.939257 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.940213 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-4331883e6437d62d3afeaa3b9b71ea9a4db483f3e4b3d12255901ee2f3209915 WatchSource:0}: Error finding container 4331883e6437d62d3afeaa3b9b71ea9a4db483f3e4b3d12255901ee2f3209915: Status 404 returned error can't find the container with id 4331883e6437d62d3afeaa3b9b71ea9a4db483f3e4b3d12255901ee2f3209915 Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.955396 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-84f8bd93af3e9da6081e14567ec6e2c065babf4d307e79a414dca54b0087596e WatchSource:0}: Error finding container 84f8bd93af3e9da6081e14567ec6e2c065babf4d307e79a414dca54b0087596e: Status 404 returned error can't find the container with id 84f8bd93af3e9da6081e14567ec6e2c065babf4d307e79a414dca54b0087596e Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.963806 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-6712eed282cf966a59325acd30d9293545c5c1569a23cdfddc99036c18fe30f2 WatchSource:0}: Error finding container 6712eed282cf966a59325acd30d9293545c5c1569a23cdfddc99036c18fe30f2: Status 404 returned error can't find the container with id 6712eed282cf966a59325acd30d9293545c5c1569a23cdfddc99036c18fe30f2 Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.965102 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 13:16:26 crc kubenswrapper[4897]: I0228 13:16:26.975670 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.988344 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-0424f2dfc525a7fe529bd297041e7f72bc0f34dd790b9ecfd3c19ea9272e00b5 WatchSource:0}: Error finding container 0424f2dfc525a7fe529bd297041e7f72bc0f34dd790b9ecfd3c19ea9272e00b5: Status 404 returned error can't find the container with id 0424f2dfc525a7fe529bd297041e7f72bc0f34dd790b9ecfd3c19ea9272e00b5 Feb 28 13:16:26 crc kubenswrapper[4897]: W0228 13:16:26.994641 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-726f9ca88e8cf19dfbd564527947c36ba68caac06daabc5f0f00a193530cbe98 WatchSource:0}: Error finding container 726f9ca88e8cf19dfbd564527947c36ba68caac06daabc5f0f00a193530cbe98: Status 404 returned error can't find the container with id 726f9ca88e8cf19dfbd564527947c36ba68caac06daabc5f0f00a193530cbe98 Feb 28 13:16:27 crc kubenswrapper[4897]: E0228 13:16:27.004629 4897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="800ms" Feb 28 13:16:27 crc kubenswrapper[4897]: I0228 13:16:27.229150 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:27 crc kubenswrapper[4897]: I0228 13:16:27.230936 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 28 13:16:27 crc kubenswrapper[4897]: I0228 13:16:27.231030 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:27 crc kubenswrapper[4897]: I0228 13:16:27.231084 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:27 crc kubenswrapper[4897]: I0228 13:16:27.231131 4897 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 13:16:27 crc kubenswrapper[4897]: E0228 13:16:27.231944 4897 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.164:6443: connect: connection refused" node="crc" Feb 28 13:16:27 crc kubenswrapper[4897]: W0228 13:16:27.300184 4897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Feb 28 13:16:27 crc kubenswrapper[4897]: E0228 13:16:27.300332 4897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Feb 28 13:16:27 crc kubenswrapper[4897]: I0228 13:16:27.400705 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Feb 28 13:16:27 crc kubenswrapper[4897]: W0228 13:16:27.437214 4897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Feb 28 13:16:27 crc kubenswrapper[4897]: E0228 13:16:27.437304 4897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Feb 28 13:16:27 crc kubenswrapper[4897]: I0228 13:16:27.477689 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"726f9ca88e8cf19dfbd564527947c36ba68caac06daabc5f0f00a193530cbe98"} Feb 28 13:16:27 crc kubenswrapper[4897]: I0228 13:16:27.478981 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0424f2dfc525a7fe529bd297041e7f72bc0f34dd790b9ecfd3c19ea9272e00b5"} Feb 28 13:16:27 crc kubenswrapper[4897]: I0228 13:16:27.479924 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6712eed282cf966a59325acd30d9293545c5c1569a23cdfddc99036c18fe30f2"} Feb 28 13:16:27 crc kubenswrapper[4897]: I0228 13:16:27.480776 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"84f8bd93af3e9da6081e14567ec6e2c065babf4d307e79a414dca54b0087596e"} Feb 28 13:16:27 crc kubenswrapper[4897]: I0228 13:16:27.481807 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"4331883e6437d62d3afeaa3b9b71ea9a4db483f3e4b3d12255901ee2f3209915"} Feb 28 13:16:27 crc kubenswrapper[4897]: W0228 13:16:27.567941 4897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Feb 28 13:16:27 crc kubenswrapper[4897]: E0228 13:16:27.568017 4897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Feb 28 13:16:27 crc kubenswrapper[4897]: E0228 13:16:27.806692 4897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="1.6s" Feb 28 13:16:27 crc kubenswrapper[4897]: W0228 13:16:27.874851 4897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Feb 28 13:16:27 crc kubenswrapper[4897]: E0228 13:16:27.874963 4897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" 
logger="UnhandledError" Feb 28 13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.032438 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.035271 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.035368 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.035389 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.035435 4897 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 13:16:28 crc kubenswrapper[4897]: E0228 13:16:28.036150 4897 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.164:6443: connect: connection refused" node="crc" Feb 28 13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.400697 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Feb 28 13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.416067 4897 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 28 13:16:28 crc kubenswrapper[4897]: E0228 13:16:28.417335 4897 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.164:6443: connect: connection 
refused" logger="UnhandledError" Feb 28 13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.487354 4897 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc" exitCode=0 Feb 28 13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.487472 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.487480 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc"} Feb 28 13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.488667 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.488707 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.488725 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.489193 4897 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6" exitCode=0 Feb 28 13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.489279 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6"} Feb 28 13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.489479 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 
13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.491534 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.491591 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.491609 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.491701 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.493098 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.493156 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.493213 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.494513 4897 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="d295150d16c402f14c3c67c120c28a2af6c908f52f6bbd462c41105d2a85d9a1" exitCode=0 Feb 28 13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.494579 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"d295150d16c402f14c3c67c120c28a2af6c908f52f6bbd462c41105d2a85d9a1"} Feb 28 13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.494595 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.495673 
4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.495709 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.495723 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.496569 4897 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="49b9f2058411062a5d44c7b7cc80ca716519919d350632dd6d9522404991279b" exitCode=0 Feb 28 13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.496659 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.496700 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"49b9f2058411062a5d44c7b7cc80ca716519919d350632dd6d9522404991279b"} Feb 28 13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.497929 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.497979 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.497999 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.500952 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8d05965131a6a61a7831069772cea99a7a0d6555aa5c42b3d5ca15d675676f5c"} Feb 28 13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.500992 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a3bdd363306ea660e09290a1c17ac8386e08a240c8255a4ec6de0b7fa1d6ee78"} Feb 28 13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.501006 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c348af3f14fc9e11ac7c0e7a96fcc3bd7f01afec95bd274df460cff777498422"} Feb 28 13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.501020 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"102c48fdb031f8a25054df3551368150b35145bb18117e8aa50758efa2490b79"} Feb 28 13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.501063 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.502483 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.502546 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:28 crc kubenswrapper[4897]: I0228 13:16:28.502572 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:29 crc kubenswrapper[4897]: I0228 13:16:29.400642 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Feb 28 13:16:29 crc kubenswrapper[4897]: E0228 13:16:29.407160 4897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="3.2s" Feb 28 13:16:29 crc kubenswrapper[4897]: I0228 13:16:29.504591 4897 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6" exitCode=0 Feb 28 13:16:29 crc kubenswrapper[4897]: I0228 13:16:29.504686 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:29 crc kubenswrapper[4897]: I0228 13:16:29.504704 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6"} Feb 28 13:16:29 crc kubenswrapper[4897]: I0228 13:16:29.505576 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:29 crc kubenswrapper[4897]: I0228 13:16:29.505615 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:29 crc kubenswrapper[4897]: I0228 13:16:29.505630 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:29 crc kubenswrapper[4897]: I0228 13:16:29.512658 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"ed409410f11bb36ef18ebe7d8ac2d239b6821eaa1dfed94692ed27e06b4ece50"} Feb 28 13:16:29 crc kubenswrapper[4897]: I0228 13:16:29.512725 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:29 crc kubenswrapper[4897]: I0228 13:16:29.513426 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:29 crc kubenswrapper[4897]: I0228 13:16:29.513456 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:29 crc kubenswrapper[4897]: I0228 13:16:29.513466 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:29 crc kubenswrapper[4897]: I0228 13:16:29.518148 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a93eb1d27bca062fcd34fec5ed888121040d1591d47e8e096596aa941980e73b"} Feb 28 13:16:29 crc kubenswrapper[4897]: I0228 13:16:29.518184 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"61775466ffc1823f865281b6bb4aff1e9769d9ed663eb2cffc6499e5d7a80549"} Feb 28 13:16:29 crc kubenswrapper[4897]: I0228 13:16:29.518194 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"3f8af0215561a1b01ae318b40800de6afba504e149f87ace68c8a481abd66712"} Feb 28 13:16:29 crc kubenswrapper[4897]: I0228 13:16:29.518385 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:29 crc 
kubenswrapper[4897]: I0228 13:16:29.520933 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:29 crc kubenswrapper[4897]: I0228 13:16:29.520961 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:29 crc kubenswrapper[4897]: I0228 13:16:29.520969 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:29 crc kubenswrapper[4897]: I0228 13:16:29.525895 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488"} Feb 28 13:16:29 crc kubenswrapper[4897]: I0228 13:16:29.525939 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925"} Feb 28 13:16:29 crc kubenswrapper[4897]: I0228 13:16:29.525947 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:29 crc kubenswrapper[4897]: I0228 13:16:29.525950 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b"} Feb 28 13:16:29 crc kubenswrapper[4897]: W0228 13:16:29.526409 4897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Feb 28 13:16:29 crc kubenswrapper[4897]: E0228 
13:16:29.526546 4897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Feb 28 13:16:29 crc kubenswrapper[4897]: I0228 13:16:29.527793 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:29 crc kubenswrapper[4897]: I0228 13:16:29.527832 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:29 crc kubenswrapper[4897]: I0228 13:16:29.527847 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:29 crc kubenswrapper[4897]: W0228 13:16:29.547924 4897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Feb 28 13:16:29 crc kubenswrapper[4897]: E0228 13:16:29.548001 4897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Feb 28 13:16:29 crc kubenswrapper[4897]: I0228 13:16:29.637296 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:29 crc kubenswrapper[4897]: I0228 13:16:29.638334 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:29 crc kubenswrapper[4897]: I0228 
13:16:29.638371 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:29 crc kubenswrapper[4897]: I0228 13:16:29.638384 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:29 crc kubenswrapper[4897]: I0228 13:16:29.638407 4897 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 13:16:29 crc kubenswrapper[4897]: E0228 13:16:29.638807 4897 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.164:6443: connect: connection refused" node="crc" Feb 28 13:16:29 crc kubenswrapper[4897]: I0228 13:16:29.985615 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 13:16:30 crc kubenswrapper[4897]: I0228 13:16:30.533221 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f"} Feb 28 13:16:30 crc kubenswrapper[4897]: I0228 13:16:30.533370 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"febe0288776e1cf23cf294af7809efb8abaef27e0cd89af6d117798d7ddf2e13"} Feb 28 13:16:30 crc kubenswrapper[4897]: I0228 13:16:30.533377 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:30 crc kubenswrapper[4897]: I0228 13:16:30.534605 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:30 crc kubenswrapper[4897]: I0228 13:16:30.534640 4897 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:30 crc kubenswrapper[4897]: I0228 13:16:30.534652 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:30 crc kubenswrapper[4897]: I0228 13:16:30.536015 4897 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d" exitCode=0 Feb 28 13:16:30 crc kubenswrapper[4897]: I0228 13:16:30.536086 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d"} Feb 28 13:16:30 crc kubenswrapper[4897]: I0228 13:16:30.536121 4897 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 28 13:16:30 crc kubenswrapper[4897]: I0228 13:16:30.536154 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:30 crc kubenswrapper[4897]: I0228 13:16:30.536194 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:30 crc kubenswrapper[4897]: I0228 13:16:30.536256 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:30 crc kubenswrapper[4897]: I0228 13:16:30.536204 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:30 crc kubenswrapper[4897]: I0228 13:16:30.537389 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:30 crc kubenswrapper[4897]: I0228 13:16:30.537428 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:30 crc kubenswrapper[4897]: I0228 13:16:30.537445 4897 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:30 crc kubenswrapper[4897]: I0228 13:16:30.538208 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:30 crc kubenswrapper[4897]: I0228 13:16:30.538242 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:30 crc kubenswrapper[4897]: I0228 13:16:30.538254 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:30 crc kubenswrapper[4897]: I0228 13:16:30.538737 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:30 crc kubenswrapper[4897]: I0228 13:16:30.538750 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:30 crc kubenswrapper[4897]: I0228 13:16:30.538803 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:30 crc kubenswrapper[4897]: I0228 13:16:30.538822 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:30 crc kubenswrapper[4897]: I0228 13:16:30.538832 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:30 crc kubenswrapper[4897]: I0228 13:16:30.538837 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:31 crc kubenswrapper[4897]: I0228 13:16:31.108346 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 13:16:31 crc kubenswrapper[4897]: I0228 13:16:31.545093 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"dd666d737612906cd221c611576137054485e66782603973709d756be628e71c"} Feb 28 13:16:31 crc kubenswrapper[4897]: I0228 13:16:31.545172 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"71e9a3dd1f8ee7245b4185ab8828852983776a3725c0e0202031f019e4d1cbc6"} Feb 28 13:16:31 crc kubenswrapper[4897]: I0228 13:16:31.545206 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a0328bcb49b08da8ee30c55848f414ab75bd375c851b0e6e9faecfb47fe9d97b"} Feb 28 13:16:31 crc kubenswrapper[4897]: I0228 13:16:31.545235 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:31 crc kubenswrapper[4897]: I0228 13:16:31.545334 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 13:16:31 crc kubenswrapper[4897]: I0228 13:16:31.545354 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:31 crc kubenswrapper[4897]: I0228 13:16:31.546285 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:31 crc kubenswrapper[4897]: I0228 13:16:31.546319 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:31 crc kubenswrapper[4897]: I0228 13:16:31.546328 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:31 crc kubenswrapper[4897]: I0228 13:16:31.547955 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:31 crc 
kubenswrapper[4897]: I0228 13:16:31.547983 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:31 crc kubenswrapper[4897]: I0228 13:16:31.547995 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:32 crc kubenswrapper[4897]: I0228 13:16:32.225655 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 13:16:32 crc kubenswrapper[4897]: I0228 13:16:32.243238 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 13:16:32 crc kubenswrapper[4897]: I0228 13:16:32.252662 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 13:16:32 crc kubenswrapper[4897]: I0228 13:16:32.553758 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"16305ee570c789c9b80ebffbea166bdb4fc0aa0ccddb48f8733c196c5bbc83da"} Feb 28 13:16:32 crc kubenswrapper[4897]: I0228 13:16:32.553850 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"696c564d2a6ac76545e63bc8c1cf09ebbad28f954f1986e6a6a272c23bda9207"} Feb 28 13:16:32 crc kubenswrapper[4897]: I0228 13:16:32.553883 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:32 crc kubenswrapper[4897]: I0228 13:16:32.553997 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:32 crc kubenswrapper[4897]: I0228 13:16:32.554078 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Feb 28 13:16:32 crc kubenswrapper[4897]: I0228 13:16:32.555867 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:32 crc kubenswrapper[4897]: I0228 13:16:32.555957 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:32 crc kubenswrapper[4897]: I0228 13:16:32.555995 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:32 crc kubenswrapper[4897]: I0228 13:16:32.556687 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:32 crc kubenswrapper[4897]: I0228 13:16:32.556749 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:32 crc kubenswrapper[4897]: I0228 13:16:32.556772 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:32 crc kubenswrapper[4897]: I0228 13:16:32.556786 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:32 crc kubenswrapper[4897]: I0228 13:16:32.556836 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:32 crc kubenswrapper[4897]: I0228 13:16:32.556859 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:32 crc kubenswrapper[4897]: I0228 13:16:32.575488 4897 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 28 13:16:32 crc kubenswrapper[4897]: I0228 13:16:32.839752 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:32 crc kubenswrapper[4897]: I0228 13:16:32.841157 4897 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:32 crc kubenswrapper[4897]: I0228 13:16:32.841202 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:32 crc kubenswrapper[4897]: I0228 13:16:32.841214 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:32 crc kubenswrapper[4897]: I0228 13:16:32.841238 4897 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 13:16:32 crc kubenswrapper[4897]: I0228 13:16:32.985753 4897 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 28 13:16:32 crc kubenswrapper[4897]: I0228 13:16:32.985859 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 28 13:16:33 crc kubenswrapper[4897]: I0228 13:16:33.556279 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:33 crc kubenswrapper[4897]: I0228 13:16:33.556348 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:33 crc kubenswrapper[4897]: I0228 13:16:33.558109 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:33 crc kubenswrapper[4897]: I0228 13:16:33.558173 4897 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:33 crc kubenswrapper[4897]: I0228 13:16:33.558195 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:33 crc kubenswrapper[4897]: I0228 13:16:33.558203 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:33 crc kubenswrapper[4897]: I0228 13:16:33.558246 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:33 crc kubenswrapper[4897]: I0228 13:16:33.558269 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:33 crc kubenswrapper[4897]: I0228 13:16:33.789432 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 13:16:33 crc kubenswrapper[4897]: I0228 13:16:33.789622 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:33 crc kubenswrapper[4897]: I0228 13:16:33.791008 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:33 crc kubenswrapper[4897]: I0228 13:16:33.791066 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:33 crc kubenswrapper[4897]: I0228 13:16:33.791078 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:34 crc kubenswrapper[4897]: I0228 13:16:34.307830 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 28 13:16:34 crc kubenswrapper[4897]: I0228 13:16:34.308199 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:34 crc kubenswrapper[4897]: I0228 
13:16:34.309939 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:34 crc kubenswrapper[4897]: I0228 13:16:34.309997 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:34 crc kubenswrapper[4897]: I0228 13:16:34.310007 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:34 crc kubenswrapper[4897]: I0228 13:16:34.859955 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 28 13:16:34 crc kubenswrapper[4897]: I0228 13:16:34.860294 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:34 crc kubenswrapper[4897]: I0228 13:16:34.862251 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:34 crc kubenswrapper[4897]: I0228 13:16:34.862380 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:34 crc kubenswrapper[4897]: I0228 13:16:34.862401 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:35 crc kubenswrapper[4897]: I0228 13:16:35.138979 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 13:16:35 crc kubenswrapper[4897]: I0228 13:16:35.139179 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:35 crc kubenswrapper[4897]: I0228 13:16:35.140481 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:35 crc kubenswrapper[4897]: I0228 13:16:35.140533 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 28 13:16:35 crc kubenswrapper[4897]: I0228 13:16:35.140554 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:36 crc kubenswrapper[4897]: E0228 13:16:36.528269 4897 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 28 13:16:36 crc kubenswrapper[4897]: I0228 13:16:36.802236 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 28 13:16:36 crc kubenswrapper[4897]: I0228 13:16:36.804559 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:36 crc kubenswrapper[4897]: I0228 13:16:36.807522 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:36 crc kubenswrapper[4897]: I0228 13:16:36.807594 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:36 crc kubenswrapper[4897]: I0228 13:16:36.807615 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:40 crc kubenswrapper[4897]: W0228 13:16:40.316922 4897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 28 13:16:40 crc kubenswrapper[4897]: I0228 13:16:40.317022 4897 trace.go:236] Trace[73724934]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Feb-2026 13:16:30.315) (total time: 10001ms): Feb 28 13:16:40 crc kubenswrapper[4897]: Trace[73724934]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:16:40.316) Feb 
28 13:16:40 crc kubenswrapper[4897]: Trace[73724934]: [10.001730075s] [10.001730075s] END Feb 28 13:16:40 crc kubenswrapper[4897]: E0228 13:16:40.317047 4897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 28 13:16:40 crc kubenswrapper[4897]: I0228 13:16:40.401570 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 28 13:16:40 crc kubenswrapper[4897]: W0228 13:16:40.692709 4897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 28 13:16:40 crc kubenswrapper[4897]: I0228 13:16:40.692814 4897 trace.go:236] Trace[1913097598]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Feb-2026 13:16:30.691) (total time: 10001ms): Feb 28 13:16:40 crc kubenswrapper[4897]: Trace[1913097598]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:16:40.692) Feb 28 13:16:40 crc kubenswrapper[4897]: Trace[1913097598]: [10.001612452s] [10.001612452s] END Feb 28 13:16:40 crc kubenswrapper[4897]: E0228 13:16:40.692842 4897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 28 13:16:40 
crc kubenswrapper[4897]: E0228 13:16:40.971430 4897 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:16:40Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.18986b70fccb584c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:26.391984204 +0000 UTC m=+0.634304901,LastTimestamp:2026-02-28 13:16:26.391984204 +0000 UTC m=+0.634304901,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:40 crc kubenswrapper[4897]: E0228 13:16:40.983546 4897 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:16:40Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 28 13:16:40 crc kubenswrapper[4897]: E0228 13:16:40.988640 4897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:16:40Z is after 2026-02-23T05:33:13Z" interval="6.4s" Feb 28 13:16:40 crc kubenswrapper[4897]: W0228 13:16:40.992084 4897 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:16:40Z is after 2026-02-23T05:33:13Z Feb 28 13:16:40 crc kubenswrapper[4897]: E0228 13:16:40.992175 4897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:16:40Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 28 13:16:40 crc kubenswrapper[4897]: E0228 13:16:40.992773 4897 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:16:40Z is after 2026-02-23T05:33:13Z" node="crc" Feb 28 13:16:40 crc kubenswrapper[4897]: W0228 13:16:40.997563 4897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:16:40Z is after 2026-02-23T05:33:13Z Feb 28 13:16:40 crc kubenswrapper[4897]: E0228 13:16:40.997658 4897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-28T13:16:40Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 28 13:16:40 crc kubenswrapper[4897]: I0228 13:16:40.999194 4897 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 28 13:16:40 crc kubenswrapper[4897]: [+]log ok Feb 28 13:16:40 crc kubenswrapper[4897]: [+]etcd ok Feb 28 13:16:40 crc kubenswrapper[4897]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 28 13:16:40 crc kubenswrapper[4897]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 28 13:16:40 crc kubenswrapper[4897]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 28 13:16:40 crc kubenswrapper[4897]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 28 13:16:40 crc kubenswrapper[4897]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 28 13:16:40 crc kubenswrapper[4897]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 28 13:16:40 crc kubenswrapper[4897]: [+]poststarthook/generic-apiserver-start-informers ok Feb 28 13:16:40 crc kubenswrapper[4897]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 28 13:16:40 crc kubenswrapper[4897]: [+]poststarthook/priority-and-fairness-filter ok Feb 28 13:16:40 crc kubenswrapper[4897]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 28 13:16:40 crc kubenswrapper[4897]: [+]poststarthook/start-apiextensions-informers ok Feb 28 13:16:40 crc kubenswrapper[4897]: [-]poststarthook/start-apiextensions-controllers failed: reason withheld Feb 28 13:16:40 crc kubenswrapper[4897]: [-]poststarthook/crd-informer-synced failed: reason withheld Feb 28 13:16:40 crc kubenswrapper[4897]: [+]poststarthook/start-system-namespaces-controller ok Feb 28 13:16:40 crc kubenswrapper[4897]: 
[+]poststarthook/start-cluster-authentication-info-controller ok Feb 28 13:16:40 crc kubenswrapper[4897]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 28 13:16:40 crc kubenswrapper[4897]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 28 13:16:40 crc kubenswrapper[4897]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 28 13:16:40 crc kubenswrapper[4897]: [-]poststarthook/start-service-ip-repair-controllers failed: reason withheld Feb 28 13:16:40 crc kubenswrapper[4897]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 28 13:16:40 crc kubenswrapper[4897]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Feb 28 13:16:40 crc kubenswrapper[4897]: [-]poststarthook/priority-and-fairness-config-producer failed: reason withheld Feb 28 13:16:40 crc kubenswrapper[4897]: [-]poststarthook/bootstrap-controller failed: reason withheld Feb 28 13:16:40 crc kubenswrapper[4897]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 28 13:16:40 crc kubenswrapper[4897]: [+]poststarthook/start-kube-aggregator-informers ok Feb 28 13:16:40 crc kubenswrapper[4897]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 28 13:16:40 crc kubenswrapper[4897]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 28 13:16:40 crc kubenswrapper[4897]: [-]poststarthook/apiservice-registration-controller failed: reason withheld Feb 28 13:16:40 crc kubenswrapper[4897]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 28 13:16:40 crc kubenswrapper[4897]: [-]poststarthook/apiservice-discovery-controller failed: reason withheld Feb 28 13:16:40 crc kubenswrapper[4897]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 28 13:16:40 crc kubenswrapper[4897]: [+]autoregister-completion ok Feb 28 13:16:40 crc kubenswrapper[4897]: [+]poststarthook/apiservice-openapi-controller ok Feb 28 13:16:40 crc kubenswrapper[4897]: 
[+]poststarthook/apiservice-openapiv3-controller ok Feb 28 13:16:40 crc kubenswrapper[4897]: livez check failed Feb 28 13:16:41 crc kubenswrapper[4897]: I0228 13:16:40.999285 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 13:16:41 crc kubenswrapper[4897]: I0228 13:16:41.005800 4897 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 28 13:16:41 crc kubenswrapper[4897]: [+]log ok Feb 28 13:16:41 crc kubenswrapper[4897]: [+]etcd ok Feb 28 13:16:41 crc kubenswrapper[4897]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 28 13:16:41 crc kubenswrapper[4897]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 28 13:16:41 crc kubenswrapper[4897]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 28 13:16:41 crc kubenswrapper[4897]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 28 13:16:41 crc kubenswrapper[4897]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 28 13:16:41 crc kubenswrapper[4897]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 28 13:16:41 crc kubenswrapper[4897]: [+]poststarthook/generic-apiserver-start-informers ok Feb 28 13:16:41 crc kubenswrapper[4897]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 28 13:16:41 crc kubenswrapper[4897]: [+]poststarthook/priority-and-fairness-filter ok Feb 28 13:16:41 crc kubenswrapper[4897]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 28 13:16:41 crc kubenswrapper[4897]: [+]poststarthook/start-apiextensions-informers ok Feb 28 13:16:41 crc kubenswrapper[4897]: [-]poststarthook/start-apiextensions-controllers failed: reason withheld Feb 28 
13:16:41 crc kubenswrapper[4897]: [-]poststarthook/crd-informer-synced failed: reason withheld Feb 28 13:16:41 crc kubenswrapper[4897]: [+]poststarthook/start-system-namespaces-controller ok Feb 28 13:16:41 crc kubenswrapper[4897]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 28 13:16:41 crc kubenswrapper[4897]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 28 13:16:41 crc kubenswrapper[4897]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 28 13:16:41 crc kubenswrapper[4897]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 28 13:16:41 crc kubenswrapper[4897]: [-]poststarthook/start-service-ip-repair-controllers failed: reason withheld Feb 28 13:16:41 crc kubenswrapper[4897]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 28 13:16:41 crc kubenswrapper[4897]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Feb 28 13:16:41 crc kubenswrapper[4897]: [-]poststarthook/priority-and-fairness-config-producer failed: reason withheld Feb 28 13:16:41 crc kubenswrapper[4897]: [-]poststarthook/bootstrap-controller failed: reason withheld Feb 28 13:16:41 crc kubenswrapper[4897]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 28 13:16:41 crc kubenswrapper[4897]: [+]poststarthook/start-kube-aggregator-informers ok Feb 28 13:16:41 crc kubenswrapper[4897]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 28 13:16:41 crc kubenswrapper[4897]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 28 13:16:41 crc kubenswrapper[4897]: [-]poststarthook/apiservice-registration-controller failed: reason withheld Feb 28 13:16:41 crc kubenswrapper[4897]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 28 13:16:41 crc kubenswrapper[4897]: [-]poststarthook/apiservice-discovery-controller failed: reason withheld Feb 28 13:16:41 crc kubenswrapper[4897]: 
[+]poststarthook/kube-apiserver-autoregistration ok Feb 28 13:16:41 crc kubenswrapper[4897]: [+]autoregister-completion ok Feb 28 13:16:41 crc kubenswrapper[4897]: [+]poststarthook/apiservice-openapi-controller ok Feb 28 13:16:41 crc kubenswrapper[4897]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 28 13:16:41 crc kubenswrapper[4897]: livez check failed Feb 28 13:16:41 crc kubenswrapper[4897]: I0228 13:16:41.005864 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 13:16:41 crc kubenswrapper[4897]: I0228 13:16:41.006436 4897 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:52738->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 28 13:16:41 crc kubenswrapper[4897]: I0228 13:16:41.006472 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:52738->192.168.126.11:17697: read: connection reset by peer" Feb 28 13:16:41 crc kubenswrapper[4897]: I0228 13:16:41.404276 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:16:41Z is after 2026-02-23T05:33:13Z Feb 28 13:16:41 crc kubenswrapper[4897]: I0228 13:16:41.578258 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 28 13:16:41 crc kubenswrapper[4897]: I0228 13:16:41.581185 4897 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="febe0288776e1cf23cf294af7809efb8abaef27e0cd89af6d117798d7ddf2e13" exitCode=255 Feb 28 13:16:41 crc kubenswrapper[4897]: I0228 13:16:41.581232 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"febe0288776e1cf23cf294af7809efb8abaef27e0cd89af6d117798d7ddf2e13"} Feb 28 13:16:41 crc kubenswrapper[4897]: I0228 13:16:41.581398 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:41 crc kubenswrapper[4897]: I0228 13:16:41.582383 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:41 crc kubenswrapper[4897]: I0228 13:16:41.582445 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:41 crc kubenswrapper[4897]: I0228 13:16:41.582464 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:41 crc kubenswrapper[4897]: I0228 13:16:41.583395 4897 scope.go:117] "RemoveContainer" containerID="febe0288776e1cf23cf294af7809efb8abaef27e0cd89af6d117798d7ddf2e13" Feb 28 13:16:42 crc kubenswrapper[4897]: I0228 13:16:42.231425 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 13:16:42 crc kubenswrapper[4897]: I0228 13:16:42.231575 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:42 crc kubenswrapper[4897]: I0228 13:16:42.232518 4897 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:42 crc kubenswrapper[4897]: I0228 13:16:42.232560 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:42 crc kubenswrapper[4897]: I0228 13:16:42.232573 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:42 crc kubenswrapper[4897]: I0228 13:16:42.403192 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:16:42Z is after 2026-02-23T05:33:13Z Feb 28 13:16:42 crc kubenswrapper[4897]: I0228 13:16:42.585764 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 28 13:16:42 crc kubenswrapper[4897]: I0228 13:16:42.588160 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d24afa0c2de45f47ef3aec29b6bdf9a5b7f4d09146024b2f74230d912bd1129f"} Feb 28 13:16:42 crc kubenswrapper[4897]: I0228 13:16:42.588428 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:42 crc kubenswrapper[4897]: I0228 13:16:42.589289 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:42 crc kubenswrapper[4897]: I0228 13:16:42.589390 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:42 crc kubenswrapper[4897]: I0228 13:16:42.589416 4897 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:42 crc kubenswrapper[4897]: I0228 13:16:42.986005 4897 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 28 13:16:42 crc kubenswrapper[4897]: I0228 13:16:42.986132 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 28 13:16:43 crc kubenswrapper[4897]: I0228 13:16:43.404283 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:16:43Z is after 2026-02-23T05:33:13Z Feb 28 13:16:43 crc kubenswrapper[4897]: I0228 13:16:43.594855 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 28 13:16:43 crc kubenswrapper[4897]: I0228 13:16:43.596691 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 28 13:16:43 crc kubenswrapper[4897]: I0228 13:16:43.599058 4897 generic.go:334] "Generic (PLEG): container finished" 
podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d24afa0c2de45f47ef3aec29b6bdf9a5b7f4d09146024b2f74230d912bd1129f" exitCode=255 Feb 28 13:16:43 crc kubenswrapper[4897]: I0228 13:16:43.599115 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"d24afa0c2de45f47ef3aec29b6bdf9a5b7f4d09146024b2f74230d912bd1129f"} Feb 28 13:16:43 crc kubenswrapper[4897]: I0228 13:16:43.599180 4897 scope.go:117] "RemoveContainer" containerID="febe0288776e1cf23cf294af7809efb8abaef27e0cd89af6d117798d7ddf2e13" Feb 28 13:16:43 crc kubenswrapper[4897]: I0228 13:16:43.599426 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:43 crc kubenswrapper[4897]: I0228 13:16:43.600949 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:43 crc kubenswrapper[4897]: I0228 13:16:43.601024 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:43 crc kubenswrapper[4897]: I0228 13:16:43.601053 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:43 crc kubenswrapper[4897]: I0228 13:16:43.601849 4897 scope.go:117] "RemoveContainer" containerID="d24afa0c2de45f47ef3aec29b6bdf9a5b7f4d09146024b2f74230d912bd1129f" Feb 28 13:16:43 crc kubenswrapper[4897]: E0228 13:16:43.602140 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 13:16:44 crc kubenswrapper[4897]: W0228 
13:16:44.016001 4897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:16:44Z is after 2026-02-23T05:33:13Z Feb 28 13:16:44 crc kubenswrapper[4897]: E0228 13:16:44.016467 4897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:16:44Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 28 13:16:44 crc kubenswrapper[4897]: I0228 13:16:44.405889 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:16:44Z is after 2026-02-23T05:33:13Z Feb 28 13:16:44 crc kubenswrapper[4897]: I0228 13:16:44.603997 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 28 13:16:44 crc kubenswrapper[4897]: I0228 13:16:44.894143 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 28 13:16:44 crc kubenswrapper[4897]: I0228 13:16:44.894493 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:44 crc kubenswrapper[4897]: I0228 13:16:44.896061 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 28 13:16:44 crc kubenswrapper[4897]: I0228 13:16:44.896113 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:44 crc kubenswrapper[4897]: I0228 13:16:44.896132 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:44 crc kubenswrapper[4897]: I0228 13:16:44.916776 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 28 13:16:45 crc kubenswrapper[4897]: I0228 13:16:45.143474 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 13:16:45 crc kubenswrapper[4897]: I0228 13:16:45.143684 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:45 crc kubenswrapper[4897]: I0228 13:16:45.145151 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:45 crc kubenswrapper[4897]: I0228 13:16:45.145204 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:45 crc kubenswrapper[4897]: I0228 13:16:45.145218 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:45 crc kubenswrapper[4897]: I0228 13:16:45.145943 4897 scope.go:117] "RemoveContainer" containerID="d24afa0c2de45f47ef3aec29b6bdf9a5b7f4d09146024b2f74230d912bd1129f" Feb 28 13:16:45 crc kubenswrapper[4897]: E0228 13:16:45.146203 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 13:16:45 crc kubenswrapper[4897]: I0228 13:16:45.148991 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 13:16:45 crc kubenswrapper[4897]: I0228 13:16:45.405415 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:16:45Z is after 2026-02-23T05:33:13Z Feb 28 13:16:45 crc kubenswrapper[4897]: I0228 13:16:45.610032 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:45 crc kubenswrapper[4897]: I0228 13:16:45.610726 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:45 crc kubenswrapper[4897]: I0228 13:16:45.611248 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:45 crc kubenswrapper[4897]: I0228 13:16:45.611277 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:45 crc kubenswrapper[4897]: I0228 13:16:45.611286 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:45 crc kubenswrapper[4897]: I0228 13:16:45.611716 4897 scope.go:117] "RemoveContainer" containerID="d24afa0c2de45f47ef3aec29b6bdf9a5b7f4d09146024b2f74230d912bd1129f" Feb 28 13:16:45 crc kubenswrapper[4897]: E0228 13:16:45.611867 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 13:16:45 crc kubenswrapper[4897]: I0228 13:16:45.612136 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:45 crc kubenswrapper[4897]: I0228 13:16:45.612226 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:45 crc kubenswrapper[4897]: I0228 13:16:45.612247 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:45 crc kubenswrapper[4897]: W0228 13:16:45.709129 4897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:16:45Z is after 2026-02-23T05:33:13Z Feb 28 13:16:45 crc kubenswrapper[4897]: E0228 13:16:45.709211 4897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:16:45Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 28 13:16:46 crc kubenswrapper[4897]: I0228 13:16:46.404862 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:16:46Z is after 2026-02-23T05:33:13Z Feb 
28 13:16:46 crc kubenswrapper[4897]: E0228 13:16:46.528433 4897 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 28 13:16:47 crc kubenswrapper[4897]: I0228 13:16:47.393281 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:47 crc kubenswrapper[4897]: I0228 13:16:47.395477 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:47 crc kubenswrapper[4897]: I0228 13:16:47.395567 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:47 crc kubenswrapper[4897]: I0228 13:16:47.395587 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:47 crc kubenswrapper[4897]: I0228 13:16:47.395626 4897 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 13:16:47 crc kubenswrapper[4897]: E0228 13:16:47.396957 4897 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 28 13:16:47 crc kubenswrapper[4897]: E0228 13:16:47.397433 4897 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 28 13:16:47 crc kubenswrapper[4897]: I0228 13:16:47.408684 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 13:16:47 crc kubenswrapper[4897]: I0228 13:16:47.467762 4897 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 13:16:47 crc kubenswrapper[4897]: I0228 13:16:47.468275 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:47 crc kubenswrapper[4897]: I0228 13:16:47.469951 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:47 crc kubenswrapper[4897]: I0228 13:16:47.470024 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:47 crc kubenswrapper[4897]: I0228 13:16:47.470044 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:47 crc kubenswrapper[4897]: I0228 13:16:47.470970 4897 scope.go:117] "RemoveContainer" containerID="d24afa0c2de45f47ef3aec29b6bdf9a5b7f4d09146024b2f74230d912bd1129f" Feb 28 13:16:47 crc kubenswrapper[4897]: E0228 13:16:47.471298 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 13:16:48 crc kubenswrapper[4897]: I0228 13:16:48.406971 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 13:16:49 crc kubenswrapper[4897]: I0228 13:16:49.065572 4897 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 28 13:16:49 crc kubenswrapper[4897]: I0228 13:16:49.084253 4897 reflector.go:368] Caches populated for 
*v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 28 13:16:49 crc kubenswrapper[4897]: I0228 13:16:49.405535 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 13:16:50 crc kubenswrapper[4897]: I0228 13:16:50.409455 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 13:16:50 crc kubenswrapper[4897]: I0228 13:16:50.977460 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 13:16:50 crc kubenswrapper[4897]: I0228 13:16:50.977715 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:50 crc kubenswrapper[4897]: I0228 13:16:50.979248 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:50 crc kubenswrapper[4897]: I0228 13:16:50.979286 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:50 crc kubenswrapper[4897]: I0228 13:16:50.979305 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:50 crc kubenswrapper[4897]: I0228 13:16:50.980087 4897 scope.go:117] "RemoveContainer" containerID="d24afa0c2de45f47ef3aec29b6bdf9a5b7f4d09146024b2f74230d912bd1129f" Feb 28 13:16:50 crc kubenswrapper[4897]: E0228 13:16:50.980399 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.573276 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18986b70fccb584c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:26.391984204 +0000 UTC m=+0.634304901,LastTimestamp:2026-02-28 13:16:26.391984204 +0000 UTC m=+0.634304901,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: I0228 13:16:51.573982 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.579001 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18986b70fff48e59 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: 
NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:26.445016665 +0000 UTC m=+0.687337362,LastTimestamp:2026-02-28 13:16:26.445016665 +0000 UTC m=+0.687337362,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.583148 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18986b70fff4f01f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:26.445041695 +0000 UTC m=+0.687362392,LastTimestamp:2026-02-28 13:16:26.445041695 +0000 UTC m=+0.687362392,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.590190 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18986b70fff533ef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:26.445059055 +0000 UTC m=+0.687379752,LastTimestamp:2026-02-28 13:16:26.445059055 +0000 UTC 
m=+0.687379752,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.594712 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18986b7104abf042 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:26.524143682 +0000 UTC m=+0.766464339,LastTimestamp:2026-02-28 13:16:26.524143682 +0000 UTC m=+0.766464339,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.601161 4897 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18986b70fff48e59\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18986b70fff48e59 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:26.445016665 +0000 UTC m=+0.687337362,LastTimestamp:2026-02-28 13:16:26.558360788 +0000 UTC m=+0.800681435,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc 
kubenswrapper[4897]: E0228 13:16:51.606121 4897 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18986b70fff4f01f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18986b70fff4f01f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:26.445041695 +0000 UTC m=+0.687362392,LastTimestamp:2026-02-28 13:16:26.558381428 +0000 UTC m=+0.800702085,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.610644 4897 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18986b70fff533ef\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18986b70fff533ef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:26.445059055 +0000 UTC m=+0.687379752,LastTimestamp:2026-02-28 13:16:26.558407389 +0000 UTC m=+0.800728046,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.614910 4897 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18986b70fff48e59\" is forbidden: User 
\"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18986b70fff48e59 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:26.445016665 +0000 UTC m=+0.687337362,LastTimestamp:2026-02-28 13:16:26.560131966 +0000 UTC m=+0.802452633,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.619927 4897 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18986b70fff4f01f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18986b70fff4f01f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:26.445041695 +0000 UTC m=+0.687362392,LastTimestamp:2026-02-28 13:16:26.560152336 +0000 UTC m=+0.802473013,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.624604 4897 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18986b70fff533ef\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18986b70fff533ef default 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:26.445059055 +0000 UTC m=+0.687379752,LastTimestamp:2026-02-28 13:16:26.560162746 +0000 UTC m=+0.802483423,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.631287 4897 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18986b70fff48e59\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18986b70fff48e59 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:26.445016665 +0000 UTC m=+0.687337362,LastTimestamp:2026-02-28 13:16:26.560218326 +0000 UTC m=+0.802539013,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.636199 4897 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18986b70fff4f01f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18986b70fff4f01f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:26.445041695 +0000 UTC m=+0.687362392,LastTimestamp:2026-02-28 13:16:26.560240207 +0000 UTC m=+0.802560904,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.642505 4897 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18986b70fff533ef\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18986b70fff533ef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:26.445059055 +0000 UTC m=+0.687379752,LastTimestamp:2026-02-28 13:16:26.560255427 +0000 UTC m=+0.802576114,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.648812 4897 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18986b70fff48e59\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18986b70fff48e59 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status 
is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:26.445016665 +0000 UTC m=+0.687337362,LastTimestamp:2026-02-28 13:16:26.562121955 +0000 UTC m=+0.804442642,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.653644 4897 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18986b70fff4f01f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18986b70fff4f01f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:26.445041695 +0000 UTC m=+0.687362392,LastTimestamp:2026-02-28 13:16:26.562164476 +0000 UTC m=+0.804485173,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.659870 4897 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18986b70fff533ef\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18986b70fff533ef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:26.445059055 +0000 UTC 
m=+0.687379752,LastTimestamp:2026-02-28 13:16:26.562182006 +0000 UTC m=+0.804502703,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.665586 4897 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18986b70fff48e59\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18986b70fff48e59 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:26.445016665 +0000 UTC m=+0.687337362,LastTimestamp:2026-02-28 13:16:26.562998884 +0000 UTC m=+0.805319551,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.669444 4897 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18986b70fff4f01f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18986b70fff4f01f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:26.445041695 +0000 UTC m=+0.687362392,LastTimestamp:2026-02-28 13:16:26.563021064 +0000 UTC m=+0.805341731,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.672979 4897 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18986b70fff533ef\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18986b70fff533ef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:26.445059055 +0000 UTC m=+0.687379752,LastTimestamp:2026-02-28 13:16:26.563032864 +0000 UTC m=+0.805353531,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.677532 4897 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18986b70fff48e59\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18986b70fff48e59 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:26.445016665 +0000 UTC m=+0.687337362,LastTimestamp:2026-02-28 13:16:26.564144455 +0000 UTC m=+0.806465142,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.682801 4897 
event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18986b70fff4f01f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18986b70fff4f01f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:26.445041695 +0000 UTC m=+0.687362392,LastTimestamp:2026-02-28 13:16:26.564182095 +0000 UTC m=+0.806502792,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.686506 4897 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18986b70fff533ef\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18986b70fff533ef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:26.445059055 +0000 UTC m=+0.687379752,LastTimestamp:2026-02-28 13:16:26.564205805 +0000 UTC m=+0.806526502,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.690765 4897 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18986b70fff48e59\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in 
API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18986b70fff48e59 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:26.445016665 +0000 UTC m=+0.687337362,LastTimestamp:2026-02-28 13:16:26.564233276 +0000 UTC m=+0.806553933,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.695161 4897 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18986b70fff4f01f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18986b70fff4f01f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:26.445041695 +0000 UTC m=+0.687362392,LastTimestamp:2026-02-28 13:16:26.564244386 +0000 UTC m=+0.806565043,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.700654 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18986b711e099abc openshift-machine-config-operator 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:26.949712572 +0000 UTC m=+1.192033239,LastTimestamp:2026-02-28 13:16:26.949712572 +0000 UTC m=+1.192033239,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.703650 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18986b711ec8cc41 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:26.962242625 +0000 UTC m=+1.204563292,LastTimestamp:2026-02-28 13:16:26.962242625 +0000 UTC m=+1.204563292,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.707419 4897 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18986b711f4068de openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:26.970081502 +0000 UTC m=+1.212402169,LastTimestamp:2026-02-28 13:16:26.970081502 +0000 UTC m=+1.212402169,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.711917 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18986b71209a4d50 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:26.992749904 +0000 UTC 
m=+1.235070571,LastTimestamp:2026-02-28 13:16:26.992749904 +0000 UTC m=+1.235070571,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.716409 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18986b7121004e94 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:26.9994349 +0000 UTC m=+1.241755567,LastTimestamp:2026-02-28 13:16:26.9994349 +0000 UTC m=+1.241755567,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.720395 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18986b713de602b1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:27.484250801 +0000 UTC m=+1.726571458,LastTimestamp:2026-02-28 13:16:27.484250801 +0000 UTC m=+1.726571458,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.724453 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18986b713dfb5a11 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:27.485649425 +0000 UTC m=+1.727970092,LastTimestamp:2026-02-28 13:16:27.485649425 +0000 UTC m=+1.727970092,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.729457 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18986b713dfbe3c3 
openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:27.485684675 +0000 UTC m=+1.728005332,LastTimestamp:2026-02-28 13:16:27.485684675 +0000 UTC m=+1.728005332,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.733133 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18986b713e27daf4 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:27.488566004 +0000 UTC m=+1.730886661,LastTimestamp:2026-02-28 13:16:27.488566004 +0000 UTC m=+1.730886661,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.738305 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the 
namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18986b713e505426 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:27.49121847 +0000 UTC m=+1.733539127,LastTimestamp:2026-02-28 13:16:27.49121847 +0000 UTC m=+1.733539127,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.744369 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18986b713eb512b7 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:27.497820855 +0000 UTC m=+1.740141512,LastTimestamp:2026-02-28 13:16:27.497820855 +0000 UTC m=+1.740141512,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.748084 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" 
in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18986b713ed14863 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:27.499669603 +0000 UTC m=+1.741990280,LastTimestamp:2026-02-28 13:16:27.499669603 +0000 UTC m=+1.741990280,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.751847 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18986b713ed633d0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:27.499992016 +0000 UTC m=+1.742312693,LastTimestamp:2026-02-28 13:16:27.499992016 +0000 UTC m=+1.742312693,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 
13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.755746 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18986b713ef1ad8e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:27.501792654 +0000 UTC m=+1.744113351,LastTimestamp:2026-02-28 13:16:27.501792654 +0000 UTC m=+1.744113351,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.759207 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18986b713fadef81 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:27.514130305 +0000 UTC m=+1.756450962,LastTimestamp:2026-02-28 13:16:27.514130305 +0000 UTC m=+1.756450962,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.763604 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18986b713fb1b4ab openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:27.514377387 +0000 UTC m=+1.756698044,LastTimestamp:2026-02-28 13:16:27.514377387 +0000 UTC m=+1.756698044,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.766342 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18986b714f81d50b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:27.779675403 +0000 UTC m=+2.021996070,LastTimestamp:2026-02-28 13:16:27.779675403 +0000 UTC 
m=+2.021996070,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.769789 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18986b71502bbbec openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:27.790810092 +0000 UTC m=+2.033130759,LastTimestamp:2026-02-28 13:16:27.790810092 +0000 UTC m=+2.033130759,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.773751 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18986b71503fe4fd openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:27.792131325 +0000 UTC m=+2.034451992,LastTimestamp:2026-02-28 13:16:27.792131325 +0000 UTC m=+2.034451992,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.779831 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18986b715f268eec openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:28.042129132 +0000 UTC m=+2.284449809,LastTimestamp:2026-02-28 13:16:28.042129132 +0000 UTC m=+2.284449809,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.785236 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18986b71600cd836 openshift-kube-controller-manager 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:28.057221174 +0000 UTC m=+2.299541861,LastTimestamp:2026-02-28 13:16:28.057221174 +0000 UTC m=+2.299541861,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.789228 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18986b71602166f8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:28.05856844 +0000 UTC m=+2.300889127,LastTimestamp:2026-02-28 13:16:28.05856844 +0000 UTC m=+2.300889127,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 
13:16:51.793370 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18986b716cd5c0b9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:28.271714489 +0000 UTC m=+2.514035186,LastTimestamp:2026-02-28 13:16:28.271714489 +0000 UTC m=+2.514035186,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.796855 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18986b716e257d57 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:28.293717335 +0000 UTC m=+2.536038022,LastTimestamp:2026-02-28 
13:16:28.293717335 +0000 UTC m=+2.536038022,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.801048 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18986b7179eed330 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:28.491461424 +0000 UTC m=+2.733782121,LastTimestamp:2026-02-28 13:16:28.491461424 +0000 UTC m=+2.733782121,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.805769 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18986b717a1337d8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:28.493846488 +0000 UTC m=+2.736167185,LastTimestamp:2026-02-28 13:16:28.493846488 +0000 UTC m=+2.736167185,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.812042 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18986b717a4a2ee2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:28.497448674 +0000 UTC m=+2.739769371,LastTimestamp:2026-02-28 13:16:28.497448674 +0000 UTC m=+2.739769371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.816644 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18986b717a694790 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:28.499486608 +0000 UTC m=+2.741807295,LastTimestamp:2026-02-28 13:16:28.499486608 +0000 UTC m=+2.741807295,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.821635 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18986b718aaf1729 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:28.772497193 +0000 UTC m=+3.014817850,LastTimestamp:2026-02-28 13:16:28.772497193 +0000 UTC m=+3.014817850,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.826604 4897 event.go:359] "Server 
rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18986b718ac2b73d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:28.773783357 +0000 UTC m=+3.016104014,LastTimestamp:2026-02-28 13:16:28.773783357 +0000 UTC m=+3.016104014,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.830698 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18986b718ac397b3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:28.773840819 +0000 UTC m=+3.016161476,LastTimestamp:2026-02-28 13:16:28.773840819 +0000 UTC m=+3.016161476,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.834226 4897 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18986b718ac58b2a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:28.773968682 +0000 UTC m=+3.016289339,LastTimestamp:2026-02-28 13:16:28.773968682 +0000 UTC m=+3.016289339,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.838341 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18986b718c1c6bb9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:28.796439481 +0000 UTC m=+3.038760148,LastTimestamp:2026-02-28 13:16:28.796439481 +0000 UTC m=+3.038760148,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.842740 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18986b718c2c5f4f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:28.797484879 +0000 UTC m=+3.039805556,LastTimestamp:2026-02-28 13:16:28.797484879 +0000 UTC m=+3.039805556,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.846903 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18986b718c404ef6 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container 
kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:28.798791414 +0000 UTC m=+3.041112081,LastTimestamp:2026-02-28 13:16:28.798791414 +0000 UTC m=+3.041112081,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.852998 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18986b718c656c30 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:28.801223728 +0000 UTC m=+3.043544405,LastTimestamp:2026-02-28 13:16:28.801223728 +0000 UTC m=+3.043544405,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.858156 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18986b718cd76055 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:28.808691797 +0000 UTC m=+3.051012464,LastTimestamp:2026-02-28 13:16:28.808691797 +0000 UTC m=+3.051012464,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.863264 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18986b719e967055 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:29.106425941 +0000 UTC m=+3.348746598,LastTimestamp:2026-02-28 13:16:29.106425941 +0000 UTC m=+3.348746598,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.868529 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot 
create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18986b719eb9d335 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:29.108745013 +0000 UTC m=+3.351065710,LastTimestamp:2026-02-28 13:16:29.108745013 +0000 UTC m=+3.351065710,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.873099 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18986b719f64df36 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:29.119954742 +0000 UTC m=+3.362275399,LastTimestamp:2026-02-28 13:16:29.119954742 +0000 UTC m=+3.362275399,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.877240 4897 event.go:359] 
"Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18986b719f775e6a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:29.121166954 +0000 UTC m=+3.363487611,LastTimestamp:2026-02-28 13:16:29.121166954 +0000 UTC m=+3.363487611,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.881546 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18986b719fccf73a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:29.126776634 +0000 UTC m=+3.369097321,LastTimestamp:2026-02-28 13:16:29.126776634 +0000 UTC 
m=+3.369097321,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.886509 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18986b719fdc6bdb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:29.127789531 +0000 UTC m=+3.370110188,LastTimestamp:2026-02-28 13:16:29.127789531 +0000 UTC m=+3.370110188,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.891265 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18986b71aaba1514 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container 
etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:29.310088468 +0000 UTC m=+3.552409125,LastTimestamp:2026-02-28 13:16:29.310088468 +0000 UTC m=+3.552409125,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.896421 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18986b71aad2f722 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:29.311719202 +0000 UTC m=+3.554039859,LastTimestamp:2026-02-28 13:16:29.311719202 +0000 UTC m=+3.554039859,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.901089 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18986b71aad853b5 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:29.312070581 +0000 UTC m=+3.554391238,LastTimestamp:2026-02-28 13:16:29.312070581 +0000 UTC m=+3.554391238,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.904641 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18986b71ab8b664c openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:29.323806284 +0000 UTC m=+3.566126941,LastTimestamp:2026-02-28 13:16:29.323806284 +0000 UTC m=+3.566126941,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.908189 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18986b71ab8e91ff openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:29.324014079 +0000 UTC m=+3.566334736,LastTimestamp:2026-02-28 13:16:29.324014079 +0000 UTC m=+3.566334736,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.911742 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18986b71ab9d7333 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:29.324989235 +0000 UTC m=+3.567309882,LastTimestamp:2026-02-28 13:16:29.324989235 +0000 UTC m=+3.567309882,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.915530 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18986b71b5eeeb3a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:29.498100538 +0000 UTC m=+3.740421195,LastTimestamp:2026-02-28 13:16:29.498100538 +0000 UTC m=+3.740421195,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.919913 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18986b71b6c84934 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 
13:16:29.512345908 +0000 UTC m=+3.754666565,LastTimestamp:2026-02-28 13:16:29.512345908 +0000 UTC m=+3.754666565,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.924031 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18986b71b70c23aa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:29.516792746 +0000 UTC m=+3.759113403,LastTimestamp:2026-02-28 13:16:29.516792746 +0000 UTC m=+3.759113403,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.927799 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18986b71b71a9760 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:29.517739872 +0000 UTC m=+3.760060529,LastTimestamp:2026-02-28 13:16:29.517739872 +0000 UTC m=+3.760060529,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.932506 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18986b71c21205a6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:29.701727654 +0000 UTC m=+3.944048311,LastTimestamp:2026-02-28 13:16:29.701727654 +0000 UTC m=+3.944048311,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.936008 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18986b71c29ce133 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:29.710827827 +0000 UTC m=+3.953148484,LastTimestamp:2026-02-28 13:16:29.710827827 +0000 UTC m=+3.953148484,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.939912 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18986b71c2f87967 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:29.716830567 +0000 UTC m=+3.959151224,LastTimestamp:2026-02-28 13:16:29.716830567 +0000 UTC m=+3.959151224,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.944239 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18986b71c36691d3 openshift-etcd 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:29.724045779 +0000 UTC m=+3.966366436,LastTimestamp:2026-02-28 13:16:29.724045779 +0000 UTC m=+3.966366436,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.948717 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18986b71f419aecf openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:30.541090511 +0000 UTC m=+4.783411178,LastTimestamp:2026-02-28 13:16:30.541090511 +0000 UTC m=+4.783411178,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.952105 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.18986b7201cbab7a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:30.770858874 +0000 UTC m=+5.013179571,LastTimestamp:2026-02-28 13:16:30.770858874 +0000 UTC m=+5.013179571,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.955383 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18986b72026c81e4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:30.781399524 +0000 UTC m=+5.023720221,LastTimestamp:2026-02-28 13:16:30.781399524 +0000 UTC m=+5.023720221,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.958825 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18986b72028185ff openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:30.782776831 +0000 UTC m=+5.025097538,LastTimestamp:2026-02-28 13:16:30.782776831 +0000 UTC m=+5.025097538,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.963625 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18986b72122f4f05 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:31.045824261 +0000 UTC m=+5.288144948,LastTimestamp:2026-02-28 13:16:31.045824261 +0000 UTC m=+5.288144948,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.967972 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18986b721343414a openshift-etcd 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:31.063908682 +0000 UTC m=+5.306229369,LastTimestamp:2026-02-28 13:16:31.063908682 +0000 UTC m=+5.306229369,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.973292 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18986b72135582b2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:31.065105074 +0000 UTC m=+5.307425771,LastTimestamp:2026-02-28 13:16:31.065105074 +0000 UTC m=+5.307425771,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.977010 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.18986b72236689e7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:31.334656487 +0000 UTC m=+5.576977154,LastTimestamp:2026-02-28 13:16:31.334656487 +0000 UTC m=+5.576977154,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.980610 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18986b722402d8ea openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:31.34490033 +0000 UTC m=+5.587220997,LastTimestamp:2026-02-28 13:16:31.34490033 +0000 UTC m=+5.587220997,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.987005 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18986b7224114033 openshift-etcd 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:31.345844275 +0000 UTC m=+5.588164952,LastTimestamp:2026-02-28 13:16:31.345844275 +0000 UTC m=+5.588164952,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.991601 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18986b72324f1208 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:31.584776712 +0000 UTC m=+5.827097409,LastTimestamp:2026-02-28 13:16:31.584776712 +0000 UTC m=+5.827097409,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:51 crc kubenswrapper[4897]: E0228 13:16:51.997934 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.18986b7232f35fef openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:31.595544559 +0000 UTC m=+5.837865216,LastTimestamp:2026-02-28 13:16:31.595544559 +0000 UTC m=+5.837865216,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:52 crc kubenswrapper[4897]: E0228 13:16:52.004245 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18986b72330843a8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:31.596913576 +0000 UTC m=+5.839234243,LastTimestamp:2026-02-28 13:16:31.596913576 +0000 UTC m=+5.839234243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:52 crc kubenswrapper[4897]: E0228 13:16:52.009123 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API 
group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18986b723e34e5a0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:31.784388 +0000 UTC m=+6.026708667,LastTimestamp:2026-02-28 13:16:31.784388 +0000 UTC m=+6.026708667,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:52 crc kubenswrapper[4897]: E0228 13:16:52.012492 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18986b723eca8007 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:31.794192391 +0000 UTC m=+6.036513058,LastTimestamp:2026-02-28 13:16:31.794192391 +0000 UTC m=+6.036513058,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:52 crc kubenswrapper[4897]: E0228 13:16:52.017206 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 28 13:16:52 crc 
kubenswrapper[4897]: &Event{ObjectMeta:{kube-controller-manager-crc.18986b7285d15fbf openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Feb 28 13:16:52 crc kubenswrapper[4897]: body: Feb 28 13:16:52 crc kubenswrapper[4897]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:32.985825215 +0000 UTC m=+7.228145912,LastTimestamp:2026-02-28 13:16:32.985825215 +0000 UTC m=+7.228145912,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 28 13:16:52 crc kubenswrapper[4897]: > Feb 28 13:16:52 crc kubenswrapper[4897]: E0228 13:16:52.024219 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18986b7285d28cc5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:32.985902277 +0000 UTC 
m=+7.228222974,LastTimestamp:2026-02-28 13:16:32.985902277 +0000 UTC m=+7.228222974,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:52 crc kubenswrapper[4897]: E0228 13:16:52.029045 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 28 13:16:52 crc kubenswrapper[4897]: &Event{ObjectMeta:{kube-apiserver-crc.18986b746374c61e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 500 Feb 28 13:16:52 crc kubenswrapper[4897]: body: [+]ping ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]log ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]etcd ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/generic-apiserver-start-informers ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 28 13:16:52 crc kubenswrapper[4897]: 
[+]poststarthook/priority-and-fairness-filter ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/start-apiextensions-informers ok Feb 28 13:16:52 crc kubenswrapper[4897]: [-]poststarthook/start-apiextensions-controllers failed: reason withheld Feb 28 13:16:52 crc kubenswrapper[4897]: [-]poststarthook/crd-informer-synced failed: reason withheld Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/start-system-namespaces-controller ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 28 13:16:52 crc kubenswrapper[4897]: [-]poststarthook/start-service-ip-repair-controllers failed: reason withheld Feb 28 13:16:52 crc kubenswrapper[4897]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 28 13:16:52 crc kubenswrapper[4897]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Feb 28 13:16:52 crc kubenswrapper[4897]: [-]poststarthook/priority-and-fairness-config-producer failed: reason withheld Feb 28 13:16:52 crc kubenswrapper[4897]: [-]poststarthook/bootstrap-controller failed: reason withheld Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/start-kube-aggregator-informers ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 28 13:16:52 crc kubenswrapper[4897]: 
[-]poststarthook/apiservice-registration-controller failed: reason withheld Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 28 13:16:52 crc kubenswrapper[4897]: [-]poststarthook/apiservice-discovery-controller failed: reason withheld Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]autoregister-completion ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/apiservice-openapi-controller ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 28 13:16:52 crc kubenswrapper[4897]: livez check failed Feb 28 13:16:52 crc kubenswrapper[4897]: Feb 28 13:16:52 crc kubenswrapper[4897]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:40.999265822 +0000 UTC m=+15.241586489,LastTimestamp:2026-02-28 13:16:40.999265822 +0000 UTC m=+15.241586489,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 28 13:16:52 crc kubenswrapper[4897]: > Feb 28 13:16:52 crc kubenswrapper[4897]: E0228 13:16:52.033599 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18986b746375f05e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:40.999342174 +0000 UTC 
m=+15.241662841,LastTimestamp:2026-02-28 13:16:40.999342174 +0000 UTC m=+15.241662841,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:52 crc kubenswrapper[4897]: E0228 13:16:52.038137 4897 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18986b746374c61e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 28 13:16:52 crc kubenswrapper[4897]: &Event{ObjectMeta:{kube-apiserver-crc.18986b746374c61e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 500 Feb 28 13:16:52 crc kubenswrapper[4897]: body: [+]ping ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]log ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]etcd ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/generic-apiserver-start-informers ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 28 13:16:52 crc 
kubenswrapper[4897]: [+]poststarthook/priority-and-fairness-filter ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/start-apiextensions-informers ok Feb 28 13:16:52 crc kubenswrapper[4897]: [-]poststarthook/start-apiextensions-controllers failed: reason withheld Feb 28 13:16:52 crc kubenswrapper[4897]: [-]poststarthook/crd-informer-synced failed: reason withheld Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/start-system-namespaces-controller ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 28 13:16:52 crc kubenswrapper[4897]: [-]poststarthook/start-service-ip-repair-controllers failed: reason withheld Feb 28 13:16:52 crc kubenswrapper[4897]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 28 13:16:52 crc kubenswrapper[4897]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Feb 28 13:16:52 crc kubenswrapper[4897]: [-]poststarthook/priority-and-fairness-config-producer failed: reason withheld Feb 28 13:16:52 crc kubenswrapper[4897]: [-]poststarthook/bootstrap-controller failed: reason withheld Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/start-kube-aggregator-informers ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 28 13:16:52 crc 
kubenswrapper[4897]: [-]poststarthook/apiservice-registration-controller failed: reason withheld Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 28 13:16:52 crc kubenswrapper[4897]: [-]poststarthook/apiservice-discovery-controller failed: reason withheld Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]autoregister-completion ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/apiservice-openapi-controller ok Feb 28 13:16:52 crc kubenswrapper[4897]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 28 13:16:52 crc kubenswrapper[4897]: livez check failed Feb 28 13:16:52 crc kubenswrapper[4897]: Feb 28 13:16:52 crc kubenswrapper[4897]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:40.999265822 +0000 UTC m=+15.241586489,LastTimestamp:2026-02-28 13:16:41.005849821 +0000 UTC m=+15.248170488,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 28 13:16:52 crc kubenswrapper[4897]: > Feb 28 13:16:52 crc kubenswrapper[4897]: E0228 13:16:52.044639 4897 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18986b746375f05e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18986b746375f05e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 
13:16:40.999342174 +0000 UTC m=+15.241662841,LastTimestamp:2026-02-28 13:16:41.005886422 +0000 UTC m=+15.248207089,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:52 crc kubenswrapper[4897]: E0228 13:16:52.050653 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 28 13:16:52 crc kubenswrapper[4897]: &Event{ObjectMeta:{kube-apiserver-crc.18986b7463e290b5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:52738->192.168.126.11:17697: read: connection reset by peer Feb 28 13:16:52 crc kubenswrapper[4897]: body: Feb 28 13:16:52 crc kubenswrapper[4897]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:41.006461109 +0000 UTC m=+15.248781776,LastTimestamp:2026-02-28 13:16:41.006461109 +0000 UTC m=+15.248781776,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 28 13:16:52 crc kubenswrapper[4897]: > Feb 28 13:16:52 crc kubenswrapper[4897]: E0228 13:16:52.056105 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18986b7463e30383 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:52738->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:41.006490499 +0000 UTC m=+15.248811166,LastTimestamp:2026-02-28 13:16:41.006490499 +0000 UTC m=+15.248811166,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:52 crc kubenswrapper[4897]: E0228 13:16:52.060854 4897 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18986b71b71a9760\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18986b71b71a9760 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:29.517739872 +0000 UTC m=+3.760060529,LastTimestamp:2026-02-28 13:16:41.585287475 +0000 UTC m=+15.827608152,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:52 crc kubenswrapper[4897]: 
E0228 13:16:52.065816 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 28 13:16:52 crc kubenswrapper[4897]: &Event{ObjectMeta:{kube-controller-manager-crc.18986b74d9e17e5d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 28 13:16:52 crc kubenswrapper[4897]: body: Feb 28 13:16:52 crc kubenswrapper[4897]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:42.986102365 +0000 UTC m=+17.228423052,LastTimestamp:2026-02-28 13:16:42.986102365 +0000 UTC m=+17.228423052,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 28 13:16:52 crc kubenswrapper[4897]: > Feb 28 13:16:52 crc kubenswrapper[4897]: E0228 13:16:52.070661 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18986b74d9e28b6f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:42.986171247 +0000 UTC m=+17.228491934,LastTimestamp:2026-02-28 13:16:42.986171247 +0000 UTC m=+17.228491934,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:52 crc kubenswrapper[4897]: W0228 13:16:52.144113 4897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 28 13:16:52 crc kubenswrapper[4897]: E0228 13:16:52.144226 4897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 28 13:16:52 crc kubenswrapper[4897]: W0228 13:16:52.217720 4897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 28 13:16:52 crc kubenswrapper[4897]: E0228 13:16:52.217807 4897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"crc\" is forbidden: User 
\"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 28 13:16:52 crc kubenswrapper[4897]: W0228 13:16:52.294763 4897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 28 13:16:52 crc kubenswrapper[4897]: E0228 13:16:52.294828 4897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 28 13:16:52 crc kubenswrapper[4897]: I0228 13:16:52.407728 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 13:16:52 crc kubenswrapper[4897]: W0228 13:16:52.479694 4897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 28 13:16:52 crc kubenswrapper[4897]: E0228 13:16:52.479794 4897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 28 13:16:52 crc kubenswrapper[4897]: I0228 13:16:52.986011 4897 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure 
output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 28 13:16:52 crc kubenswrapper[4897]: I0228 13:16:52.986134 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 28 13:16:52 crc kubenswrapper[4897]: I0228 13:16:52.986211 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 13:16:52 crc kubenswrapper[4897]: I0228 13:16:52.986432 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:52 crc kubenswrapper[4897]: I0228 13:16:52.987969 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:52 crc kubenswrapper[4897]: I0228 13:16:52.988036 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:52 crc kubenswrapper[4897]: I0228 13:16:52.988058 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:52 crc kubenswrapper[4897]: I0228 13:16:52.988893 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"c348af3f14fc9e11ac7c0e7a96fcc3bd7f01afec95bd274df460cff777498422"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Feb 28 13:16:52 crc kubenswrapper[4897]: 
I0228 13:16:52.989172 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://c348af3f14fc9e11ac7c0e7a96fcc3bd7f01afec95bd274df460cff777498422" gracePeriod=30 Feb 28 13:16:52 crc kubenswrapper[4897]: E0228 13:16:52.996480 4897 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18986b74d9e17e5d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 28 13:16:52 crc kubenswrapper[4897]: &Event{ObjectMeta:{kube-controller-manager-crc.18986b74d9e17e5d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 28 13:16:52 crc kubenswrapper[4897]: body: Feb 28 13:16:52 crc kubenswrapper[4897]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:42.986102365 +0000 UTC m=+17.228423052,LastTimestamp:2026-02-28 13:16:52.986095406 +0000 UTC m=+27.228416103,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 28 13:16:52 crc kubenswrapper[4897]: > Feb 28 13:16:53 crc kubenswrapper[4897]: E0228 13:16:53.003637 4897 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18986b74d9e28b6f\" is forbidden: User 
\"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18986b74d9e28b6f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:42.986171247 +0000 UTC m=+17.228491934,LastTimestamp:2026-02-28 13:16:52.986172359 +0000 UTC m=+27.228493046,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:53 crc kubenswrapper[4897]: E0228 13:16:53.011560 4897 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18986b772e1bd506 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:52.989146374 +0000 UTC m=+27.231467071,LastTimestamp:2026-02-28 
13:16:52.989146374 +0000 UTC m=+27.231467071,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:53 crc kubenswrapper[4897]: E0228 13:16:53.119710 4897 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18986b713ed633d0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18986b713ed633d0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:27.499992016 +0000 UTC m=+1.742312693,LastTimestamp:2026-02-28 13:16:53.111641109 +0000 UTC m=+27.353961806,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:53 crc kubenswrapper[4897]: E0228 13:16:53.383828 4897 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18986b714f81d50b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18986b714f81d50b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:27.779675403 +0000 UTC m=+2.021996070,LastTimestamp:2026-02-28 13:16:53.377149969 +0000 UTC m=+27.619470656,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:53 crc kubenswrapper[4897]: E0228 13:16:53.396012 4897 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18986b71502bbbec\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18986b71502bbbec openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:27.790810092 +0000 UTC m=+2.033130759,LastTimestamp:2026-02-28 13:16:53.38969536 +0000 UTC m=+27.632016027,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:16:53 crc kubenswrapper[4897]: I0228 13:16:53.406205 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot 
get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 13:16:53 crc kubenswrapper[4897]: I0228 13:16:53.635838 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 28 13:16:53 crc kubenswrapper[4897]: I0228 13:16:53.636491 4897 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="c348af3f14fc9e11ac7c0e7a96fcc3bd7f01afec95bd274df460cff777498422" exitCode=255 Feb 28 13:16:53 crc kubenswrapper[4897]: I0228 13:16:53.636556 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"c348af3f14fc9e11ac7c0e7a96fcc3bd7f01afec95bd274df460cff777498422"} Feb 28 13:16:53 crc kubenswrapper[4897]: I0228 13:16:53.636598 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"11284409cbb8172e945c8b65d2338be735ee07187addb55fec10469afe5cefae"} Feb 28 13:16:53 crc kubenswrapper[4897]: I0228 13:16:53.636702 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:16:53 crc kubenswrapper[4897]: I0228 13:16:53.637877 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:16:53 crc kubenswrapper[4897]: I0228 13:16:53.637948 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:16:53 crc kubenswrapper[4897]: I0228 13:16:53.637972 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:16:54 crc kubenswrapper[4897]: I0228 13:16:54.397894 4897 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 13:16:54 crc kubenswrapper[4897]: I0228 13:16:54.399683 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 13:16:54 crc kubenswrapper[4897]: I0228 13:16:54.399729 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 13:16:54 crc kubenswrapper[4897]: I0228 13:16:54.399749 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 13:16:54 crc kubenswrapper[4897]: I0228 13:16:54.399784 4897 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 28 13:16:54 crc kubenswrapper[4897]: I0228 13:16:54.406759 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 28 13:16:54 crc kubenswrapper[4897]: E0228 13:16:54.406755 4897 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 28 13:16:54 crc kubenswrapper[4897]: E0228 13:16:54.406952 4897 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 28 13:16:55 crc kubenswrapper[4897]: I0228 13:16:55.405189 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 28 13:16:56 crc kubenswrapper[4897]: I0228 13:16:56.406430 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 28 13:16:56 crc kubenswrapper[4897]: E0228 13:16:56.528572 4897 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 28 13:16:57 crc kubenswrapper[4897]: I0228 13:16:57.407941 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 28 13:16:58 crc kubenswrapper[4897]: I0228 13:16:58.409086 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 28 13:16:59 crc kubenswrapper[4897]: I0228 13:16:59.405508 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 28 13:16:59 crc kubenswrapper[4897]: I0228 13:16:59.985721 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 28 13:16:59 crc kubenswrapper[4897]: I0228 13:16:59.986045 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 13:16:59 crc kubenswrapper[4897]: I0228 13:16:59.987997 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 13:16:59 crc kubenswrapper[4897]: I0228 13:16:59.988073 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 13:16:59 crc kubenswrapper[4897]: I0228 13:16:59.988109 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 13:17:00 crc kubenswrapper[4897]: I0228 13:17:00.407719 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 28 13:17:01 crc kubenswrapper[4897]: I0228 13:17:01.109115 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 28 13:17:01 crc kubenswrapper[4897]: I0228 13:17:01.109450 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 13:17:01 crc kubenswrapper[4897]: I0228 13:17:01.111443 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 13:17:01 crc kubenswrapper[4897]: I0228 13:17:01.111498 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 13:17:01 crc kubenswrapper[4897]: I0228 13:17:01.111516 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 13:17:01 crc kubenswrapper[4897]: I0228 13:17:01.406050 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 28 13:17:01 crc kubenswrapper[4897]: I0228 13:17:01.407149 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 13:17:01 crc kubenswrapper[4897]: I0228 13:17:01.408813 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 13:17:01 crc kubenswrapper[4897]: I0228 13:17:01.408889 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 13:17:01 crc kubenswrapper[4897]: I0228 13:17:01.408903 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 13:17:01 crc kubenswrapper[4897]: I0228 13:17:01.408938 4897 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 28 13:17:01 crc kubenswrapper[4897]: E0228 13:17:01.413394 4897 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 28 13:17:01 crc kubenswrapper[4897]: E0228 13:17:01.414126 4897 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 28 13:17:02 crc kubenswrapper[4897]: I0228 13:17:02.408119 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 28 13:17:02 crc kubenswrapper[4897]: I0228 13:17:02.455871 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 13:17:02 crc kubenswrapper[4897]: I0228 13:17:02.457510 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 13:17:02 crc kubenswrapper[4897]: I0228 13:17:02.457727 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 13:17:02 crc kubenswrapper[4897]: I0228 13:17:02.457864 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 13:17:02 crc kubenswrapper[4897]: I0228 13:17:02.458819 4897 scope.go:117] "RemoveContainer" containerID="d24afa0c2de45f47ef3aec29b6bdf9a5b7f4d09146024b2f74230d912bd1129f"
Feb 28 13:17:02 crc kubenswrapper[4897]: I0228 13:17:02.986558 4897 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 28 13:17:02 crc kubenswrapper[4897]: I0228 13:17:02.986637 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 28 13:17:02 crc kubenswrapper[4897]: E0228 13:17:02.992542 4897 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18986b74d9e17e5d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=<
Feb 28 13:17:02 crc kubenswrapper[4897]: &Event{ObjectMeta:{kube-controller-manager-crc.18986b74d9e17e5d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Feb 28 13:17:02 crc kubenswrapper[4897]: body: 
Feb 28 13:17:02 crc kubenswrapper[4897]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:42.986102365 +0000 UTC m=+17.228423052,LastTimestamp:2026-02-28 13:17:02.986617015 +0000 UTC m=+37.228937702,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Feb 28 13:17:02 crc kubenswrapper[4897]: >
Feb 28 13:17:02 crc kubenswrapper[4897]: E0228 13:17:02.999039 4897 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18986b74d9e28b6f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18986b74d9e28b6f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:42.986171247 +0000 UTC m=+17.228491934,LastTimestamp:2026-02-28 13:17:02.986672596 +0000 UTC m=+37.228993283,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 28 13:17:03 crc kubenswrapper[4897]: I0228 13:17:03.407513 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 28 13:17:03 crc kubenswrapper[4897]: I0228 13:17:03.669143 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log"
Feb 28 13:17:03 crc kubenswrapper[4897]: I0228 13:17:03.670160 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Feb 28 13:17:03 crc kubenswrapper[4897]: I0228 13:17:03.672364 4897 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b71812d8b11ab02e0d82c52f16f115c32a274d0361993246fa047be2ab96d318" exitCode=255
Feb 28 13:17:03 crc kubenswrapper[4897]: I0228 13:17:03.672398 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b71812d8b11ab02e0d82c52f16f115c32a274d0361993246fa047be2ab96d318"}
Feb 28 13:17:03 crc kubenswrapper[4897]: I0228 13:17:03.672459 4897 scope.go:117] "RemoveContainer" containerID="d24afa0c2de45f47ef3aec29b6bdf9a5b7f4d09146024b2f74230d912bd1129f"
Feb 28 13:17:03 crc kubenswrapper[4897]: I0228 13:17:03.672670 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 13:17:03 crc kubenswrapper[4897]: I0228 13:17:03.674030 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 13:17:03 crc kubenswrapper[4897]: I0228 13:17:03.674075 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 13:17:03 crc kubenswrapper[4897]: I0228 13:17:03.674094 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 13:17:03 crc kubenswrapper[4897]: I0228 13:17:03.674962 4897 scope.go:117] "RemoveContainer" containerID="b71812d8b11ab02e0d82c52f16f115c32a274d0361993246fa047be2ab96d318"
Feb 28 13:17:03 crc kubenswrapper[4897]: E0228 13:17:03.675354 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 28 13:17:04 crc kubenswrapper[4897]: I0228 13:17:04.408516 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 28 13:17:04 crc kubenswrapper[4897]: I0228 13:17:04.676801 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log"
Feb 28 13:17:05 crc kubenswrapper[4897]: I0228 13:17:05.406382 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 28 13:17:06 crc kubenswrapper[4897]: I0228 13:17:06.406552 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 28 13:17:06 crc kubenswrapper[4897]: E0228 13:17:06.528797 4897 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 28 13:17:07 crc kubenswrapper[4897]: I0228 13:17:07.407265 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 28 13:17:07 crc kubenswrapper[4897]: I0228 13:17:07.467930 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 28 13:17:07 crc kubenswrapper[4897]: I0228 13:17:07.468397 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 13:17:07 crc kubenswrapper[4897]: I0228 13:17:07.469977 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 13:17:07 crc kubenswrapper[4897]: I0228 13:17:07.470044 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 13:17:07 crc kubenswrapper[4897]: I0228 13:17:07.470067 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 13:17:07 crc kubenswrapper[4897]: I0228 13:17:07.470915 4897 scope.go:117] "RemoveContainer" containerID="b71812d8b11ab02e0d82c52f16f115c32a274d0361993246fa047be2ab96d318"
Feb 28 13:17:07 crc kubenswrapper[4897]: E0228 13:17:07.471228 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 28 13:17:08 crc kubenswrapper[4897]: I0228 13:17:08.407609 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 28 13:17:08 crc kubenswrapper[4897]: I0228 13:17:08.413745 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 13:17:08 crc kubenswrapper[4897]: I0228 13:17:08.415262 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 13:17:08 crc kubenswrapper[4897]: I0228 13:17:08.415342 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 13:17:08 crc kubenswrapper[4897]: I0228 13:17:08.415361 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 13:17:08 crc kubenswrapper[4897]: I0228 13:17:08.415395 4897 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 28 13:17:08 crc kubenswrapper[4897]: E0228 13:17:08.421131 4897 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 28 13:17:08 crc kubenswrapper[4897]: E0228 13:17:08.421642 4897 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 28 13:17:09 crc kubenswrapper[4897]: I0228 13:17:09.401100 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 28 13:17:10 crc kubenswrapper[4897]: I0228 13:17:10.408428 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 28 13:17:10 crc kubenswrapper[4897]: I0228 13:17:10.977520 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 28 13:17:10 crc kubenswrapper[4897]: I0228 13:17:10.978134 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 13:17:10 crc kubenswrapper[4897]: I0228 13:17:10.980112 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 13:17:10 crc kubenswrapper[4897]: I0228 13:17:10.980405 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 13:17:10 crc kubenswrapper[4897]: I0228 13:17:10.980622 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 13:17:10 crc kubenswrapper[4897]: I0228 13:17:10.981800 4897 scope.go:117] "RemoveContainer" containerID="b71812d8b11ab02e0d82c52f16f115c32a274d0361993246fa047be2ab96d318"
Feb 28 13:17:10 crc kubenswrapper[4897]: E0228 13:17:10.982352 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 28 13:17:11 crc kubenswrapper[4897]: W0228 13:17:11.322563 4897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 28 13:17:11 crc kubenswrapper[4897]: E0228 13:17:11.322671 4897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError"
Feb 28 13:17:11 crc kubenswrapper[4897]: W0228 13:17:11.341604 4897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 28 13:17:11 crc kubenswrapper[4897]: E0228 13:17:11.341705 4897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
Feb 28 13:17:11 crc kubenswrapper[4897]: I0228 13:17:11.406962 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 28 13:17:12 crc kubenswrapper[4897]: I0228 13:17:12.408644 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 28 13:17:12 crc kubenswrapper[4897]: I0228 13:17:12.987012 4897 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 28 13:17:12 crc kubenswrapper[4897]: I0228 13:17:12.987109 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 28 13:17:12 crc kubenswrapper[4897]: E0228 13:17:12.994300 4897 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18986b74d9e17e5d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=<
Feb 28 13:17:12 crc kubenswrapper[4897]: &Event{ObjectMeta:{kube-controller-manager-crc.18986b74d9e17e5d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Feb 28 13:17:12 crc kubenswrapper[4897]: body: 
Feb 28 13:17:12 crc kubenswrapper[4897]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:16:42.986102365 +0000 UTC m=+17.228423052,LastTimestamp:2026-02-28 13:17:12.98707654 +0000 UTC m=+47.229397237,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Feb 28 13:17:12 crc kubenswrapper[4897]: >
Feb 28 13:17:13 crc kubenswrapper[4897]: W0228 13:17:13.252460 4897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 28 13:17:13 crc kubenswrapper[4897]: E0228 13:17:13.252538 4897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Feb 28 13:17:13 crc kubenswrapper[4897]: I0228 13:17:13.407128 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 28 13:17:14 crc kubenswrapper[4897]: I0228 13:17:14.314647 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 28 13:17:14 crc kubenswrapper[4897]: I0228 13:17:14.314847 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 13:17:14 crc kubenswrapper[4897]: I0228 13:17:14.316249 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 13:17:14 crc kubenswrapper[4897]: I0228 13:17:14.316299 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 13:17:14 crc kubenswrapper[4897]: I0228 13:17:14.316326 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 13:17:14 crc kubenswrapper[4897]: I0228 13:17:14.404588 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 28 13:17:15 crc kubenswrapper[4897]: I0228 13:17:15.407172 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 28 13:17:15 crc kubenswrapper[4897]: I0228 13:17:15.422396 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 13:17:15 crc kubenswrapper[4897]: I0228 13:17:15.424301 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 13:17:15 crc kubenswrapper[4897]: I0228 13:17:15.424412 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 13:17:15 crc kubenswrapper[4897]: I0228 13:17:15.424438 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 13:17:15 crc kubenswrapper[4897]: I0228 13:17:15.424482 4897 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 28 13:17:15 crc kubenswrapper[4897]: E0228 13:17:15.429708 4897 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 28 13:17:15 crc kubenswrapper[4897]: E0228 13:17:15.429821 4897 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 28 13:17:16 crc kubenswrapper[4897]: I0228 13:17:16.409300 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 28 13:17:16 crc kubenswrapper[4897]: E0228 13:17:16.529556 4897 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 28 13:17:17 crc kubenswrapper[4897]: W0228 13:17:17.257419 4897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 28 13:17:17 crc kubenswrapper[4897]: E0228 13:17:17.257491 4897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Feb 28 13:17:17 crc kubenswrapper[4897]: I0228 13:17:17.406996 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 28 13:17:18 crc kubenswrapper[4897]: I0228 13:17:18.410929 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 28 13:17:19 crc kubenswrapper[4897]: I0228 13:17:19.408050 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 28 13:17:19 crc kubenswrapper[4897]: I0228 13:17:19.994416 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 28 13:17:19 crc kubenswrapper[4897]: I0228 13:17:19.994672 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 13:17:19 crc kubenswrapper[4897]: I0228 13:17:19.996379 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 13:17:19 crc kubenswrapper[4897]: I0228 13:17:19.996441 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 13:17:19 crc kubenswrapper[4897]: I0228 13:17:19.996462 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 13:17:20 crc kubenswrapper[4897]: I0228 13:17:20.000208 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 28 13:17:20 crc kubenswrapper[4897]: I0228 13:17:20.404980 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 28 13:17:20 crc kubenswrapper[4897]: I0228 13:17:20.723938 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 13:17:20 crc kubenswrapper[4897]: I0228 13:17:20.725229 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 13:17:20 crc kubenswrapper[4897]: I0228 13:17:20.725459 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 13:17:20 crc kubenswrapper[4897]: I0228 13:17:20.725606 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 13:17:21 crc kubenswrapper[4897]: I0228 13:17:21.407829 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 28 13:17:22 crc kubenswrapper[4897]: I0228 13:17:22.405249 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 28 13:17:22 crc kubenswrapper[4897]: I0228 13:17:22.430533 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 13:17:22 crc kubenswrapper[4897]: I0228 13:17:22.431724 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 13:17:22 crc kubenswrapper[4897]: I0228 13:17:22.431766 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 13:17:22 crc kubenswrapper[4897]: I0228 13:17:22.431777 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 13:17:22 crc kubenswrapper[4897]: I0228 13:17:22.431801 4897 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 28 13:17:22 crc kubenswrapper[4897]: E0228 13:17:22.434527 4897 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 28 13:17:22 crc kubenswrapper[4897]: E0228 13:17:22.436769 4897 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 28 13:17:22 crc kubenswrapper[4897]: I0228 13:17:22.456220 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 13:17:22 crc kubenswrapper[4897]: I0228 13:17:22.457271 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 13:17:22 crc kubenswrapper[4897]: I0228 13:17:22.457330 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 13:17:22 crc kubenswrapper[4897]: I0228 13:17:22.457343 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 13:17:22 crc kubenswrapper[4897]: I0228 13:17:22.457899 4897 scope.go:117] "RemoveContainer" containerID="b71812d8b11ab02e0d82c52f16f115c32a274d0361993246fa047be2ab96d318"
Feb 28 13:17:22 crc kubenswrapper[4897]: E0228 13:17:22.458088 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 28 13:17:23 crc kubenswrapper[4897]: I0228 13:17:23.406651 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 28 13:17:24 crc kubenswrapper[4897]: I0228 13:17:24.407592 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 28 13:17:25 crc kubenswrapper[4897]: I0228 13:17:25.405458 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 28 13:17:26 crc kubenswrapper[4897]: I0228 13:17:26.407367 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 28 13:17:26 crc kubenswrapper[4897]: E0228 13:17:26.529792 4897 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 28 13:17:27 crc kubenswrapper[4897]: I0228 13:17:27.406299 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 28 13:17:28 crc kubenswrapper[4897]: I0228 13:17:28.408234 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 28 13:17:29 crc kubenswrapper[4897]: I0228 13:17:29.406191 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes"
in API group "storage.k8s.io" at the cluster scope Feb 28 13:17:29 crc kubenswrapper[4897]: I0228 13:17:29.434602 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:17:29 crc kubenswrapper[4897]: I0228 13:17:29.436400 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:17:29 crc kubenswrapper[4897]: I0228 13:17:29.436521 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:17:29 crc kubenswrapper[4897]: I0228 13:17:29.436602 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:17:29 crc kubenswrapper[4897]: I0228 13:17:29.436697 4897 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 13:17:29 crc kubenswrapper[4897]: E0228 13:17:29.446068 4897 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 28 13:17:29 crc kubenswrapper[4897]: E0228 13:17:29.450472 4897 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 28 13:17:30 crc kubenswrapper[4897]: I0228 13:17:30.407441 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 13:17:30 crc kubenswrapper[4897]: I0228 13:17:30.873598 4897 csr.go:261] certificate signing request csr-wch22 is approved, waiting to be issued Feb 28 13:17:30 crc kubenswrapper[4897]: I0228 13:17:30.881706 4897 
csr.go:257] certificate signing request csr-wch22 is issued Feb 28 13:17:30 crc kubenswrapper[4897]: I0228 13:17:30.899769 4897 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 28 13:17:31 crc kubenswrapper[4897]: I0228 13:17:31.239403 4897 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 28 13:17:31 crc kubenswrapper[4897]: I0228 13:17:31.883247 4897 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2027-01-02 17:57:08.975698647 +0000 UTC Feb 28 13:17:31 crc kubenswrapper[4897]: I0228 13:17:31.883374 4897 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7396h39m37.092333051s for next certificate rotation Feb 28 13:17:34 crc kubenswrapper[4897]: I0228 13:17:34.456284 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:17:34 crc kubenswrapper[4897]: I0228 13:17:34.457691 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:17:34 crc kubenswrapper[4897]: I0228 13:17:34.457752 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:17:34 crc kubenswrapper[4897]: I0228 13:17:34.457772 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:17:34 crc kubenswrapper[4897]: I0228 13:17:34.458639 4897 scope.go:117] "RemoveContainer" containerID="b71812d8b11ab02e0d82c52f16f115c32a274d0361993246fa047be2ab96d318" Feb 28 13:17:34 crc kubenswrapper[4897]: I0228 13:17:34.760689 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 28 13:17:34 crc kubenswrapper[4897]: I0228 
13:17:34.763953 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a"} Feb 28 13:17:34 crc kubenswrapper[4897]: I0228 13:17:34.764253 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:17:34 crc kubenswrapper[4897]: I0228 13:17:34.765543 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:17:34 crc kubenswrapper[4897]: I0228 13:17:34.765585 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:17:34 crc kubenswrapper[4897]: I0228 13:17:34.765604 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:17:35 crc kubenswrapper[4897]: I0228 13:17:35.769214 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 28 13:17:35 crc kubenswrapper[4897]: I0228 13:17:35.769887 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 28 13:17:35 crc kubenswrapper[4897]: I0228 13:17:35.772407 4897 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a" exitCode=255 Feb 28 13:17:35 crc kubenswrapper[4897]: I0228 13:17:35.772437 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a"} Feb 28 13:17:35 crc kubenswrapper[4897]: I0228 13:17:35.772467 4897 scope.go:117] "RemoveContainer" containerID="b71812d8b11ab02e0d82c52f16f115c32a274d0361993246fa047be2ab96d318" Feb 28 13:17:35 crc kubenswrapper[4897]: I0228 13:17:35.772580 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:17:35 crc kubenswrapper[4897]: I0228 13:17:35.773632 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:17:35 crc kubenswrapper[4897]: I0228 13:17:35.773680 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:17:35 crc kubenswrapper[4897]: I0228 13:17:35.773699 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:17:35 crc kubenswrapper[4897]: I0228 13:17:35.774519 4897 scope.go:117] "RemoveContainer" containerID="3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a" Feb 28 13:17:35 crc kubenswrapper[4897]: E0228 13:17:35.774790 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 13:17:36 crc kubenswrapper[4897]: I0228 13:17:36.451372 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:17:36 crc kubenswrapper[4897]: I0228 13:17:36.452986 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 
13:17:36 crc kubenswrapper[4897]: I0228 13:17:36.453063 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:17:36 crc kubenswrapper[4897]: I0228 13:17:36.453083 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:17:36 crc kubenswrapper[4897]: I0228 13:17:36.453259 4897 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 13:17:36 crc kubenswrapper[4897]: I0228 13:17:36.462408 4897 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 28 13:17:36 crc kubenswrapper[4897]: I0228 13:17:36.462704 4897 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 28 13:17:36 crc kubenswrapper[4897]: E0228 13:17:36.462737 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 28 13:17:36 crc kubenswrapper[4897]: I0228 13:17:36.468392 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:17:36 crc kubenswrapper[4897]: I0228 13:17:36.468459 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:17:36 crc kubenswrapper[4897]: I0228 13:17:36.468484 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:17:36 crc kubenswrapper[4897]: I0228 13:17:36.468512 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:17:36 crc kubenswrapper[4897]: I0228 13:17:36.468535 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:17:36Z","lastTransitionTime":"2026-02-28T13:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:17:36 crc kubenswrapper[4897]: E0228 13:17:36.482657 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:17:36 crc kubenswrapper[4897]: I0228 13:17:36.495048 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:17:36 crc kubenswrapper[4897]: I0228 13:17:36.495109 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:17:36 crc kubenswrapper[4897]: I0228 13:17:36.495142 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:17:36 crc kubenswrapper[4897]: I0228 13:17:36.495167 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:17:36 crc kubenswrapper[4897]: I0228 13:17:36.495187 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:17:36Z","lastTransitionTime":"2026-02-28T13:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:17:36 crc kubenswrapper[4897]: E0228 13:17:36.514949 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:17:36 crc kubenswrapper[4897]: I0228 13:17:36.521456 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:17:36 crc kubenswrapper[4897]: I0228 13:17:36.521514 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:17:36 crc kubenswrapper[4897]: I0228 13:17:36.521535 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:17:36 crc kubenswrapper[4897]: I0228 13:17:36.521556 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:17:36 crc kubenswrapper[4897]: I0228 13:17:36.521574 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:17:36Z","lastTransitionTime":"2026-02-28T13:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:17:36 crc kubenswrapper[4897]: E0228 13:17:36.530198 4897 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 28 13:17:36 crc kubenswrapper[4897]: E0228 13:17:36.535927 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-m
arketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc
0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\
\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37f
f5df66a2\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:17:36 crc kubenswrapper[4897]: I0228 13:17:36.542375 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:17:36 crc kubenswrapper[4897]: I0228 13:17:36.542432 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:17:36 crc kubenswrapper[4897]: I0228 13:17:36.542447 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:17:36 crc kubenswrapper[4897]: I0228 13:17:36.542471 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:17:36 crc kubenswrapper[4897]: I0228 13:17:36.542489 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:17:36Z","lastTransitionTime":"2026-02-28T13:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:17:36 crc kubenswrapper[4897]: E0228 13:17:36.556791 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:17:36 crc kubenswrapper[4897]: E0228 13:17:36.556953 4897 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 28 13:17:36 crc kubenswrapper[4897]: E0228 13:17:36.556980 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:36 crc kubenswrapper[4897]: E0228 13:17:36.657641 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:36 crc kubenswrapper[4897]: E0228 13:17:36.758772 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:36 crc kubenswrapper[4897]: I0228 13:17:36.777396 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 28 13:17:36 crc kubenswrapper[4897]: E0228 13:17:36.859409 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:36 crc kubenswrapper[4897]: E0228 13:17:36.960031 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:37 crc kubenswrapper[4897]: E0228 13:17:37.060669 4897 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"crc\" not found" Feb 28 13:17:37 crc kubenswrapper[4897]: E0228 13:17:37.160916 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:37 crc kubenswrapper[4897]: E0228 13:17:37.262091 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:37 crc kubenswrapper[4897]: E0228 13:17:37.362902 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:37 crc kubenswrapper[4897]: E0228 13:17:37.463917 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:37 crc kubenswrapper[4897]: I0228 13:17:37.467187 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 13:17:37 crc kubenswrapper[4897]: I0228 13:17:37.467493 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:17:37 crc kubenswrapper[4897]: I0228 13:17:37.468911 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:17:37 crc kubenswrapper[4897]: I0228 13:17:37.468970 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:17:37 crc kubenswrapper[4897]: I0228 13:17:37.469005 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:17:37 crc kubenswrapper[4897]: I0228 13:17:37.470072 4897 scope.go:117] "RemoveContainer" containerID="3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a" Feb 28 13:17:37 crc kubenswrapper[4897]: E0228 13:17:37.470389 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 13:17:37 crc kubenswrapper[4897]: E0228 13:17:37.565284 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:37 crc kubenswrapper[4897]: E0228 13:17:37.666158 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:37 crc kubenswrapper[4897]: E0228 13:17:37.766904 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:37 crc kubenswrapper[4897]: E0228 13:17:37.867818 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:37 crc kubenswrapper[4897]: E0228 13:17:37.968355 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:38 crc kubenswrapper[4897]: E0228 13:17:38.068900 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:38 crc kubenswrapper[4897]: E0228 13:17:38.169584 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:38 crc kubenswrapper[4897]: I0228 13:17:38.217516 4897 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 28 13:17:38 crc kubenswrapper[4897]: E0228 13:17:38.270629 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:38 crc kubenswrapper[4897]: E0228 13:17:38.371280 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:38 crc kubenswrapper[4897]: E0228 13:17:38.471911 4897 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:38 crc kubenswrapper[4897]: E0228 13:17:38.572999 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:38 crc kubenswrapper[4897]: E0228 13:17:38.674108 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:38 crc kubenswrapper[4897]: E0228 13:17:38.775241 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:38 crc kubenswrapper[4897]: E0228 13:17:38.876239 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:38 crc kubenswrapper[4897]: E0228 13:17:38.976396 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:39 crc kubenswrapper[4897]: E0228 13:17:39.077333 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:39 crc kubenswrapper[4897]: E0228 13:17:39.178482 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:39 crc kubenswrapper[4897]: E0228 13:17:39.278695 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:39 crc kubenswrapper[4897]: E0228 13:17:39.378807 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:39 crc kubenswrapper[4897]: E0228 13:17:39.479627 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:39 crc kubenswrapper[4897]: E0228 13:17:39.580349 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:39 crc 
kubenswrapper[4897]: E0228 13:17:39.681381 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:39 crc kubenswrapper[4897]: E0228 13:17:39.782107 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:39 crc kubenswrapper[4897]: E0228 13:17:39.883151 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:39 crc kubenswrapper[4897]: E0228 13:17:39.983379 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:40 crc kubenswrapper[4897]: E0228 13:17:40.084507 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:40 crc kubenswrapper[4897]: E0228 13:17:40.184599 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:40 crc kubenswrapper[4897]: E0228 13:17:40.285244 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:40 crc kubenswrapper[4897]: E0228 13:17:40.386229 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:40 crc kubenswrapper[4897]: E0228 13:17:40.486611 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:40 crc kubenswrapper[4897]: E0228 13:17:40.586734 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:40 crc kubenswrapper[4897]: E0228 13:17:40.687264 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:40 crc kubenswrapper[4897]: E0228 13:17:40.787382 4897 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"crc\" not found" Feb 28 13:17:40 crc kubenswrapper[4897]: E0228 13:17:40.887720 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:40 crc kubenswrapper[4897]: I0228 13:17:40.977828 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 13:17:40 crc kubenswrapper[4897]: I0228 13:17:40.978057 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:17:40 crc kubenswrapper[4897]: I0228 13:17:40.979548 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:17:40 crc kubenswrapper[4897]: I0228 13:17:40.979595 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:17:40 crc kubenswrapper[4897]: I0228 13:17:40.979612 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:17:40 crc kubenswrapper[4897]: I0228 13:17:40.980523 4897 scope.go:117] "RemoveContainer" containerID="3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a" Feb 28 13:17:40 crc kubenswrapper[4897]: E0228 13:17:40.980809 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 13:17:40 crc kubenswrapper[4897]: E0228 13:17:40.988821 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:41 crc kubenswrapper[4897]: E0228 13:17:41.089220 4897 kubelet_node_status.go:503] "Error getting the 
current node from lister" err="node \"crc\" not found" Feb 28 13:17:41 crc kubenswrapper[4897]: E0228 13:17:41.189656 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:41 crc kubenswrapper[4897]: E0228 13:17:41.290283 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:41 crc kubenswrapper[4897]: E0228 13:17:41.390471 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:41 crc kubenswrapper[4897]: E0228 13:17:41.491346 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:41 crc kubenswrapper[4897]: E0228 13:17:41.591613 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:41 crc kubenswrapper[4897]: E0228 13:17:41.691962 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:41 crc kubenswrapper[4897]: E0228 13:17:41.792625 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:41 crc kubenswrapper[4897]: E0228 13:17:41.893772 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:41 crc kubenswrapper[4897]: E0228 13:17:41.994724 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:42 crc kubenswrapper[4897]: E0228 13:17:42.095893 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:42 crc kubenswrapper[4897]: E0228 13:17:42.196449 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:42 crc kubenswrapper[4897]: E0228 13:17:42.297522 4897 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:42 crc kubenswrapper[4897]: E0228 13:17:42.398067 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:42 crc kubenswrapper[4897]: E0228 13:17:42.498562 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:42 crc kubenswrapper[4897]: E0228 13:17:42.599658 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:42 crc kubenswrapper[4897]: E0228 13:17:42.699939 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:42 crc kubenswrapper[4897]: E0228 13:17:42.800810 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:42 crc kubenswrapper[4897]: E0228 13:17:42.900952 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:43 crc kubenswrapper[4897]: E0228 13:17:43.001380 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:43 crc kubenswrapper[4897]: E0228 13:17:43.102517 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:43 crc kubenswrapper[4897]: E0228 13:17:43.202665 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:43 crc kubenswrapper[4897]: E0228 13:17:43.302762 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:43 crc kubenswrapper[4897]: E0228 13:17:43.403467 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:43 crc 
kubenswrapper[4897]: E0228 13:17:43.503630 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:43 crc kubenswrapper[4897]: E0228 13:17:43.604564 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:43 crc kubenswrapper[4897]: E0228 13:17:43.704765 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:43 crc kubenswrapper[4897]: E0228 13:17:43.805498 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:43 crc kubenswrapper[4897]: E0228 13:17:43.906151 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:44 crc kubenswrapper[4897]: E0228 13:17:44.006485 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:44 crc kubenswrapper[4897]: E0228 13:17:44.106903 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:44 crc kubenswrapper[4897]: E0228 13:17:44.208031 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:44 crc kubenswrapper[4897]: E0228 13:17:44.308490 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:44 crc kubenswrapper[4897]: E0228 13:17:44.409412 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:44 crc kubenswrapper[4897]: E0228 13:17:44.510258 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:44 crc kubenswrapper[4897]: E0228 13:17:44.611199 4897 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"crc\" not found" Feb 28 13:17:44 crc kubenswrapper[4897]: E0228 13:17:44.711377 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:44 crc kubenswrapper[4897]: E0228 13:17:44.811897 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:44 crc kubenswrapper[4897]: E0228 13:17:44.912745 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:45 crc kubenswrapper[4897]: E0228 13:17:45.013802 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:45 crc kubenswrapper[4897]: E0228 13:17:45.113940 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:45 crc kubenswrapper[4897]: E0228 13:17:45.214126 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:45 crc kubenswrapper[4897]: E0228 13:17:45.314640 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:45 crc kubenswrapper[4897]: E0228 13:17:45.415681 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:45 crc kubenswrapper[4897]: E0228 13:17:45.516268 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:45 crc kubenswrapper[4897]: E0228 13:17:45.616890 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:45 crc kubenswrapper[4897]: E0228 13:17:45.717464 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:45 crc kubenswrapper[4897]: E0228 13:17:45.817775 4897 kubelet_node_status.go:503] 
"Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:45 crc kubenswrapper[4897]: E0228 13:17:45.918503 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:46 crc kubenswrapper[4897]: E0228 13:17:46.019559 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:46 crc kubenswrapper[4897]: E0228 13:17:46.120186 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:46 crc kubenswrapper[4897]: E0228 13:17:46.220755 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:46 crc kubenswrapper[4897]: E0228 13:17:46.321764 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:46 crc kubenswrapper[4897]: E0228 13:17:46.422499 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:46 crc kubenswrapper[4897]: E0228 13:17:46.523337 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:46 crc kubenswrapper[4897]: E0228 13:17:46.530643 4897 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 28 13:17:46 crc kubenswrapper[4897]: E0228 13:17:46.623835 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:46 crc kubenswrapper[4897]: E0228 13:17:46.724750 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:46 crc kubenswrapper[4897]: E0228 13:17:46.824874 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:46 crc 
kubenswrapper[4897]: E0228 13:17:46.912125 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 28 13:17:46 crc kubenswrapper[4897]: I0228 13:17:46.917727 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:17:46 crc kubenswrapper[4897]: I0228 13:17:46.917797 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:17:46 crc kubenswrapper[4897]: I0228 13:17:46.917820 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:17:46 crc kubenswrapper[4897]: I0228 13:17:46.918219 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:17:46 crc kubenswrapper[4897]: I0228 13:17:46.918270 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:17:46Z","lastTransitionTime":"2026-02-28T13:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:17:46 crc kubenswrapper[4897]: E0228 13:17:46.930067 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:17:46 crc kubenswrapper[4897]: I0228 13:17:46.934627 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:17:46 crc kubenswrapper[4897]: I0228 13:17:46.934670 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:17:46 crc kubenswrapper[4897]: I0228 13:17:46.934680 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:17:46 crc kubenswrapper[4897]: I0228 13:17:46.934697 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:17:46 crc kubenswrapper[4897]: I0228 13:17:46.934711 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:17:46Z","lastTransitionTime":"2026-02-28T13:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:17:46 crc kubenswrapper[4897]: E0228 13:17:46.950489 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:17:46 crc kubenswrapper[4897]: I0228 13:17:46.956013 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:17:46 crc kubenswrapper[4897]: I0228 13:17:46.956098 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:17:46 crc kubenswrapper[4897]: I0228 13:17:46.956119 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:17:46 crc kubenswrapper[4897]: I0228 13:17:46.956174 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:17:46 crc kubenswrapper[4897]: I0228 13:17:46.956194 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:17:46Z","lastTransitionTime":"2026-02-28T13:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:17:46 crc kubenswrapper[4897]: E0228 13:17:46.972714 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:17:46 crc kubenswrapper[4897]: I0228 13:17:46.978575 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:17:46 crc kubenswrapper[4897]: I0228 13:17:46.978660 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:17:46 crc kubenswrapper[4897]: I0228 13:17:46.978677 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:17:46 crc kubenswrapper[4897]: I0228 13:17:46.978701 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:17:46 crc kubenswrapper[4897]: I0228 13:17:46.978763 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:17:46Z","lastTransitionTime":"2026-02-28T13:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:17:46 crc kubenswrapper[4897]: E0228 13:17:46.998207 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:17:46 crc kubenswrapper[4897]: E0228 13:17:46.998474 4897 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 28 13:17:46 crc kubenswrapper[4897]: E0228 13:17:46.998521 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:47 crc kubenswrapper[4897]: E0228 13:17:47.098929 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:47 crc kubenswrapper[4897]: E0228 13:17:47.199731 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:47 crc kubenswrapper[4897]: E0228 13:17:47.300182 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:47 crc kubenswrapper[4897]: E0228 13:17:47.401356 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:47 crc kubenswrapper[4897]: E0228 13:17:47.501634 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:47 crc kubenswrapper[4897]: E0228 13:17:47.601885 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:47 crc kubenswrapper[4897]: E0228 13:17:47.702333 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:47 crc kubenswrapper[4897]: E0228 13:17:47.802439 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:47 crc kubenswrapper[4897]: E0228 13:17:47.903378 4897 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:48 crc kubenswrapper[4897]: E0228 13:17:48.003617 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:48 crc kubenswrapper[4897]: E0228 13:17:48.103797 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:48 crc kubenswrapper[4897]: E0228 13:17:48.204584 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:48 crc kubenswrapper[4897]: E0228 13:17:48.305212 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:48 crc kubenswrapper[4897]: E0228 13:17:48.405680 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:48 crc kubenswrapper[4897]: E0228 13:17:48.506716 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:48 crc kubenswrapper[4897]: E0228 13:17:48.607530 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:48 crc kubenswrapper[4897]: E0228 13:17:48.708022 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:48 crc kubenswrapper[4897]: E0228 13:17:48.808586 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:48 crc kubenswrapper[4897]: E0228 13:17:48.909069 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:49 crc kubenswrapper[4897]: E0228 13:17:49.009386 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:49 crc 
kubenswrapper[4897]: E0228 13:17:49.110395 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:49 crc kubenswrapper[4897]: E0228 13:17:49.210913 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:49 crc kubenswrapper[4897]: E0228 13:17:49.311233 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:49 crc kubenswrapper[4897]: E0228 13:17:49.411982 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:49 crc kubenswrapper[4897]: E0228 13:17:49.512671 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:49 crc kubenswrapper[4897]: E0228 13:17:49.613566 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:49 crc kubenswrapper[4897]: E0228 13:17:49.714705 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:49 crc kubenswrapper[4897]: E0228 13:17:49.815376 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:49 crc kubenswrapper[4897]: E0228 13:17:49.916197 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:50 crc kubenswrapper[4897]: E0228 13:17:50.016493 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:50 crc kubenswrapper[4897]: E0228 13:17:50.117564 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:50 crc kubenswrapper[4897]: E0228 13:17:50.218076 4897 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"crc\" not found" Feb 28 13:17:50 crc kubenswrapper[4897]: E0228 13:17:50.318365 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:50 crc kubenswrapper[4897]: E0228 13:17:50.419422 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:50 crc kubenswrapper[4897]: E0228 13:17:50.520188 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:50 crc kubenswrapper[4897]: E0228 13:17:50.621121 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:50 crc kubenswrapper[4897]: E0228 13:17:50.722337 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:50 crc kubenswrapper[4897]: E0228 13:17:50.822784 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:50 crc kubenswrapper[4897]: E0228 13:17:50.923213 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:51 crc kubenswrapper[4897]: E0228 13:17:51.023786 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:51 crc kubenswrapper[4897]: E0228 13:17:51.124545 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:51 crc kubenswrapper[4897]: E0228 13:17:51.225412 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:51 crc kubenswrapper[4897]: E0228 13:17:51.882964 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:51 crc kubenswrapper[4897]: E0228 13:17:51.983108 4897 kubelet_node_status.go:503] 
"Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:52 crc kubenswrapper[4897]: E0228 13:17:52.084266 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:52 crc kubenswrapper[4897]: E0228 13:17:52.184705 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:52 crc kubenswrapper[4897]: E0228 13:17:52.285467 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:52 crc kubenswrapper[4897]: E0228 13:17:52.385635 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:52 crc kubenswrapper[4897]: E0228 13:17:52.486808 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:52 crc kubenswrapper[4897]: E0228 13:17:52.587414 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:52 crc kubenswrapper[4897]: E0228 13:17:52.687764 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:52 crc kubenswrapper[4897]: E0228 13:17:52.788176 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:52 crc kubenswrapper[4897]: E0228 13:17:52.888352 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:52 crc kubenswrapper[4897]: E0228 13:17:52.989372 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:53 crc kubenswrapper[4897]: E0228 13:17:53.090122 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:53 crc kubenswrapper[4897]: E0228 
13:17:53.190899 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:53 crc kubenswrapper[4897]: E0228 13:17:53.291119 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:53 crc kubenswrapper[4897]: E0228 13:17:53.391954 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:53 crc kubenswrapper[4897]: E0228 13:17:53.492538 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:53 crc kubenswrapper[4897]: E0228 13:17:53.592931 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:53 crc kubenswrapper[4897]: E0228 13:17:53.694095 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:53 crc kubenswrapper[4897]: E0228 13:17:53.794290 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:53 crc kubenswrapper[4897]: E0228 13:17:53.894777 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:53 crc kubenswrapper[4897]: E0228 13:17:53.995719 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:54 crc kubenswrapper[4897]: E0228 13:17:54.096422 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:54 crc kubenswrapper[4897]: E0228 13:17:54.197524 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:54 crc kubenswrapper[4897]: E0228 13:17:54.297892 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 
13:17:54 crc kubenswrapper[4897]: E0228 13:17:54.398840 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:54 crc kubenswrapper[4897]: I0228 13:17:54.456068 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:17:54 crc kubenswrapper[4897]: I0228 13:17:54.457716 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:17:54 crc kubenswrapper[4897]: I0228 13:17:54.457813 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:17:54 crc kubenswrapper[4897]: I0228 13:17:54.457830 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:17:54 crc kubenswrapper[4897]: I0228 13:17:54.459192 4897 scope.go:117] "RemoveContainer" containerID="3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a" Feb 28 13:17:54 crc kubenswrapper[4897]: E0228 13:17:54.459472 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 13:17:54 crc kubenswrapper[4897]: E0228 13:17:54.499000 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:54 crc kubenswrapper[4897]: E0228 13:17:54.599982 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:54 crc kubenswrapper[4897]: E0228 13:17:54.701137 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" 
Feb 28 13:17:54 crc kubenswrapper[4897]: E0228 13:17:54.802121 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:54 crc kubenswrapper[4897]: E0228 13:17:54.902264 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:55 crc kubenswrapper[4897]: E0228 13:17:55.002481 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:55 crc kubenswrapper[4897]: E0228 13:17:55.102615 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:55 crc kubenswrapper[4897]: E0228 13:17:55.203151 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:55 crc kubenswrapper[4897]: E0228 13:17:55.304501 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:55 crc kubenswrapper[4897]: E0228 13:17:55.405417 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:55 crc kubenswrapper[4897]: E0228 13:17:55.505815 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:55 crc kubenswrapper[4897]: E0228 13:17:55.606760 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:55 crc kubenswrapper[4897]: I0228 13:17:55.619970 4897 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 28 13:17:55 crc kubenswrapper[4897]: E0228 13:17:55.707400 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:55 crc kubenswrapper[4897]: E0228 13:17:55.808054 4897 kubelet_node_status.go:503] "Error getting the current node 
from lister" err="node \"crc\" not found" Feb 28 13:17:55 crc kubenswrapper[4897]: E0228 13:17:55.908704 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:56 crc kubenswrapper[4897]: E0228 13:17:56.009778 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:56 crc kubenswrapper[4897]: E0228 13:17:56.109935 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:56 crc kubenswrapper[4897]: E0228 13:17:56.210060 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:56 crc kubenswrapper[4897]: E0228 13:17:56.310941 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:56 crc kubenswrapper[4897]: E0228 13:17:56.411457 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:56 crc kubenswrapper[4897]: I0228 13:17:56.455868 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 13:17:56 crc kubenswrapper[4897]: I0228 13:17:56.457422 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:17:56 crc kubenswrapper[4897]: I0228 13:17:56.457484 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:17:56 crc kubenswrapper[4897]: I0228 13:17:56.457503 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:17:56 crc kubenswrapper[4897]: E0228 13:17:56.511847 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:56 crc kubenswrapper[4897]: E0228 13:17:56.531492 4897 
eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 28 13:17:56 crc kubenswrapper[4897]: E0228 13:17:56.612154 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:56 crc kubenswrapper[4897]: E0228 13:17:56.712751 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:56 crc kubenswrapper[4897]: E0228 13:17:56.813103 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:56 crc kubenswrapper[4897]: E0228 13:17:56.913822 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:57 crc kubenswrapper[4897]: E0228 13:17:57.013965 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:57 crc kubenswrapper[4897]: E0228 13:17:57.085856 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 28 13:17:57 crc kubenswrapper[4897]: I0228 13:17:57.090783 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:17:57 crc kubenswrapper[4897]: I0228 13:17:57.090838 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:17:57 crc kubenswrapper[4897]: I0228 13:17:57.090858 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:17:57 crc kubenswrapper[4897]: I0228 13:17:57.090880 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:17:57 crc kubenswrapper[4897]: I0228 13:17:57.090897 4897 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:17:57Z","lastTransitionTime":"2026-02-28T13:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:17:57 crc kubenswrapper[4897]: E0228 13:17:57.106624 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:17:57 crc kubenswrapper[4897]: I0228 13:17:57.112543 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:17:57 crc kubenswrapper[4897]: I0228 13:17:57.112596 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:17:57 crc kubenswrapper[4897]: I0228 13:17:57.112613 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:17:57 crc kubenswrapper[4897]: I0228 13:17:57.112634 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:17:57 crc kubenswrapper[4897]: I0228 13:17:57.112651 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:17:57Z","lastTransitionTime":"2026-02-28T13:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:17:57 crc kubenswrapper[4897]: E0228 13:17:57.129011 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:17:57 crc kubenswrapper[4897]: I0228 13:17:57.134917 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:17:57 crc kubenswrapper[4897]: I0228 13:17:57.134958 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:17:57 crc kubenswrapper[4897]: I0228 13:17:57.134970 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:17:57 crc kubenswrapper[4897]: I0228 13:17:57.134986 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:17:57 crc kubenswrapper[4897]: I0228 13:17:57.134998 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:17:57Z","lastTransitionTime":"2026-02-28T13:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:17:57 crc kubenswrapper[4897]: E0228 13:17:57.150366 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:17:57 crc kubenswrapper[4897]: I0228 13:17:57.155427 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:17:57 crc kubenswrapper[4897]: I0228 13:17:57.155479 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:17:57 crc kubenswrapper[4897]: I0228 13:17:57.155502 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:17:57 crc kubenswrapper[4897]: I0228 13:17:57.155530 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:17:57 crc kubenswrapper[4897]: I0228 13:17:57.155554 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:17:57Z","lastTransitionTime":"2026-02-28T13:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:17:57 crc kubenswrapper[4897]: E0228 13:17:57.170175 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:17:57 crc kubenswrapper[4897]: E0228 13:17:57.170483 4897 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 28 13:17:57 crc kubenswrapper[4897]: E0228 13:17:57.170535 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:57 crc kubenswrapper[4897]: E0228 13:17:57.271193 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:57 crc kubenswrapper[4897]: E0228 13:17:57.372391 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:57 crc kubenswrapper[4897]: E0228 13:17:57.472814 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:57 crc kubenswrapper[4897]: E0228 13:17:57.572925 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:57 crc kubenswrapper[4897]: E0228 13:17:57.673979 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:57 crc kubenswrapper[4897]: E0228 13:17:57.774541 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:57 crc kubenswrapper[4897]: E0228 13:17:57.874878 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:57 crc kubenswrapper[4897]: E0228 13:17:57.975379 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:58 crc kubenswrapper[4897]: E0228 13:17:58.076415 4897 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:58 crc kubenswrapper[4897]: E0228 13:17:58.177511 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:58 crc kubenswrapper[4897]: E0228 13:17:58.278195 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:58 crc kubenswrapper[4897]: E0228 13:17:58.379131 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:58 crc kubenswrapper[4897]: E0228 13:17:58.480046 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:58 crc kubenswrapper[4897]: E0228 13:17:58.580604 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:58 crc kubenswrapper[4897]: E0228 13:17:58.681716 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:58 crc kubenswrapper[4897]: E0228 13:17:58.782025 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:58 crc kubenswrapper[4897]: E0228 13:17:58.882716 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:58 crc kubenswrapper[4897]: E0228 13:17:58.983240 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:59 crc kubenswrapper[4897]: E0228 13:17:59.084423 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:59 crc kubenswrapper[4897]: E0228 13:17:59.184866 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:59 crc 
kubenswrapper[4897]: E0228 13:17:59.285540 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:59 crc kubenswrapper[4897]: E0228 13:17:59.386657 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:59 crc kubenswrapper[4897]: E0228 13:17:59.486867 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:59 crc kubenswrapper[4897]: E0228 13:17:59.586975 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:59 crc kubenswrapper[4897]: E0228 13:17:59.687582 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:59 crc kubenswrapper[4897]: E0228 13:17:59.788070 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:59 crc kubenswrapper[4897]: E0228 13:17:59.888621 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:17:59 crc kubenswrapper[4897]: E0228 13:17:59.988994 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:18:00 crc kubenswrapper[4897]: E0228 13:18:00.089445 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:18:00 crc kubenswrapper[4897]: E0228 13:18:00.189728 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:18:00 crc kubenswrapper[4897]: E0228 13:18:00.290789 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:18:00 crc kubenswrapper[4897]: E0228 13:18:00.391115 4897 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"crc\" not found" Feb 28 13:18:00 crc kubenswrapper[4897]: E0228 13:18:00.491432 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:18:00 crc kubenswrapper[4897]: E0228 13:18:00.592488 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:18:00 crc kubenswrapper[4897]: E0228 13:18:00.693273 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:18:00 crc kubenswrapper[4897]: E0228 13:18:00.794242 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:18:00 crc kubenswrapper[4897]: E0228 13:18:00.894636 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:18:00 crc kubenswrapper[4897]: E0228 13:18:00.995528 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:18:01 crc kubenswrapper[4897]: E0228 13:18:01.096333 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:18:01 crc kubenswrapper[4897]: E0228 13:18:01.196583 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:18:01 crc kubenswrapper[4897]: E0228 13:18:01.297537 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:18:01 crc kubenswrapper[4897]: E0228 13:18:01.397893 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:18:01 crc kubenswrapper[4897]: E0228 13:18:01.498998 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:18:01 crc kubenswrapper[4897]: E0228 13:18:01.599877 4897 kubelet_node_status.go:503] 
"Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:18:01 crc kubenswrapper[4897]: E0228 13:18:01.700687 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:18:01 crc kubenswrapper[4897]: E0228 13:18:01.800848 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:18:01 crc kubenswrapper[4897]: E0228 13:18:01.901109 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:18:02 crc kubenswrapper[4897]: E0228 13:18:02.001713 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:18:02 crc kubenswrapper[4897]: E0228 13:18:02.102321 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:18:02 crc kubenswrapper[4897]: E0228 13:18:02.202511 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:18:02 crc kubenswrapper[4897]: E0228 13:18:02.302742 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:18:02 crc kubenswrapper[4897]: E0228 13:18:02.403220 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:18:02 crc kubenswrapper[4897]: E0228 13:18:02.504141 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.567508 4897 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.606788 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 
13:18:02.606838 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.606857 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.606881 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.606897 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:02Z","lastTransitionTime":"2026-02-28T13:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.709139 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.709194 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.709211 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.709235 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.709255 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:02Z","lastTransitionTime":"2026-02-28T13:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.811370 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.811429 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.811450 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.811477 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.811498 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:02Z","lastTransitionTime":"2026-02-28T13:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.877239 4897 apiserver.go:52] "Watching apiserver" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.884427 4897 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.885011 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-8n99q","openshift-multus/multus-additional-cni-plugins-zj7fc","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94","openshift-dns/node-resolver-kb42x","openshift-multus/multus-k4m7f","openshift-network-node-identity/network-node-identity-vrzqb","openshift-ovn-kubernetes/ovnkube-node-rjlcm","openshift-multus/network-metrics-daemon-5tms6","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-target-xd92c","openshift-machine-config-operator/machine-config-daemon-brq22","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.885437 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:18:02 crc kubenswrapper[4897]: E0228 13:18:02.885522 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.885649 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.885727 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.885741 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.885769 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.886476 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:02 crc kubenswrapper[4897]: E0228 13:18:02.886519 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:18:02 crc kubenswrapper[4897]: E0228 13:18:02.885848 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.886636 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-k4m7f" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.886991 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.888725 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-8n99q" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.888774 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.888978 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.889104 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.889136 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.889154 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.889184 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.889169 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.891038 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.891090 4897 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.891141 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.891248 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.892369 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.892552 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.893195 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:18:02 crc kubenswrapper[4897]: E0228 13:18:02.893280 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.893807 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.894419 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.896371 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-kb42x" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.896558 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.896398 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.897055 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.897065 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-brq22" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.898758 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.899076 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.899133 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.899436 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.899528 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.902918 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 
13:18:02.902986 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.903131 4897 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.903238 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.903385 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.903400 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.903603 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.903606 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.903702 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.903895 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.903905 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.906575 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 28 13:18:02 crc kubenswrapper[4897]: 
I0228 13:18:02.910156 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.910986 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.911117 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.911443 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.912990 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.917585 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.917640 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.917658 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.917682 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.917700 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:02Z","lastTransitionTime":"2026-02-28T13:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.919045 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.931504 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.931566 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.931602 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.931579 4897 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-image-registry/node-ca-8n99q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844e4a2-e296-46c1-b047-ace0be3d95bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6plnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8n99q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.931634 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.931677 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.931707 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.931736 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.931780 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.931792 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.931811 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.931845 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.931878 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.931910 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.931941 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.931970 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" 
(UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.932003 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.932032 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.932063 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.932093 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.932125 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.932154 4897 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.932184 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.932129 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.932217 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.932252 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.932285 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.932344 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.932376 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.932408 4897 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.932439 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.932470 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.932492 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.932502 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.932537 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.932569 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.932600 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.932633 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.932662 4897 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.932735 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.932768 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.932822 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.932857 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.932890 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 28 13:18:02 
crc kubenswrapper[4897]: I0228 13:18:02.934128 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.934193 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.934224 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.934254 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.934288 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.934344 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.934383 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.934421 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.934460 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.932768 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.933027 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.933205 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.933493 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.933624 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.933686 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.933940 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.934014 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.934414 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.934464 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.934822 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.935002 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.936023 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.936066 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.936097 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.936128 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.936158 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.936191 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.936224 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.936260 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.936298 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.936356 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.936388 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.936419 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.936449 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.936491 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.936522 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.936557 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.936588 4897 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.936620 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.936651 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.936684 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.936714 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.936746 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod 
\"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.936777 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.936809 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.936841 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.936872 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.936904 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 28 13:18:02 crc kubenswrapper[4897]: 
I0228 13:18:02.936937 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.936988 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.937022 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.937053 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.937087 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.937119 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.937153 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.937188 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.937220 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.937249 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.937282 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 
13:18:02.937338 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.937369 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.937401 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.937433 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.937466 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.937498 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: 
\"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.937530 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.937563 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.937597 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.937635 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.937668 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.937699 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.937732 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.937765 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.937794 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.937826 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.937858 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.937894 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.937925 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.937956 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.937988 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.938022 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: 
\"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.938054 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.938089 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.938123 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.938159 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.938190 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.938225 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.938256 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.938288 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.938346 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.938384 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.938416 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 
13:18:02.938447 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.938478 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.938511 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.938542 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.938588 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.938623 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.938657 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.938692 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.938731 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.938764 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.938796 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.938830 4897 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.938864 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.938897 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.938932 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.938968 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.939004 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.939038 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.939070 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.939103 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.939137 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.939171 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.939202 4897 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.939235 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.939270 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.939303 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.939360 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.939499 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.939552 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.939588 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.939624 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.939666 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.939703 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.939737 4897 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.939771 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.939809 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.939845 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.939881 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.939915 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod 
\"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.939950 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.939987 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.940024 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.940057 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.940092 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.940135 4897 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.940168 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.940203 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.940239 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.940277 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.940339 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" 
(UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.940375 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.940411 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.940448 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.940482 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.940517 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.940551 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.940587 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.940622 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.940660 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.940697 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.940732 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: 
\"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.940766 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.940801 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.940836 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.940870 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.940905 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.940940 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.941002 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.941050 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.935019 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.941860 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.941935 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.935020 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.934945 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.935121 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.935157 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.935217 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.935462 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.935583 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.935106 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.935744 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.935851 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.942011 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.936028 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.936057 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.940936 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.941034 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.941489 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.941508 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.941654 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.941665 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.942465 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.942462 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.942493 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.942680 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.942722 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.942897 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.943014 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.943080 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.943111 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.943366 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.943412 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.943540 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.943426 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.943450 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.943684 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.943961 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.943974 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.944524 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.946505 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.944918 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.945984 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.946391 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.946394 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.943058 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.946799 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.946862 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.946916 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.946979 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.947027 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.947037 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.947102 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.947178 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2vkx\" (UniqueName: \"kubernetes.io/projected/6b8a404d-b143-4bf3-b590-c1b482f38f6f-kube-api-access-m2vkx\") pod \"multus-additional-cni-plugins-zj7fc\" (UID: \"6b8a404d-b143-4bf3-b590-c1b482f38f6f\") " pod="openshift-multus/multus-additional-cni-plugins-zj7fc" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.947446 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.947509 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-run-ovn\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.947551 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6b8a404d-b143-4bf3-b590-c1b482f38f6f-cni-binary-copy\") pod \"multus-additional-cni-plugins-zj7fc\" (UID: \"6b8a404d-b143-4bf3-b590-c1b482f38f6f\") " pod="openshift-multus/multus-additional-cni-plugins-zj7fc" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.947587 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs4zg\" (UniqueName: 
\"kubernetes.io/projected/3ce402ca-1bea-4568-85cd-fb4a726f3c92-kube-api-access-fs4zg\") pod \"node-resolver-kb42x\" (UID: \"3ce402ca-1bea-4568-85cd-fb4a726f3c92\") " pod="openshift-dns/node-resolver-kb42x" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.947620 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mvjv\" (UniqueName: \"kubernetes.io/projected/cd164967-b99b-47d0-a691-7d8118fa81ce-kube-api-access-7mvjv\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.947650 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-systemd-units\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.947682 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-node-log\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.947677 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb42x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ce402ca-1bea-4568-85cd-fb4a726f3c92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb42x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.947712 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-log-socket\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.947750 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod 
\"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.947788 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.947826 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.947863 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0e63af1c-1b83-44b6-9872-2dfefa37d433-ovnkube-config\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.947874 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.947894 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7844e4a2-e296-46c1-b047-ace0be3d95bb-serviceca\") pod \"node-ca-8n99q\" (UID: \"7844e4a2-e296-46c1-b047-ace0be3d95bb\") " pod="openshift-image-registry/node-ca-8n99q" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.947929 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.947961 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/cd164967-b99b-47d0-a691-7d8118fa81ce-hostroot\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.947993 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-run-openvswitch\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.948073 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6b8a404d-b143-4bf3-b590-c1b482f38f6f-cnibin\") pod \"multus-additional-cni-plugins-zj7fc\" (UID: \"6b8a404d-b143-4bf3-b590-c1b482f38f6f\") " 
pod="openshift-multus/multus-additional-cni-plugins-zj7fc" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.948155 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.948191 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cd164967-b99b-47d0-a691-7d8118fa81ce-cni-binary-copy\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.948224 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd164967-b99b-47d0-a691-7d8118fa81ce-host-run-netns\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.948299 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/cd164967-b99b-47d0-a691-7d8118fa81ce-host-run-k8s-cni-cncf-io\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.948359 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/cd164967-b99b-47d0-a691-7d8118fa81ce-host-var-lib-kubelet\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " 
pod="openshift-multus/multus-k4m7f" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.948393 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0e63af1c-1b83-44b6-9872-2dfefa37d433-ovnkube-script-lib\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.948430 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6plnv\" (UniqueName: \"kubernetes.io/projected/7844e4a2-e296-46c1-b047-ace0be3d95bb-kube-api-access-6plnv\") pod \"node-ca-8n99q\" (UID: \"7844e4a2-e296-46c1-b047-ace0be3d95bb\") " pod="openshift-image-registry/node-ca-8n99q" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.948448 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.948624 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.949574 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.949604 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.949635 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: E0228 13:18:02.949886 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:18:03.449863982 +0000 UTC m=+97.692184769 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.950218 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.950486 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.950756 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.950929 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.951790 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.951746 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.952384 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.952568 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.953071 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.953091 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.953884 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.953915 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.954026 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.954059 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.954142 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.954350 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.954578 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.954706 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.955547 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.955455 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.956187 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.956191 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.956715 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.956994 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.957023 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.957023 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.957046 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.957269 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.957444 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.957612 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.957802 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.957836 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.958012 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.958032 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.958500 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.958530 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.958956 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.959174 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.959227 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.959469 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.959525 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.959884 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.960714 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.961189 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.961370 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.961444 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.961738 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.962095 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.962152 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.962227 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.962304 4897 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.962499 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). 
InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.962546 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.960642 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.962773 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.962622 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.963509 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.963923 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.964041 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.964875 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.965392 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.965430 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.965622 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.966119 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.966295 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.966409 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.966643 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.966811 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.967465 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a273d93c-239a-444c-83cf-2c4ce34fa47b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bts94\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.968211 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.968845 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.968955 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.969119 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.969515 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.948462 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.969569 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.970246 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.970405 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljj4q\" (UniqueName: \"kubernetes.io/projected/a273d93c-239a-444c-83cf-2c4ce34fa47b-kube-api-access-ljj4q\") pod \"ovnkube-control-plane-749d76644c-bts94\" (UID: \"a273d93c-239a-444c-83cf-2c4ce34fa47b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.970442 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd164967-b99b-47d0-a691-7d8118fa81ce-etc-kubernetes\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.970468 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-host-run-ovn-kubernetes\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.970495 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6c4091e4-3a55-4913-81f3-026a1a97c57c-proxy-tls\") pod \"machine-config-daemon-brq22\" (UID: \"6c4091e4-3a55-4913-81f3-026a1a97c57c\") " pod="openshift-machine-config-operator/machine-config-daemon-brq22" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.970523 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8b95b3e0-28e1-4b26-86a3-bd61c5528b3e-metrics-certs\") pod \"network-metrics-daemon-5tms6\" (UID: \"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\") " pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.970548 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gz5x\" (UniqueName: \"kubernetes.io/projected/8b95b3e0-28e1-4b26-86a3-bd61c5528b3e-kube-api-access-2gz5x\") pod \"network-metrics-daemon-5tms6\" (UID: \"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\") " pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.970574 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/cd164967-b99b-47d0-a691-7d8118fa81ce-host-run-multus-certs\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.970597 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-host-cni-netd\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.970625 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwbfw\" (UniqueName: \"kubernetes.io/projected/0e63af1c-1b83-44b6-9872-2dfefa37d433-kube-api-access-gwbfw\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 
13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.970650 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6b8a404d-b143-4bf3-b590-c1b482f38f6f-os-release\") pod \"multus-additional-cni-plugins-zj7fc\" (UID: \"6b8a404d-b143-4bf3-b590-c1b482f38f6f\") " pod="openshift-multus/multus-additional-cni-plugins-zj7fc" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.970678 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.970704 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cd164967-b99b-47d0-a691-7d8118fa81ce-multus-cni-dir\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.970732 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.970760 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6c4091e4-3a55-4913-81f3-026a1a97c57c-mcd-auth-proxy-config\") pod \"machine-config-daemon-brq22\" (UID: 
\"6c4091e4-3a55-4913-81f3-026a1a97c57c\") " pod="openshift-machine-config-operator/machine-config-daemon-brq22" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.970789 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a273d93c-239a-444c-83cf-2c4ce34fa47b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-bts94\" (UID: \"a273d93c-239a-444c-83cf-2c4ce34fa47b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.970816 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/cd164967-b99b-47d0-a691-7d8118fa81ce-multus-daemon-config\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.970840 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3ce402ca-1bea-4568-85cd-fb4a726f3c92-hosts-file\") pod \"node-resolver-kb42x\" (UID: \"3ce402ca-1bea-4568-85cd-fb4a726f3c92\") " pod="openshift-dns/node-resolver-kb42x" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.970864 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a273d93c-239a-444c-83cf-2c4ce34fa47b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-bts94\" (UID: \"a273d93c-239a-444c-83cf-2c4ce34fa47b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.970889 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/cd164967-b99b-47d0-a691-7d8118fa81ce-cnibin\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.970914 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-host-run-netns\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.970934 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-host-cni-bin\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.970959 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0e63af1c-1b83-44b6-9872-2dfefa37d433-ovn-node-metrics-cert\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.970983 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6b8a404d-b143-4bf3-b590-c1b482f38f6f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-zj7fc\" (UID: \"6b8a404d-b143-4bf3-b590-c1b482f38f6f\") " pod="openshift-multus/multus-additional-cni-plugins-zj7fc" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.971004 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cd164967-b99b-47d0-a691-7d8118fa81ce-multus-conf-dir\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.971031 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh6dl\" (UniqueName: \"kubernetes.io/projected/6c4091e4-3a55-4913-81f3-026a1a97c57c-kube-api-access-wh6dl\") pod \"machine-config-daemon-brq22\" (UID: \"6c4091e4-3a55-4913-81f3-026a1a97c57c\") " pod="openshift-machine-config-operator/machine-config-daemon-brq22" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.971059 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.971083 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6c4091e4-3a55-4913-81f3-026a1a97c57c-rootfs\") pod \"machine-config-daemon-brq22\" (UID: \"6c4091e4-3a55-4913-81f3-026a1a97c57c\") " pod="openshift-machine-config-operator/machine-config-daemon-brq22" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.971105 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cd164967-b99b-47d0-a691-7d8118fa81ce-host-var-lib-cni-bin\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.971126 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/cd164967-b99b-47d0-a691-7d8118fa81ce-host-var-lib-cni-multus\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.971149 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-host-kubelet\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.971171 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-etc-openvswitch\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.971198 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.971225 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a273d93c-239a-444c-83cf-2c4ce34fa47b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-bts94\" (UID: \"a273d93c-239a-444c-83cf-2c4ce34fa47b\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.971248 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-host-slash\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.971270 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/cd164967-b99b-47d0-a691-7d8118fa81ce-multus-socket-dir-parent\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.971293 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-var-lib-openvswitch\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.971335 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cd164967-b99b-47d0-a691-7d8118fa81ce-system-cni-dir\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.971449 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7844e4a2-e296-46c1-b047-ace0be3d95bb-host\") pod \"node-ca-8n99q\" (UID: \"7844e4a2-e296-46c1-b047-ace0be3d95bb\") " 
pod="openshift-image-registry/node-ca-8n99q" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.971477 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.971504 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.971528 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0e63af1c-1b83-44b6-9872-2dfefa37d433-env-overrides\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.971551 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6b8a404d-b143-4bf3-b590-c1b482f38f6f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-zj7fc\" (UID: \"6b8a404d-b143-4bf3-b590-c1b482f38f6f\") " pod="openshift-multus/multus-additional-cni-plugins-zj7fc" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.971577 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/6b8a404d-b143-4bf3-b590-c1b482f38f6f-system-cni-dir\") pod \"multus-additional-cni-plugins-zj7fc\" (UID: \"6b8a404d-b143-4bf3-b590-c1b482f38f6f\") " pod="openshift-multus/multus-additional-cni-plugins-zj7fc" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.971601 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cd164967-b99b-47d0-a691-7d8118fa81ce-os-release\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.971621 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-run-systemd\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.970190 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.971203 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: E0228 13:18:02.971800 4897 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 13:18:02 crc kubenswrapper[4897]: E0228 13:18:02.971842 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 13:18:03.471828104 +0000 UTC m=+97.714148761 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 13:18:02 crc kubenswrapper[4897]: E0228 13:18:02.971895 4897 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 13:18:02 crc kubenswrapper[4897]: E0228 13:18:02.972437 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 13:18:03.472428312 +0000 UTC m=+97.714748959 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972565 4897 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972585 4897 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972597 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972606 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972617 4897 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972627 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 
13:18:02.972637 4897 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972646 4897 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972655 4897 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972664 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972674 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972683 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972693 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972703 4897 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972713 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972722 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972731 4897 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972740 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972750 4897 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972759 4897 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972769 4897 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" 
(UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972779 4897 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972789 4897 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972798 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972807 4897 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972816 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972825 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972834 4897 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972845 4897 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972855 4897 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972865 4897 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972874 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972884 4897 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972892 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972903 4897 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972912 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972921 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972929 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972937 4897 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972946 4897 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972954 4897 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972963 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath 
\"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972972 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.972997 4897 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973007 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973015 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973025 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973034 4897 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973042 4897 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 
13:18:02.973051 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973060 4897 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973068 4897 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973077 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973086 4897 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973095 4897 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973140 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973149 4897 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973158 4897 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973168 4897 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973177 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973186 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973195 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973204 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973213 4897 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" 
DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973222 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973230 4897 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973239 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973247 4897 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973257 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973265 4897 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973274 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973283 4897 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973292 4897 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973300 4897 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973321 4897 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973330 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973339 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973349 4897 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973357 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" 
(UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973366 4897 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973374 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973383 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973391 4897 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973400 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973408 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973417 4897 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973426 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973435 4897 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973444 4897 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973454 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973462 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973398 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.973472 4897 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.974031 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.974074 4897 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.974099 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.974121 4897 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.974141 4897 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.974162 4897 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 
13:18:02.974184 4897 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.974205 4897 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.974225 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.974246 4897 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.974267 4897 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.974287 4897 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.974335 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.974357 4897 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.974378 4897 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.974398 4897 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.974418 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.974438 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.974458 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.974478 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.974501 4897 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 
13:18:02.974522 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.974543 4897 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.974564 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.974585 4897 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.974608 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.974630 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.974649 4897 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.974670 4897 reconciler_common.go:293] "Volume 
detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.974691 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.974713 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.974734 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.974754 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.974773 4897 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.974793 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.974816 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.974835 4897 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.974857 4897 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.974877 4897 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.974896 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.974916 4897 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.974935 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.974956 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc 
kubenswrapper[4897]: I0228 13:18:02.974977 4897 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.974997 4897 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.975016 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.981577 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.982302 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.983232 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: E0228 13:18:02.983452 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 13:18:02 crc kubenswrapper[4897]: E0228 13:18:02.983482 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 13:18:02 crc kubenswrapper[4897]: E0228 13:18:02.983517 4897 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 13:18:02 crc kubenswrapper[4897]: E0228 13:18:02.983613 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-28 13:18:03.483583773 +0000 UTC m=+97.725904470 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.984192 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.984598 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.984972 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.984958 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k4m7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd164967-b99b-47d0-a691-7d8118fa81ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k4m7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.985878 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.986104 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.986415 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.986748 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.987178 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.987571 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.987345 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.987359 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: E0228 13:18:02.987540 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 13:18:02 crc kubenswrapper[4897]: E0228 13:18:02.987930 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 13:18:02 crc kubenswrapper[4897]: E0228 13:18:02.987952 4897 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.987600 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.987594 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.987620 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.987686 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.987710 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.988042 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: E0228 13:18:02.988061 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-28 13:18:03.487996754 +0000 UTC m=+97.730317421 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.988069 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.988127 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.988225 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.988275 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.988680 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.988713 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.989025 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.992398 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.992682 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.993061 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.993129 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.993357 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.993544 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.996029 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.996388 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.996463 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.996577 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:18:02 crc kubenswrapper[4897]: I0228 13:18:02.999349 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.000002 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b8a404d-b143-4bf3-b590-c1b482f38f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zj7fc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.003396 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.003563 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). 
InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.007469 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.011528 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.011802 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.011936 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.012240 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.012241 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.012924 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.014414 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.015380 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.015436 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.015454 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.015915 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.015933 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.016453 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.018394 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.019421 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4091e4-3a55-4913-81f3-026a1a97c57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brq22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.020359 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.022114 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.022151 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.022164 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.022181 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.022199 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:03Z","lastTransitionTime":"2026-02-28T13:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.030153 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.030802 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.033999 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.043125 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.051658 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5tms6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5tms6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.055920 4897 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.061336 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.071698 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.075766 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0e63af1c-1b83-44b6-9872-2dfefa37d433-env-overrides\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.075805 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6b8a404d-b143-4bf3-b590-c1b482f38f6f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-zj7fc\" (UID: \"6b8a404d-b143-4bf3-b590-c1b482f38f6f\") " pod="openshift-multus/multus-additional-cni-plugins-zj7fc" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.075833 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cd164967-b99b-47d0-a691-7d8118fa81ce-os-release\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.075856 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-run-systemd\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.075884 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6b8a404d-b143-4bf3-b590-c1b482f38f6f-system-cni-dir\") pod \"multus-additional-cni-plugins-zj7fc\" (UID: \"6b8a404d-b143-4bf3-b590-c1b482f38f6f\") " pod="openshift-multus/multus-additional-cni-plugins-zj7fc" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.075923 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-run-ovn\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.075945 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6b8a404d-b143-4bf3-b590-c1b482f38f6f-cni-binary-copy\") pod \"multus-additional-cni-plugins-zj7fc\" (UID: \"6b8a404d-b143-4bf3-b590-c1b482f38f6f\") " pod="openshift-multus/multus-additional-cni-plugins-zj7fc" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.075968 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2vkx\" (UniqueName: \"kubernetes.io/projected/6b8a404d-b143-4bf3-b590-c1b482f38f6f-kube-api-access-m2vkx\") pod \"multus-additional-cni-plugins-zj7fc\" (UID: \"6b8a404d-b143-4bf3-b590-c1b482f38f6f\") " pod="openshift-multus/multus-additional-cni-plugins-zj7fc" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.075990 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-systemd-units\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.076010 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-node-log\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.076031 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-log-socket\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.076053 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.076074 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs4zg\" (UniqueName: \"kubernetes.io/projected/3ce402ca-1bea-4568-85cd-fb4a726f3c92-kube-api-access-fs4zg\") pod \"node-resolver-kb42x\" (UID: \"3ce402ca-1bea-4568-85cd-fb4a726f3c92\") " pod="openshift-dns/node-resolver-kb42x" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.076098 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mvjv\" (UniqueName: \"kubernetes.io/projected/cd164967-b99b-47d0-a691-7d8118fa81ce-kube-api-access-7mvjv\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.076121 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0e63af1c-1b83-44b6-9872-2dfefa37d433-ovnkube-config\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.076143 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7844e4a2-e296-46c1-b047-ace0be3d95bb-serviceca\") pod \"node-ca-8n99q\" (UID: \"7844e4a2-e296-46c1-b047-ace0be3d95bb\") " pod="openshift-image-registry/node-ca-8n99q" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.076198 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6b8a404d-b143-4bf3-b590-c1b482f38f6f-cnibin\") pod 
\"multus-additional-cni-plugins-zj7fc\" (UID: \"6b8a404d-b143-4bf3-b590-c1b482f38f6f\") " pod="openshift-multus/multus-additional-cni-plugins-zj7fc" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.076229 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cd164967-b99b-47d0-a691-7d8118fa81ce-cni-binary-copy\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.076248 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd164967-b99b-47d0-a691-7d8118fa81ce-host-run-netns\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.076269 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/cd164967-b99b-47d0-a691-7d8118fa81ce-hostroot\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.076289 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-run-openvswitch\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.076320 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0e63af1c-1b83-44b6-9872-2dfefa37d433-env-overrides\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 
13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.076330 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0e63af1c-1b83-44b6-9872-2dfefa37d433-ovnkube-script-lib\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.076354 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6plnv\" (UniqueName: \"kubernetes.io/projected/7844e4a2-e296-46c1-b047-ace0be3d95bb-kube-api-access-6plnv\") pod \"node-ca-8n99q\" (UID: \"7844e4a2-e296-46c1-b047-ace0be3d95bb\") " pod="openshift-image-registry/node-ca-8n99q" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.076376 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.076418 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/cd164967-b99b-47d0-a691-7d8118fa81ce-host-run-k8s-cni-cncf-io\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.076440 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/cd164967-b99b-47d0-a691-7d8118fa81ce-host-var-lib-kubelet\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.076461 4897 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-host-run-ovn-kubernetes\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.076483 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6c4091e4-3a55-4913-81f3-026a1a97c57c-proxy-tls\") pod \"machine-config-daemon-brq22\" (UID: \"6c4091e4-3a55-4913-81f3-026a1a97c57c\") " pod="openshift-machine-config-operator/machine-config-daemon-brq22" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.076509 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8b95b3e0-28e1-4b26-86a3-bd61c5528b3e-metrics-certs\") pod \"network-metrics-daemon-5tms6\" (UID: \"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\") " pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.076530 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gz5x\" (UniqueName: \"kubernetes.io/projected/8b95b3e0-28e1-4b26-86a3-bd61c5528b3e-kube-api-access-2gz5x\") pod \"network-metrics-daemon-5tms6\" (UID: \"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\") " pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.076560 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljj4q\" (UniqueName: \"kubernetes.io/projected/a273d93c-239a-444c-83cf-2c4ce34fa47b-kube-api-access-ljj4q\") pod \"ovnkube-control-plane-749d76644c-bts94\" (UID: \"a273d93c-239a-444c-83cf-2c4ce34fa47b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" Feb 28 13:18:03 crc 
kubenswrapper[4897]: I0228 13:18:03.076570 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cd164967-b99b-47d0-a691-7d8118fa81ce-os-release\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.076581 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd164967-b99b-47d0-a691-7d8118fa81ce-etc-kubernetes\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.076603 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-run-systemd\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.076604 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6b8a404d-b143-4bf3-b590-c1b482f38f6f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-zj7fc\" (UID: \"6b8a404d-b143-4bf3-b590-c1b482f38f6f\") " pod="openshift-multus/multus-additional-cni-plugins-zj7fc" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.076629 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6b8a404d-b143-4bf3-b590-c1b482f38f6f-system-cni-dir\") pod \"multus-additional-cni-plugins-zj7fc\" (UID: \"6b8a404d-b143-4bf3-b590-c1b482f38f6f\") " pod="openshift-multus/multus-additional-cni-plugins-zj7fc" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.076605 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gwbfw\" (UniqueName: \"kubernetes.io/projected/0e63af1c-1b83-44b6-9872-2dfefa37d433-kube-api-access-gwbfw\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.076676 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6b8a404d-b143-4bf3-b590-c1b482f38f6f-os-release\") pod \"multus-additional-cni-plugins-zj7fc\" (UID: \"6b8a404d-b143-4bf3-b590-c1b482f38f6f\") " pod="openshift-multus/multus-additional-cni-plugins-zj7fc" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.076714 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cd164967-b99b-47d0-a691-7d8118fa81ce-multus-cni-dir\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.076740 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/cd164967-b99b-47d0-a691-7d8118fa81ce-host-run-multus-certs\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.076763 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-host-cni-netd\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.076785 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" 
(UniqueName: \"kubernetes.io/configmap/a273d93c-239a-444c-83cf-2c4ce34fa47b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-bts94\" (UID: \"a273d93c-239a-444c-83cf-2c4ce34fa47b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.076809 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/cd164967-b99b-47d0-a691-7d8118fa81ce-multus-daemon-config\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.076845 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6c4091e4-3a55-4913-81f3-026a1a97c57c-mcd-auth-proxy-config\") pod \"machine-config-daemon-brq22\" (UID: \"6c4091e4-3a55-4913-81f3-026a1a97c57c\") " pod="openshift-machine-config-operator/machine-config-daemon-brq22" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.076867 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cd164967-b99b-47d0-a691-7d8118fa81ce-cnibin\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.076888 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-host-run-netns\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.076909 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-host-cni-bin\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.076931 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3ce402ca-1bea-4568-85cd-fb4a726f3c92-hosts-file\") pod \"node-resolver-kb42x\" (UID: \"3ce402ca-1bea-4568-85cd-fb4a726f3c92\") " pod="openshift-dns/node-resolver-kb42x" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.076950 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-run-ovn\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.076953 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a273d93c-239a-444c-83cf-2c4ce34fa47b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-bts94\" (UID: \"a273d93c-239a-444c-83cf-2c4ce34fa47b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077083 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0e63af1c-1b83-44b6-9872-2dfefa37d433-ovn-node-metrics-cert\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077111 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/6b8a404d-b143-4bf3-b590-c1b482f38f6f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-zj7fc\" (UID: \"6b8a404d-b143-4bf3-b590-c1b482f38f6f\") " pod="openshift-multus/multus-additional-cni-plugins-zj7fc" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077135 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cd164967-b99b-47d0-a691-7d8118fa81ce-multus-conf-dir\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077157 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh6dl\" (UniqueName: \"kubernetes.io/projected/6c4091e4-3a55-4913-81f3-026a1a97c57c-kube-api-access-wh6dl\") pod \"machine-config-daemon-brq22\" (UID: \"6c4091e4-3a55-4913-81f3-026a1a97c57c\") " pod="openshift-machine-config-operator/machine-config-daemon-brq22" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077179 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cd164967-b99b-47d0-a691-7d8118fa81ce-host-var-lib-cni-bin\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077201 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/cd164967-b99b-47d0-a691-7d8118fa81ce-host-var-lib-cni-multus\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077223 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-host-kubelet\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077246 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077275 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6c4091e4-3a55-4913-81f3-026a1a97c57c-rootfs\") pod \"machine-config-daemon-brq22\" (UID: \"6c4091e4-3a55-4913-81f3-026a1a97c57c\") " pod="openshift-machine-config-operator/machine-config-daemon-brq22" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077297 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a273d93c-239a-444c-83cf-2c4ce34fa47b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-bts94\" (UID: \"a273d93c-239a-444c-83cf-2c4ce34fa47b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077339 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-host-slash\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077361 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-etc-openvswitch\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077398 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cd164967-b99b-47d0-a691-7d8118fa81ce-system-cni-dir\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077411 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0e63af1c-1b83-44b6-9872-2dfefa37d433-ovnkube-config\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077420 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/cd164967-b99b-47d0-a691-7d8118fa81ce-multus-socket-dir-parent\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077455 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-var-lib-openvswitch\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077476 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7844e4a2-e296-46c1-b047-ace0be3d95bb-host\") pod \"node-ca-8n99q\" 
(UID: \"7844e4a2-e296-46c1-b047-ace0be3d95bb\") " pod="openshift-image-registry/node-ca-8n99q" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077519 4897 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077534 4897 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077545 4897 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077554 4897 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077564 4897 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077581 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/cd164967-b99b-47d0-a691-7d8118fa81ce-multus-socket-dir-parent\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077575 4897 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077612 4897 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077624 4897 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077634 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077646 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077660 4897 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077670 4897 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077679 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" 
DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077688 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077697 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077706 4897 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077716 4897 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077725 4897 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077733 4897 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077743 4897 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 
13:18:03.077752 4897 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077760 4897 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077769 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077778 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077786 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077795 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077803 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077813 4897 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077824 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077833 4897 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077843 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077855 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077921 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-host-run-netns\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077949 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/cd164967-b99b-47d0-a691-7d8118fa81ce-host-run-multus-certs\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:03 crc 
kubenswrapper[4897]: I0228 13:18:03.077982 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cd164967-b99b-47d0-a691-7d8118fa81ce-multus-cni-dir\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.078017 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cd164967-b99b-47d0-a691-7d8118fa81ce-cnibin\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.077931 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6b8a404d-b143-4bf3-b590-c1b482f38f6f-os-release\") pod \"multus-additional-cni-plugins-zj7fc\" (UID: \"6b8a404d-b143-4bf3-b590-c1b482f38f6f\") " pod="openshift-multus/multus-additional-cni-plugins-zj7fc" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.078088 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-node-log\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.078114 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6c4091e4-3a55-4913-81f3-026a1a97c57c-rootfs\") pod \"machine-config-daemon-brq22\" (UID: \"6c4091e4-3a55-4913-81f3-026a1a97c57c\") " pod="openshift-machine-config-operator/machine-config-daemon-brq22" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.078145 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-log-socket\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.078197 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-systemd-units\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.078203 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6b8a404d-b143-4bf3-b590-c1b482f38f6f-cni-binary-copy\") pod \"multus-additional-cni-plugins-zj7fc\" (UID: \"6b8a404d-b143-4bf3-b590-c1b482f38f6f\") " pod="openshift-multus/multus-additional-cni-plugins-zj7fc" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.078196 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cd164967-b99b-47d0-a691-7d8118fa81ce-host-var-lib-cni-bin\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.078251 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-host-cni-netd\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.078278 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.078403 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.078463 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-host-slash\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.078474 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cd164967-b99b-47d0-a691-7d8118fa81ce-multus-conf-dir\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.078499 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd164967-b99b-47d0-a691-7d8118fa81ce-host-run-netns\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.078469 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cd164967-b99b-47d0-a691-7d8118fa81ce-system-cni-dir\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 
13:18:03.078517 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/cd164967-b99b-47d0-a691-7d8118fa81ce-host-var-lib-kubelet\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.078638 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6b8a404d-b143-4bf3-b590-c1b482f38f6f-cnibin\") pod \"multus-additional-cni-plugins-zj7fc\" (UID: \"6b8a404d-b143-4bf3-b590-c1b482f38f6f\") " pod="openshift-multus/multus-additional-cni-plugins-zj7fc" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.078679 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a273d93c-239a-444c-83cf-2c4ce34fa47b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-bts94\" (UID: \"a273d93c-239a-444c-83cf-2c4ce34fa47b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.078698 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-run-openvswitch\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.078724 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.078209 4897 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/cd164967-b99b-47d0-a691-7d8118fa81ce-host-var-lib-cni-multus\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.078756 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/cd164967-b99b-47d0-a691-7d8118fa81ce-hostroot\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.078787 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a273d93c-239a-444c-83cf-2c4ce34fa47b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-bts94\" (UID: \"a273d93c-239a-444c-83cf-2c4ce34fa47b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.078809 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd164967-b99b-47d0-a691-7d8118fa81ce-etc-kubernetes\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.078825 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7844e4a2-e296-46c1-b047-ace0be3d95bb-host\") pod \"node-ca-8n99q\" (UID: \"7844e4a2-e296-46c1-b047-ace0be3d95bb\") " pod="openshift-image-registry/node-ca-8n99q" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.078865 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-host-run-ovn-kubernetes\") pod 
\"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.078881 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/cd164967-b99b-47d0-a691-7d8118fa81ce-host-run-k8s-cni-cncf-io\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.078891 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-host-cni-bin\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.078909 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.078933 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6b8a404d-b143-4bf3-b590-c1b482f38f6f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-zj7fc\" (UID: \"6b8a404d-b143-4bf3-b590-c1b482f38f6f\") " pod="openshift-multus/multus-additional-cni-plugins-zj7fc" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.078979 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-host-kubelet\") pod \"ovnkube-node-rjlcm\" (UID: 
\"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.079000 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-var-lib-openvswitch\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.079036 4897 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.079076 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b95b3e0-28e1-4b26-86a3-bd61c5528b3e-metrics-certs podName:8b95b3e0-28e1-4b26-86a3-bd61c5528b3e nodeName:}" failed. No retries permitted until 2026-02-28 13:18:03.579064119 +0000 UTC m=+97.821384776 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8b95b3e0-28e1-4b26-86a3-bd61c5528b3e-metrics-certs") pod "network-metrics-daemon-5tms6" (UID: "8b95b3e0-28e1-4b26-86a3-bd61c5528b3e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.079088 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-etc-openvswitch\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.079113 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3ce402ca-1bea-4568-85cd-fb4a726f3c92-hosts-file\") pod \"node-resolver-kb42x\" (UID: \"3ce402ca-1bea-4568-85cd-fb4a726f3c92\") " pod="openshift-dns/node-resolver-kb42x" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.079173 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0e63af1c-1b83-44b6-9872-2dfefa37d433-ovnkube-script-lib\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.079251 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.079432 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/cd164967-b99b-47d0-a691-7d8118fa81ce-multus-daemon-config\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.079520 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cd164967-b99b-47d0-a691-7d8118fa81ce-cni-binary-copy\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.080068 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7844e4a2-e296-46c1-b047-ace0be3d95bb-serviceca\") pod \"node-ca-8n99q\" (UID: \"7844e4a2-e296-46c1-b047-ace0be3d95bb\") " pod="openshift-image-registry/node-ca-8n99q" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.080140 4897 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6c4091e4-3a55-4913-81f3-026a1a97c57c-mcd-auth-proxy-config\") pod \"machine-config-daemon-brq22\" (UID: \"6c4091e4-3a55-4913-81f3-026a1a97c57c\") " pod="openshift-machine-config-operator/machine-config-daemon-brq22" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.080284 4897 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.080341 4897 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.080438 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.080571 4897 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.080647 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.080665 4897 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.080679 4897 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.080693 4897 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.080729 4897 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.080744 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.080757 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.080769 4897 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.080782 4897 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.080815 4897 reconciler_common.go:293] "Volume detached 
for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.080828 4897 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.080840 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.080853 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.080887 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.080901 4897 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.080914 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.080926 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: 
\"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.080940 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.080973 4897 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.088523 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6c4091e4-3a55-4913-81f3-026a1a97c57c-proxy-tls\") pod \"machine-config-daemon-brq22\" (UID: \"6c4091e4-3a55-4913-81f3-026a1a97c57c\") " pod="openshift-machine-config-operator/machine-config-daemon-brq22" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.088922 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a273d93c-239a-444c-83cf-2c4ce34fa47b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-bts94\" (UID: \"a273d93c-239a-444c-83cf-2c4ce34fa47b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.095012 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0e63af1c-1b83-44b6-9872-2dfefa37d433-ovn-node-metrics-cert\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.097410 4897 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs4zg\" (UniqueName: \"kubernetes.io/projected/3ce402ca-1bea-4568-85cd-fb4a726f3c92-kube-api-access-fs4zg\") pod \"node-resolver-kb42x\" (UID: \"3ce402ca-1bea-4568-85cd-fb4a726f3c92\") " pod="openshift-dns/node-resolver-kb42x" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.097494 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.097730 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh6dl\" (UniqueName: \"kubernetes.io/projected/6c4091e4-3a55-4913-81f3-026a1a97c57c-kube-api-access-wh6dl\") pod \"machine-config-daemon-brq22\" (UID: \"6c4091e4-3a55-4913-81f3-026a1a97c57c\") " pod="openshift-machine-config-operator/machine-config-daemon-brq22" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.098380 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwbfw\" (UniqueName: \"kubernetes.io/projected/0e63af1c-1b83-44b6-9872-2dfefa37d433-kube-api-access-gwbfw\") pod \"ovnkube-node-rjlcm\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.099273 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gz5x\" (UniqueName: 
\"kubernetes.io/projected/8b95b3e0-28e1-4b26-86a3-bd61c5528b3e-kube-api-access-2gz5x\") pod \"network-metrics-daemon-5tms6\" (UID: \"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\") " pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.099485 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6plnv\" (UniqueName: \"kubernetes.io/projected/7844e4a2-e296-46c1-b047-ace0be3d95bb-kube-api-access-6plnv\") pod \"node-ca-8n99q\" (UID: \"7844e4a2-e296-46c1-b047-ace0be3d95bb\") " pod="openshift-image-registry/node-ca-8n99q" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.102229 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mvjv\" (UniqueName: \"kubernetes.io/projected/cd164967-b99b-47d0-a691-7d8118fa81ce-kube-api-access-7mvjv\") pod \"multus-k4m7f\" (UID: \"cd164967-b99b-47d0-a691-7d8118fa81ce\") " pod="openshift-multus/multus-k4m7f" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.102383 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2vkx\" (UniqueName: \"kubernetes.io/projected/6b8a404d-b143-4bf3-b590-c1b482f38f6f-kube-api-access-m2vkx\") pod \"multus-additional-cni-plugins-zj7fc\" (UID: \"6b8a404d-b143-4bf3-b590-c1b482f38f6f\") " pod="openshift-multus/multus-additional-cni-plugins-zj7fc" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.102653 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljj4q\" (UniqueName: \"kubernetes.io/projected/a273d93c-239a-444c-83cf-2c4ce34fa47b-kube-api-access-ljj4q\") pod \"ovnkube-control-plane-749d76644c-bts94\" (UID: \"a273d93c-239a-444c-83cf-2c4ce34fa47b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.112830 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e63af1c-1b83-44b6-9872-2dfefa37d433\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rjlcm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.124400 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.124512 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.124523 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.124538 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.124548 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:03Z","lastTransitionTime":"2026-02-28T13:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.216200 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.226994 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.227032 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.227043 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.227095 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.227109 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:03Z","lastTransitionTime":"2026-02-28T13:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.227554 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.230495 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 13:18:03 crc kubenswrapper[4897]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 28 13:18:03 crc kubenswrapper[4897]: set -o allexport Feb 28 13:18:03 crc kubenswrapper[4897]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 28 13:18:03 crc kubenswrapper[4897]: source /etc/kubernetes/apiserver-url.env Feb 28 13:18:03 crc kubenswrapper[4897]: else Feb 28 13:18:03 crc kubenswrapper[4897]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 28 13:18:03 crc kubenswrapper[4897]: exit 1 Feb 28 13:18:03 crc kubenswrapper[4897]: fi Feb 28 13:18:03 crc kubenswrapper[4897]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 28 13:18:03 crc kubenswrapper[4897]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 28 13:18:03 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.231678 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 28 13:18:03 crc kubenswrapper[4897]: W0228 13:18:03.242279 4897 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-07ef1fc2af2356e89dd96f883efbe51aba4961240ae4f9f0074e119c134e9875 WatchSource:0}: Error finding container 07ef1fc2af2356e89dd96f883efbe51aba4961240ae4f9f0074e119c134e9875: Status 404 returned error can't find the container with id 07ef1fc2af2356e89dd96f883efbe51aba4961240ae4f9f0074e119c134e9875 Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.247543 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 13:18:03 crc kubenswrapper[4897]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 28 13:18:03 crc kubenswrapper[4897]: if [[ -f "/env/_master" ]]; then Feb 28 13:18:03 crc kubenswrapper[4897]: set -o allexport Feb 28 13:18:03 crc kubenswrapper[4897]: source "/env/_master" Feb 28 13:18:03 crc kubenswrapper[4897]: set +o allexport Feb 28 13:18:03 crc kubenswrapper[4897]: fi Feb 28 13:18:03 crc kubenswrapper[4897]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Feb 28 13:18:03 crc kubenswrapper[4897]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 28 13:18:03 crc kubenswrapper[4897]: ho_enable="--enable-hybrid-overlay" Feb 28 13:18:03 crc kubenswrapper[4897]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 28 13:18:03 crc kubenswrapper[4897]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 28 13:18:03 crc kubenswrapper[4897]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 28 13:18:03 crc kubenswrapper[4897]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 28 13:18:03 crc kubenswrapper[4897]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 28 13:18:03 crc kubenswrapper[4897]: --webhook-host=127.0.0.1 \ Feb 28 13:18:03 crc kubenswrapper[4897]: --webhook-port=9743 \ Feb 28 13:18:03 crc kubenswrapper[4897]: ${ho_enable} \ Feb 28 13:18:03 crc kubenswrapper[4897]: --enable-interconnect \ Feb 28 13:18:03 crc kubenswrapper[4897]: --disable-approver \ Feb 28 13:18:03 crc kubenswrapper[4897]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 28 13:18:03 crc kubenswrapper[4897]: --wait-for-kubernetes-api=200s \ Feb 28 13:18:03 crc kubenswrapper[4897]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 28 13:18:03 crc kubenswrapper[4897]: --loglevel="${LOGLEVEL}" Feb 28 13:18:03 crc kubenswrapper[4897]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 28 13:18:03 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.247712 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.256923 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 13:18:03 crc kubenswrapper[4897]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 28 13:18:03 crc kubenswrapper[4897]: if [[ -f "/env/_master" ]]; then Feb 28 13:18:03 crc kubenswrapper[4897]: set -o allexport Feb 28 13:18:03 crc kubenswrapper[4897]: source "/env/_master" Feb 28 13:18:03 crc kubenswrapper[4897]: set +o allexport Feb 28 13:18:03 crc kubenswrapper[4897]: fi Feb 28 13:18:03 crc kubenswrapper[4897]: Feb 28 13:18:03 crc kubenswrapper[4897]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 28 13:18:03 crc kubenswrapper[4897]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 28 13:18:03 crc kubenswrapper[4897]: --disable-webhook \ Feb 28 13:18:03 crc kubenswrapper[4897]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 28 13:18:03 crc kubenswrapper[4897]: --loglevel="${LOGLEVEL}" Feb 28 13:18:03 crc kubenswrapper[4897]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 28 13:18:03 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.258508 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 28 13:18:03 crc kubenswrapper[4897]: W0228 13:18:03.264678 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-9b6a866dfa8e0e69f8e919748050569e4cc9bf1ace9416ac9962448cd0edc4b0 WatchSource:0}: Error finding container 9b6a866dfa8e0e69f8e919748050569e4cc9bf1ace9416ac9962448cd0edc4b0: Status 404 returned error can't find the container with id 9b6a866dfa8e0e69f8e919748050569e4cc9bf1ace9416ac9962448cd0edc4b0 Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.267777 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.268787 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-k4m7f" Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.269207 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 28 13:18:03 crc kubenswrapper[4897]: W0228 13:18:03.281857 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd164967_b99b_47d0_a691_7d8118fa81ce.slice/crio-ddcc8eae031feea974092d8a9cfe7c37896ffa5f3b06ec1b96c9742defc851e7 WatchSource:0}: Error finding container ddcc8eae031feea974092d8a9cfe7c37896ffa5f3b06ec1b96c9742defc851e7: Status 404 returned error can't find the container with id ddcc8eae031feea974092d8a9cfe7c37896ffa5f3b06ec1b96c9742defc851e7 Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.284627 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 13:18:03 crc kubenswrapper[4897]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Feb 28 13:18:03 crc kubenswrapper[4897]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Feb 28 13:18:03 crc kubenswrapper[4897]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7mvjv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fal
se,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-k4m7f_openshift-multus(cd164967-b99b-47d0-a691-7d8118fa81ce): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 28 13:18:03 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.285814 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-k4m7f" podUID="cd164967-b99b-47d0-a691-7d8118fa81ce" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.318530 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-8n99q" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.329826 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.329887 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.329904 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.329928 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.329948 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:03Z","lastTransitionTime":"2026-02-28T13:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.334109 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 13:18:03 crc kubenswrapper[4897]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Feb 28 13:18:03 crc kubenswrapper[4897]: while [ true ]; Feb 28 13:18:03 crc kubenswrapper[4897]: do Feb 28 13:18:03 crc kubenswrapper[4897]: for f in $(ls /tmp/serviceca); do Feb 28 13:18:03 crc kubenswrapper[4897]: echo $f Feb 28 13:18:03 crc kubenswrapper[4897]: ca_file_path="/tmp/serviceca/${f}" Feb 28 13:18:03 crc kubenswrapper[4897]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Feb 28 13:18:03 crc kubenswrapper[4897]: reg_dir_path="/etc/docker/certs.d/${f}" Feb 28 13:18:03 crc kubenswrapper[4897]: if [ -e "${reg_dir_path}" ]; then Feb 28 13:18:03 crc kubenswrapper[4897]: cp -u $ca_file_path $reg_dir_path/ca.crt Feb 28 13:18:03 crc kubenswrapper[4897]: else Feb 28 13:18:03 crc kubenswrapper[4897]: mkdir $reg_dir_path Feb 28 13:18:03 crc kubenswrapper[4897]: cp $ca_file_path $reg_dir_path/ca.crt Feb 28 13:18:03 crc kubenswrapper[4897]: fi Feb 28 13:18:03 crc kubenswrapper[4897]: done Feb 28 13:18:03 crc kubenswrapper[4897]: for d in $(ls /etc/docker/certs.d); do Feb 28 13:18:03 crc kubenswrapper[4897]: echo $d Feb 28 13:18:03 crc kubenswrapper[4897]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Feb 28 13:18:03 crc kubenswrapper[4897]: reg_conf_path="/tmp/serviceca/${dp}" Feb 28 13:18:03 crc kubenswrapper[4897]: if [ ! 
-e "${reg_conf_path}" ]; then Feb 28 13:18:03 crc kubenswrapper[4897]: rm -rf /etc/docker/certs.d/$d Feb 28 13:18:03 crc kubenswrapper[4897]: fi Feb 28 13:18:03 crc kubenswrapper[4897]: done Feb 28 13:18:03 crc kubenswrapper[4897]: sleep 60 & wait ${!} Feb 28 13:18:03 crc kubenswrapper[4897]: done Feb 28 13:18:03 crc kubenswrapper[4897]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6plnv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-8n99q_openshift-image-registry(7844e4a2-e296-46c1-b047-ace0be3d95bb): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 28 13:18:03 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.334713 4897 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.335271 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-8n99q" podUID="7844e4a2-e296-46c1-b047-ace0be3d95bb" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.344248 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-kb42x" Feb 28 13:18:03 crc kubenswrapper[4897]: W0228 13:18:03.350130 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e63af1c_1b83_44b6_9872_2dfefa37d433.slice/crio-b069924cc749e31828179ef715e6bebc810118832df5c22416be266834d1b77c WatchSource:0}: Error finding container b069924cc749e31828179ef715e6bebc810118832df5c22416be266834d1b77c: Status 404 returned error can't find the container with id b069924cc749e31828179ef715e6bebc810118832df5c22416be266834d1b77c Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.352157 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 13:18:03 crc kubenswrapper[4897]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Feb 28 13:18:03 crc kubenswrapper[4897]: apiVersion: v1 Feb 28 13:18:03 crc kubenswrapper[4897]: clusters: Feb 28 13:18:03 crc kubenswrapper[4897]: - cluster: Feb 28 13:18:03 crc kubenswrapper[4897]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Feb 28 13:18:03 crc kubenswrapper[4897]: server: https://api-int.crc.testing:6443 Feb 28 13:18:03 crc kubenswrapper[4897]: 
name: default-cluster Feb 28 13:18:03 crc kubenswrapper[4897]: contexts: Feb 28 13:18:03 crc kubenswrapper[4897]: - context: Feb 28 13:18:03 crc kubenswrapper[4897]: cluster: default-cluster Feb 28 13:18:03 crc kubenswrapper[4897]: namespace: default Feb 28 13:18:03 crc kubenswrapper[4897]: user: default-auth Feb 28 13:18:03 crc kubenswrapper[4897]: name: default-context Feb 28 13:18:03 crc kubenswrapper[4897]: current-context: default-context Feb 28 13:18:03 crc kubenswrapper[4897]: kind: Config Feb 28 13:18:03 crc kubenswrapper[4897]: preferences: {} Feb 28 13:18:03 crc kubenswrapper[4897]: users: Feb 28 13:18:03 crc kubenswrapper[4897]: - name: default-auth Feb 28 13:18:03 crc kubenswrapper[4897]: user: Feb 28 13:18:03 crc kubenswrapper[4897]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 28 13:18:03 crc kubenswrapper[4897]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 28 13:18:03 crc kubenswrapper[4897]: EOF Feb 28 13:18:03 crc kubenswrapper[4897]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gwbfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-rjlcm_openshift-ovn-kubernetes(0e63af1c-1b83-44b6-9872-2dfefa37d433): 
CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 28 13:18:03 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.353657 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.353718 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" Feb 28 13:18:03 crc kubenswrapper[4897]: W0228 13:18:03.361633 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ce402ca_1bea_4568_85cd_fb4a726f3c92.slice/crio-f0d301d8ff5c21df970b5c73735a95f7432ba928685df57b9eb6bb6b598e2acb WatchSource:0}: Error finding container f0d301d8ff5c21df970b5c73735a95f7432ba928685df57b9eb6bb6b598e2acb: Status 404 returned error can't find the container with id f0d301d8ff5c21df970b5c73735a95f7432ba928685df57b9eb6bb6b598e2acb Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.363005 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" Feb 28 13:18:03 crc kubenswrapper[4897]: W0228 13:18:03.365033 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b8a404d_b143_4bf3_b590_c1b482f38f6f.slice/crio-2887d8cac7dd08900a17a36dc71a3c100a1cd4c2bd320fd85c994a0559e9b0ee WatchSource:0}: Error finding container 2887d8cac7dd08900a17a36dc71a3c100a1cd4c2bd320fd85c994a0559e9b0ee: Status 404 returned error can't find the container with id 2887d8cac7dd08900a17a36dc71a3c100a1cd4c2bd320fd85c994a0559e9b0ee Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.366974 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 13:18:03 crc kubenswrapper[4897]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Feb 28 13:18:03 crc kubenswrapper[4897]: set -uo pipefail Feb 28 13:18:03 crc kubenswrapper[4897]: Feb 28 13:18:03 crc kubenswrapper[4897]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Feb 28 13:18:03 crc kubenswrapper[4897]: Feb 28 13:18:03 crc kubenswrapper[4897]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Feb 28 13:18:03 crc kubenswrapper[4897]: HOSTS_FILE="/etc/hosts" Feb 28 13:18:03 crc kubenswrapper[4897]: TEMP_FILE="/etc/hosts.tmp" Feb 28 13:18:03 crc kubenswrapper[4897]: Feb 28 13:18:03 crc kubenswrapper[4897]: IFS=', ' read -r -a services <<< "${SERVICES}" Feb 28 13:18:03 crc kubenswrapper[4897]: Feb 28 13:18:03 crc kubenswrapper[4897]: # Make a temporary file with the old hosts file's attributes. Feb 28 13:18:03 crc kubenswrapper[4897]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Feb 28 13:18:03 crc kubenswrapper[4897]: echo "Failed to preserve hosts file. Exiting." 
Feb 28 13:18:03 crc kubenswrapper[4897]: exit 1 Feb 28 13:18:03 crc kubenswrapper[4897]: fi Feb 28 13:18:03 crc kubenswrapper[4897]: Feb 28 13:18:03 crc kubenswrapper[4897]: while true; do Feb 28 13:18:03 crc kubenswrapper[4897]: declare -A svc_ips Feb 28 13:18:03 crc kubenswrapper[4897]: for svc in "${services[@]}"; do Feb 28 13:18:03 crc kubenswrapper[4897]: # Fetch service IP from cluster dns if present. We make several tries Feb 28 13:18:03 crc kubenswrapper[4897]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Feb 28 13:18:03 crc kubenswrapper[4897]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Feb 28 13:18:03 crc kubenswrapper[4897]: # support UDP loadbalancers and require reaching DNS through TCP. Feb 28 13:18:03 crc kubenswrapper[4897]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 28 13:18:03 crc kubenswrapper[4897]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 28 13:18:03 crc kubenswrapper[4897]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 28 13:18:03 crc kubenswrapper[4897]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Feb 28 13:18:03 crc kubenswrapper[4897]: for i in ${!cmds[*]} Feb 28 13:18:03 crc kubenswrapper[4897]: do Feb 28 13:18:03 crc kubenswrapper[4897]: ips=($(eval "${cmds[i]}")) Feb 28 13:18:03 crc kubenswrapper[4897]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Feb 28 13:18:03 crc kubenswrapper[4897]: svc_ips["${svc}"]="${ips[@]}" Feb 28 13:18:03 crc kubenswrapper[4897]: break Feb 28 13:18:03 crc kubenswrapper[4897]: fi Feb 28 13:18:03 crc kubenswrapper[4897]: done Feb 28 13:18:03 crc kubenswrapper[4897]: done Feb 28 13:18:03 crc kubenswrapper[4897]: Feb 28 13:18:03 crc kubenswrapper[4897]: # Update /etc/hosts only if we get valid service IPs Feb 28 13:18:03 crc kubenswrapper[4897]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Feb 28 13:18:03 crc kubenswrapper[4897]: # Stale entries could exist in /etc/hosts if the service is deleted Feb 28 13:18:03 crc kubenswrapper[4897]: if [[ -n "${svc_ips[*]-}" ]]; then Feb 28 13:18:03 crc kubenswrapper[4897]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Feb 28 13:18:03 crc kubenswrapper[4897]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Feb 28 13:18:03 crc kubenswrapper[4897]: # Only continue rebuilding the hosts entries if its original content is preserved Feb 28 13:18:03 crc kubenswrapper[4897]: sleep 60 & wait Feb 28 13:18:03 crc kubenswrapper[4897]: continue Feb 28 13:18:03 crc kubenswrapper[4897]: fi Feb 28 13:18:03 crc kubenswrapper[4897]: Feb 28 13:18:03 crc kubenswrapper[4897]: # Append resolver entries for services Feb 28 13:18:03 crc kubenswrapper[4897]: rc=0 Feb 28 13:18:03 crc kubenswrapper[4897]: for svc in "${!svc_ips[@]}"; do Feb 28 13:18:03 crc kubenswrapper[4897]: for ip in ${svc_ips[${svc}]}; do Feb 28 13:18:03 crc kubenswrapper[4897]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Feb 28 13:18:03 crc kubenswrapper[4897]: done Feb 28 13:18:03 crc kubenswrapper[4897]: done Feb 28 13:18:03 crc kubenswrapper[4897]: if [[ $rc -ne 0 ]]; then Feb 28 13:18:03 crc kubenswrapper[4897]: sleep 60 & wait Feb 28 13:18:03 crc kubenswrapper[4897]: continue Feb 28 13:18:03 crc kubenswrapper[4897]: fi Feb 28 13:18:03 crc kubenswrapper[4897]: Feb 28 13:18:03 crc kubenswrapper[4897]: Feb 28 13:18:03 crc kubenswrapper[4897]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Feb 28 13:18:03 crc kubenswrapper[4897]: # Replace /etc/hosts with our modified version if needed Feb 28 13:18:03 crc kubenswrapper[4897]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Feb 28 13:18:03 crc kubenswrapper[4897]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Feb 28 13:18:03 crc kubenswrapper[4897]: fi Feb 28 13:18:03 crc kubenswrapper[4897]: sleep 60 & wait Feb 28 13:18:03 crc kubenswrapper[4897]: unset svc_ips Feb 28 13:18:03 crc kubenswrapper[4897]: done Feb 28 13:18:03 crc kubenswrapper[4897]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fs4zg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-kb42x_openshift-dns(3ce402ca-1bea-4568-85cd-fb4a726f3c92): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 28 13:18:03 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.368155 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-kb42x" podUID="3ce402ca-1bea-4568-85cd-fb4a726f3c92" Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.368416 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m2vkx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-zj7fc_openshift-multus(6b8a404d-b143-4bf3-b590-c1b482f38f6f): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.369665 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" podUID="6b8a404d-b143-4bf3-b590-c1b482f38f6f" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.369936 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-brq22" Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.375898 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 13:18:03 crc kubenswrapper[4897]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Feb 28 13:18:03 crc kubenswrapper[4897]: set -euo pipefail Feb 28 13:18:03 crc kubenswrapper[4897]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Feb 28 13:18:03 crc kubenswrapper[4897]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Feb 28 13:18:03 crc kubenswrapper[4897]: # As the secret mount is optional we must wait for the files to be present. Feb 28 13:18:03 crc kubenswrapper[4897]: # The service is created in monitor.yaml and this is created in sdn.yaml. Feb 28 13:18:03 crc kubenswrapper[4897]: TS=$(date +%s) Feb 28 13:18:03 crc kubenswrapper[4897]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Feb 28 13:18:03 crc kubenswrapper[4897]: HAS_LOGGED_INFO=0 Feb 28 13:18:03 crc kubenswrapper[4897]: Feb 28 13:18:03 crc kubenswrapper[4897]: log_missing_certs(){ Feb 28 13:18:03 crc kubenswrapper[4897]: CUR_TS=$(date +%s) Feb 28 13:18:03 crc kubenswrapper[4897]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Feb 28 13:18:03 crc kubenswrapper[4897]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. 
Feb 28 13:18:03 crc kubenswrapper[4897]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Feb 28 13:18:03 crc kubenswrapper[4897]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Feb 28 13:18:03 crc kubenswrapper[4897]: HAS_LOGGED_INFO=1 Feb 28 13:18:03 crc kubenswrapper[4897]: fi Feb 28 13:18:03 crc kubenswrapper[4897]: } Feb 28 13:18:03 crc kubenswrapper[4897]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Feb 28 13:18:03 crc kubenswrapper[4897]: log_missing_certs Feb 28 13:18:03 crc kubenswrapper[4897]: sleep 5 Feb 28 13:18:03 crc kubenswrapper[4897]: done Feb 28 13:18:03 crc kubenswrapper[4897]: Feb 28 13:18:03 crc kubenswrapper[4897]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Feb 28 13:18:03 crc kubenswrapper[4897]: exec /usr/bin/kube-rbac-proxy \ Feb 28 13:18:03 crc kubenswrapper[4897]: --logtostderr \ Feb 28 13:18:03 crc kubenswrapper[4897]: --secure-listen-address=:9108 \ Feb 28 13:18:03 crc kubenswrapper[4897]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Feb 28 13:18:03 crc kubenswrapper[4897]: --upstream=http://127.0.0.1:29108/ \ Feb 28 13:18:03 crc kubenswrapper[4897]: --tls-private-key-file=${TLS_PK} \ Feb 28 13:18:03 crc kubenswrapper[4897]: --tls-cert-file=${TLS_CERT} Feb 28 13:18:03 crc kubenswrapper[4897]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ljj4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-bts94_openshift-ovn-kubernetes(a273d93c-239a-444c-83cf-2c4ce34fa47b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 28 13:18:03 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.378820 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 13:18:03 crc kubenswrapper[4897]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 28 13:18:03 crc kubenswrapper[4897]: if [[ -f "/env/_master" ]]; then Feb 28 13:18:03 crc kubenswrapper[4897]: set -o allexport Feb 28 13:18:03 crc kubenswrapper[4897]: source "/env/_master" Feb 28 13:18:03 crc kubenswrapper[4897]: set +o allexport Feb 28 13:18:03 crc kubenswrapper[4897]: fi Feb 28 13:18:03 crc kubenswrapper[4897]: Feb 28 13:18:03 crc kubenswrapper[4897]: ovn_v4_join_subnet_opt= Feb 28 13:18:03 crc kubenswrapper[4897]: if [[ "" != "" ]]; then Feb 28 13:18:03 crc kubenswrapper[4897]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Feb 28 
13:18:03 crc kubenswrapper[4897]: fi Feb 28 13:18:03 crc kubenswrapper[4897]: ovn_v6_join_subnet_opt= Feb 28 13:18:03 crc kubenswrapper[4897]: if [[ "" != "" ]]; then Feb 28 13:18:03 crc kubenswrapper[4897]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Feb 28 13:18:03 crc kubenswrapper[4897]: fi Feb 28 13:18:03 crc kubenswrapper[4897]: Feb 28 13:18:03 crc kubenswrapper[4897]: ovn_v4_transit_switch_subnet_opt= Feb 28 13:18:03 crc kubenswrapper[4897]: if [[ "" != "" ]]; then Feb 28 13:18:03 crc kubenswrapper[4897]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Feb 28 13:18:03 crc kubenswrapper[4897]: fi Feb 28 13:18:03 crc kubenswrapper[4897]: ovn_v6_transit_switch_subnet_opt= Feb 28 13:18:03 crc kubenswrapper[4897]: if [[ "" != "" ]]; then Feb 28 13:18:03 crc kubenswrapper[4897]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Feb 28 13:18:03 crc kubenswrapper[4897]: fi Feb 28 13:18:03 crc kubenswrapper[4897]: Feb 28 13:18:03 crc kubenswrapper[4897]: dns_name_resolver_enabled_flag= Feb 28 13:18:03 crc kubenswrapper[4897]: if [[ "false" == "true" ]]; then Feb 28 13:18:03 crc kubenswrapper[4897]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Feb 28 13:18:03 crc kubenswrapper[4897]: fi Feb 28 13:18:03 crc kubenswrapper[4897]: Feb 28 13:18:03 crc kubenswrapper[4897]: persistent_ips_enabled_flag= Feb 28 13:18:03 crc kubenswrapper[4897]: if [[ "true" == "true" ]]; then Feb 28 13:18:03 crc kubenswrapper[4897]: persistent_ips_enabled_flag="--enable-persistent-ips" Feb 28 13:18:03 crc kubenswrapper[4897]: fi Feb 28 13:18:03 crc kubenswrapper[4897]: Feb 28 13:18:03 crc kubenswrapper[4897]: # This is needed so that converting clusters from GA to TP Feb 28 13:18:03 crc kubenswrapper[4897]: # will rollout control plane pods as well Feb 28 13:18:03 crc kubenswrapper[4897]: network_segmentation_enabled_flag= Feb 28 13:18:03 crc kubenswrapper[4897]: multi_network_enabled_flag= Feb 28 13:18:03 crc 
kubenswrapper[4897]: if [[ "true" == "true" ]]; then Feb 28 13:18:03 crc kubenswrapper[4897]: multi_network_enabled_flag="--enable-multi-network" Feb 28 13:18:03 crc kubenswrapper[4897]: network_segmentation_enabled_flag="--enable-network-segmentation" Feb 28 13:18:03 crc kubenswrapper[4897]: fi Feb 28 13:18:03 crc kubenswrapper[4897]: Feb 28 13:18:03 crc kubenswrapper[4897]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Feb 28 13:18:03 crc kubenswrapper[4897]: exec /usr/bin/ovnkube \ Feb 28 13:18:03 crc kubenswrapper[4897]: --enable-interconnect \ Feb 28 13:18:03 crc kubenswrapper[4897]: --init-cluster-manager "${K8S_NODE}" \ Feb 28 13:18:03 crc kubenswrapper[4897]: --config-file=/run/ovnkube-config/ovnkube.conf \ Feb 28 13:18:03 crc kubenswrapper[4897]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Feb 28 13:18:03 crc kubenswrapper[4897]: --metrics-bind-address "127.0.0.1:29108" \ Feb 28 13:18:03 crc kubenswrapper[4897]: --metrics-enable-pprof \ Feb 28 13:18:03 crc kubenswrapper[4897]: --metrics-enable-config-duration \ Feb 28 13:18:03 crc kubenswrapper[4897]: ${ovn_v4_join_subnet_opt} \ Feb 28 13:18:03 crc kubenswrapper[4897]: ${ovn_v6_join_subnet_opt} \ Feb 28 13:18:03 crc kubenswrapper[4897]: ${ovn_v4_transit_switch_subnet_opt} \ Feb 28 13:18:03 crc kubenswrapper[4897]: ${ovn_v6_transit_switch_subnet_opt} \ Feb 28 13:18:03 crc kubenswrapper[4897]: ${dns_name_resolver_enabled_flag} \ Feb 28 13:18:03 crc kubenswrapper[4897]: ${persistent_ips_enabled_flag} \ Feb 28 13:18:03 crc kubenswrapper[4897]: ${multi_network_enabled_flag} \ Feb 28 13:18:03 crc kubenswrapper[4897]: ${network_segmentation_enabled_flag} Feb 28 13:18:03 crc kubenswrapper[4897]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ljj4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-bts94_openshift-ovn-kubernetes(a273d93c-239a-444c-83cf-2c4ce34fa47b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 28 13:18:03 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.380244 4897 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" podUID="a273d93c-239a-444c-83cf-2c4ce34fa47b" Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.381292 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wh6dl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.383414 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt 
--tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wh6dl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.384605 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-brq22" 
podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.433454 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.433486 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.433494 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.433507 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.433515 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:03Z","lastTransitionTime":"2026-02-28T13:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.485198 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.485291 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.485336 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.485360 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.485445 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:18:04.485413536 +0000 UTC m=+98.727734263 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.485464 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.485479 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.485488 4897 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.485537 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-28 13:18:04.485524779 +0000 UTC m=+98.727845436 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.485533 4897 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.485645 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 13:18:04.485618042 +0000 UTC m=+98.727938739 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.485539 4897 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.485729 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-28 13:18:04.485712975 +0000 UTC m=+98.728033732 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.536287 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.536341 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.536354 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.536372 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.536385 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:03Z","lastTransitionTime":"2026-02-28T13:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.585811 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.585857 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8b95b3e0-28e1-4b26-86a3-bd61c5528b3e-metrics-certs\") pod \"network-metrics-daemon-5tms6\" (UID: \"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\") " pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.586016 4897 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.586078 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b95b3e0-28e1-4b26-86a3-bd61c5528b3e-metrics-certs podName:8b95b3e0-28e1-4b26-86a3-bd61c5528b3e nodeName:}" failed. No retries permitted until 2026-02-28 13:18:04.586061785 +0000 UTC m=+98.828382462 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8b95b3e0-28e1-4b26-86a3-bd61c5528b3e-metrics-certs") pod "network-metrics-daemon-5tms6" (UID: "8b95b3e0-28e1-4b26-86a3-bd61c5528b3e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.586076 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.586120 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.586139 4897 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.586213 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-28 13:18:04.586189168 +0000 UTC m=+98.828509855 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.639438 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.639490 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.639507 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.639530 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.639548 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:03Z","lastTransitionTime":"2026-02-28T13:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.742667 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.743062 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.743230 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.743407 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.743544 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:03Z","lastTransitionTime":"2026-02-28T13:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.846182 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.846227 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.846240 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.846258 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.846271 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:03Z","lastTransitionTime":"2026-02-28T13:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.903021 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-k4m7f" event={"ID":"cd164967-b99b-47d0-a691-7d8118fa81ce","Type":"ContainerStarted","Data":"ddcc8eae031feea974092d8a9cfe7c37896ffa5f3b06ec1b96c9742defc851e7"} Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.904879 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"9b6a866dfa8e0e69f8e919748050569e4cc9bf1ace9416ac9962448cd0edc4b0"} Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.905349 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 13:18:03 crc kubenswrapper[4897]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Feb 28 13:18:03 crc kubenswrapper[4897]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Feb 28 13:18:03 crc kubenswrapper[4897]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7mvjv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fal
se,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-k4m7f_openshift-multus(cd164967-b99b-47d0-a691-7d8118fa81ce): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 28 13:18:03 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.906419 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.906458 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-k4m7f" podUID="cd164967-b99b-47d0-a691-7d8118fa81ce" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.907108 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-8n99q" 
event={"ID":"7844e4a2-e296-46c1-b047-ace0be3d95bb","Type":"ContainerStarted","Data":"15e275c98f362e5efddeba116b4e7c0f190e77630baacb357cab2d3ed1a4f4ec"} Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.907840 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.909032 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 13:18:03 crc kubenswrapper[4897]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Feb 28 13:18:03 crc kubenswrapper[4897]: while [ true ]; Feb 28 13:18:03 crc kubenswrapper[4897]: do Feb 28 13:18:03 crc kubenswrapper[4897]: for f in $(ls /tmp/serviceca); do Feb 28 13:18:03 crc kubenswrapper[4897]: echo $f Feb 28 13:18:03 crc kubenswrapper[4897]: ca_file_path="/tmp/serviceca/${f}" Feb 28 13:18:03 crc kubenswrapper[4897]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Feb 28 13:18:03 crc kubenswrapper[4897]: reg_dir_path="/etc/docker/certs.d/${f}" Feb 28 13:18:03 crc kubenswrapper[4897]: if [ -e "${reg_dir_path}" ]; then Feb 28 13:18:03 crc kubenswrapper[4897]: cp -u $ca_file_path $reg_dir_path/ca.crt Feb 28 13:18:03 crc kubenswrapper[4897]: else Feb 28 13:18:03 crc kubenswrapper[4897]: mkdir $reg_dir_path Feb 28 13:18:03 crc kubenswrapper[4897]: cp $ca_file_path $reg_dir_path/ca.crt Feb 28 13:18:03 crc kubenswrapper[4897]: fi Feb 28 13:18:03 crc kubenswrapper[4897]: done Feb 28 13:18:03 crc kubenswrapper[4897]: for d in $(ls /etc/docker/certs.d); do Feb 28 13:18:03 crc kubenswrapper[4897]: echo $d Feb 28 13:18:03 crc 
kubenswrapper[4897]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Feb 28 13:18:03 crc kubenswrapper[4897]: reg_conf_path="/tmp/serviceca/${dp}" Feb 28 13:18:03 crc kubenswrapper[4897]: if [ ! -e "${reg_conf_path}" ]; then Feb 28 13:18:03 crc kubenswrapper[4897]: rm -rf /etc/docker/certs.d/$d Feb 28 13:18:03 crc kubenswrapper[4897]: fi Feb 28 13:18:03 crc kubenswrapper[4897]: done Feb 28 13:18:03 crc kubenswrapper[4897]: sleep 60 & wait ${!} Feb 28 13:18:03 crc kubenswrapper[4897]: done Feb 28 13:18:03 crc kubenswrapper[4897]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6plnv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-8n99q_openshift-image-registry(7844e4a2-e296-46c1-b047-ace0be3d95bb): CreateContainerConfigError: services have not yet been 
read at least once, cannot construct envvars Feb 28 13:18:03 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.909377 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"07ef1fc2af2356e89dd96f883efbe51aba4961240ae4f9f0074e119c134e9875"} Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.910524 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-8n99q" podUID="7844e4a2-e296-46c1-b047-ace0be3d95bb" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.919097 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.927242 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 13:18:03 crc kubenswrapper[4897]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 28 13:18:03 crc kubenswrapper[4897]: if [[ -f "/env/_master" ]]; then Feb 28 13:18:03 crc kubenswrapper[4897]: set -o allexport Feb 28 13:18:03 crc kubenswrapper[4897]: source "/env/_master" Feb 28 13:18:03 crc kubenswrapper[4897]: set +o allexport Feb 28 13:18:03 crc kubenswrapper[4897]: fi Feb 28 13:18:03 crc kubenswrapper[4897]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Feb 28 13:18:03 crc kubenswrapper[4897]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 28 13:18:03 crc kubenswrapper[4897]: ho_enable="--enable-hybrid-overlay" Feb 28 13:18:03 crc kubenswrapper[4897]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 28 13:18:03 crc kubenswrapper[4897]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 28 13:18:03 crc kubenswrapper[4897]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 28 13:18:03 crc kubenswrapper[4897]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 28 13:18:03 crc kubenswrapper[4897]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 28 13:18:03 crc kubenswrapper[4897]: --webhook-host=127.0.0.1 \ Feb 28 13:18:03 crc kubenswrapper[4897]: --webhook-port=9743 \ Feb 28 13:18:03 crc kubenswrapper[4897]: ${ho_enable} \ Feb 28 13:18:03 crc kubenswrapper[4897]: --enable-interconnect \ Feb 28 13:18:03 crc kubenswrapper[4897]: --disable-approver \ Feb 28 13:18:03 crc kubenswrapper[4897]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 28 13:18:03 crc kubenswrapper[4897]: --wait-for-kubernetes-api=200s \ Feb 28 13:18:03 crc kubenswrapper[4897]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 28 13:18:03 crc kubenswrapper[4897]: --loglevel="${LOGLEVEL}" Feb 28 13:18:03 crc kubenswrapper[4897]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 28 13:18:03 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.927551 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" event={"ID":"0e63af1c-1b83-44b6-9872-2dfefa37d433","Type":"ContainerStarted","Data":"b069924cc749e31828179ef715e6bebc810118832df5c22416be266834d1b77c"} Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 
13:18:03.929463 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-kb42x" event={"ID":"3ce402ca-1bea-4568-85cd-fb4a726f3c92","Type":"ContainerStarted","Data":"f0d301d8ff5c21df970b5c73735a95f7432ba928685df57b9eb6bb6b598e2acb"} Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.932520 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 13:18:03 crc kubenswrapper[4897]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Feb 28 13:18:03 crc kubenswrapper[4897]: apiVersion: v1 Feb 28 13:18:03 crc kubenswrapper[4897]: clusters: Feb 28 13:18:03 crc kubenswrapper[4897]: - cluster: Feb 28 13:18:03 crc kubenswrapper[4897]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Feb 28 13:18:03 crc kubenswrapper[4897]: server: https://api-int.crc.testing:6443 Feb 28 13:18:03 crc kubenswrapper[4897]: name: default-cluster Feb 28 13:18:03 crc kubenswrapper[4897]: contexts: Feb 28 13:18:03 crc kubenswrapper[4897]: - context: Feb 28 13:18:03 crc kubenswrapper[4897]: cluster: default-cluster Feb 28 13:18:03 crc kubenswrapper[4897]: namespace: default Feb 28 13:18:03 crc kubenswrapper[4897]: user: default-auth Feb 28 13:18:03 crc kubenswrapper[4897]: name: default-context Feb 28 13:18:03 crc kubenswrapper[4897]: current-context: default-context Feb 28 13:18:03 crc kubenswrapper[4897]: kind: Config Feb 28 13:18:03 crc kubenswrapper[4897]: preferences: {} Feb 28 13:18:03 crc kubenswrapper[4897]: users: Feb 28 13:18:03 crc kubenswrapper[4897]: - name: default-auth Feb 28 13:18:03 crc kubenswrapper[4897]: user: Feb 28 13:18:03 crc kubenswrapper[4897]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 28 13:18:03 crc kubenswrapper[4897]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 28 
13:18:03 crc kubenswrapper[4897]: EOF Feb 28 13:18:03 crc kubenswrapper[4897]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gwbfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-rjlcm_openshift-ovn-kubernetes(0e63af1c-1b83-44b6-9872-2dfefa37d433): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 28 13:18:03 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.932578 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 13:18:03 crc kubenswrapper[4897]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 28 13:18:03 crc kubenswrapper[4897]: if [[ -f "/env/_master" ]]; then Feb 28 13:18:03 crc kubenswrapper[4897]: set -o allexport Feb 28 13:18:03 crc kubenswrapper[4897]: source "/env/_master" Feb 28 13:18:03 crc kubenswrapper[4897]: set +o allexport Feb 28 13:18:03 crc kubenswrapper[4897]: fi Feb 28 13:18:03 crc kubenswrapper[4897]: Feb 28 13:18:03 crc kubenswrapper[4897]: echo "I$(date "+%m%d %H:%M:%S.%N") - 
network-node-identity - start approver" Feb 28 13:18:03 crc kubenswrapper[4897]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 28 13:18:03 crc kubenswrapper[4897]: --disable-webhook \ Feb 28 13:18:03 crc kubenswrapper[4897]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 28 13:18:03 crc kubenswrapper[4897]: --loglevel="${LOGLEVEL}" Feb 28 13:18:03 crc kubenswrapper[4897]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 28 13:18:03 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.932631 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 13:18:03 crc kubenswrapper[4897]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Feb 28 13:18:03 crc kubenswrapper[4897]: set -uo pipefail Feb 28 13:18:03 crc kubenswrapper[4897]: Feb 28 13:18:03 crc kubenswrapper[4897]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Feb 28 13:18:03 crc kubenswrapper[4897]: Feb 28 13:18:03 crc kubenswrapper[4897]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Feb 28 13:18:03 crc kubenswrapper[4897]: HOSTS_FILE="/etc/hosts" Feb 28 13:18:03 crc kubenswrapper[4897]: TEMP_FILE="/etc/hosts.tmp" Feb 28 13:18:03 crc kubenswrapper[4897]: Feb 28 13:18:03 crc kubenswrapper[4897]: IFS=', ' read -r -a services <<< "${SERVICES}" Feb 28 13:18:03 crc kubenswrapper[4897]: Feb 28 13:18:03 crc kubenswrapper[4897]: # Make a temporary file with the old hosts file's attributes. Feb 28 13:18:03 crc kubenswrapper[4897]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Feb 28 13:18:03 crc kubenswrapper[4897]: echo "Failed to preserve hosts file. Exiting." Feb 28 13:18:03 crc kubenswrapper[4897]: exit 1 Feb 28 13:18:03 crc kubenswrapper[4897]: fi Feb 28 13:18:03 crc kubenswrapper[4897]: Feb 28 13:18:03 crc kubenswrapper[4897]: while true; do Feb 28 13:18:03 crc kubenswrapper[4897]: declare -A svc_ips Feb 28 13:18:03 crc kubenswrapper[4897]: for svc in "${services[@]}"; do Feb 28 13:18:03 crc kubenswrapper[4897]: # Fetch service IP from cluster dns if present. 
We make several tries Feb 28 13:18:03 crc kubenswrapper[4897]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Feb 28 13:18:03 crc kubenswrapper[4897]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Feb 28 13:18:03 crc kubenswrapper[4897]: # support UDP loadbalancers and require reaching DNS through TCP. Feb 28 13:18:03 crc kubenswrapper[4897]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 28 13:18:03 crc kubenswrapper[4897]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 28 13:18:03 crc kubenswrapper[4897]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 28 13:18:03 crc kubenswrapper[4897]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Feb 28 13:18:03 crc kubenswrapper[4897]: for i in ${!cmds[*]} Feb 28 13:18:03 crc kubenswrapper[4897]: do Feb 28 13:18:03 crc kubenswrapper[4897]: ips=($(eval "${cmds[i]}")) Feb 28 13:18:03 crc kubenswrapper[4897]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Feb 28 13:18:03 crc kubenswrapper[4897]: svc_ips["${svc}"]="${ips[@]}" Feb 28 13:18:03 crc kubenswrapper[4897]: break Feb 28 13:18:03 crc kubenswrapper[4897]: fi Feb 28 13:18:03 crc kubenswrapper[4897]: done Feb 28 13:18:03 crc kubenswrapper[4897]: done Feb 28 13:18:03 crc kubenswrapper[4897]: Feb 28 13:18:03 crc kubenswrapper[4897]: # Update /etc/hosts only if we get valid service IPs Feb 28 13:18:03 crc kubenswrapper[4897]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Feb 28 13:18:03 crc kubenswrapper[4897]: # Stale entries could exist in /etc/hosts if the service is deleted Feb 28 13:18:03 crc kubenswrapper[4897]: if [[ -n "${svc_ips[*]-}" ]]; then Feb 28 13:18:03 crc kubenswrapper[4897]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Feb 28 13:18:03 crc kubenswrapper[4897]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Feb 28 13:18:03 crc kubenswrapper[4897]: # Only continue rebuilding the hosts entries if its original content is preserved Feb 28 13:18:03 crc kubenswrapper[4897]: sleep 60 & wait Feb 28 13:18:03 crc kubenswrapper[4897]: continue Feb 28 13:18:03 crc kubenswrapper[4897]: fi Feb 28 13:18:03 crc kubenswrapper[4897]: Feb 28 13:18:03 crc kubenswrapper[4897]: # Append resolver entries for services Feb 28 13:18:03 crc kubenswrapper[4897]: rc=0 Feb 28 13:18:03 crc kubenswrapper[4897]: for svc in "${!svc_ips[@]}"; do Feb 28 13:18:03 crc kubenswrapper[4897]: for ip in ${svc_ips[${svc}]}; do Feb 28 13:18:03 crc kubenswrapper[4897]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Feb 28 13:18:03 crc kubenswrapper[4897]: done Feb 28 13:18:03 crc kubenswrapper[4897]: done Feb 28 13:18:03 crc kubenswrapper[4897]: if [[ $rc -ne 0 ]]; then Feb 28 13:18:03 crc kubenswrapper[4897]: sleep 60 & wait Feb 28 13:18:03 crc kubenswrapper[4897]: continue Feb 28 13:18:03 crc kubenswrapper[4897]: fi Feb 28 13:18:03 crc kubenswrapper[4897]: Feb 28 13:18:03 crc kubenswrapper[4897]: Feb 28 13:18:03 crc kubenswrapper[4897]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Feb 28 13:18:03 crc kubenswrapper[4897]: # Replace /etc/hosts with our modified version if needed Feb 28 13:18:03 crc kubenswrapper[4897]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Feb 28 13:18:03 crc kubenswrapper[4897]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Feb 28 13:18:03 crc kubenswrapper[4897]: fi Feb 28 13:18:03 crc kubenswrapper[4897]: sleep 60 & wait Feb 28 13:18:03 crc kubenswrapper[4897]: unset svc_ips Feb 28 13:18:03 crc kubenswrapper[4897]: done Feb 28 13:18:03 crc kubenswrapper[4897]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fs4zg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-kb42x_openshift-dns(3ce402ca-1bea-4568-85cd-fb4a726f3c92): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 28 13:18:03 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.932857 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerStarted","Data":"8bd5964d3213dcd2a12e93612de343dc20c6e55932b78c90a7ccdca0b1bc00b1"} Feb 28 13:18:03 crc 
kubenswrapper[4897]: I0228 13:18:03.932863 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5tms6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5tms6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.935824 4897 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.938038 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.944337 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" event={"ID":"a273d93c-239a-444c-83cf-2c4ce34fa47b","Type":"ContainerStarted","Data":"d12ce8f08701b9020ec7fc58dbe322499972720ff38c8d2472bd79ad8da6d609"} Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.944378 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d5dd51e377900c23efcbaace171c237d5bcf101c82f70b71801a8a31ac3109f9"} Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.944399 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" event={"ID":"6b8a404d-b143-4bf3-b590-c1b482f38f6f","Type":"ContainerStarted","Data":"2887d8cac7dd08900a17a36dc71a3c100a1cd4c2bd320fd85c994a0559e9b0ee"} Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.945389 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" 
with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-kb42x" podUID="3ce402ca-1bea-4568-85cd-fb4a726f3c92" Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.947980 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m2vkx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
multus-additional-cni-plugins-zj7fc_openshift-multus(6b8a404d-b143-4bf3-b590-c1b482f38f6f): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.948280 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 13:18:03 crc kubenswrapper[4897]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Feb 28 13:18:03 crc kubenswrapper[4897]: set -euo pipefail Feb 28 13:18:03 crc kubenswrapper[4897]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Feb 28 13:18:03 crc kubenswrapper[4897]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Feb 28 13:18:03 crc kubenswrapper[4897]: # As the secret mount is optional we must wait for the files to be present. Feb 28 13:18:03 crc kubenswrapper[4897]: # The service is created in monitor.yaml and this is created in sdn.yaml. Feb 28 13:18:03 crc kubenswrapper[4897]: TS=$(date +%s) Feb 28 13:18:03 crc kubenswrapper[4897]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Feb 28 13:18:03 crc kubenswrapper[4897]: HAS_LOGGED_INFO=0 Feb 28 13:18:03 crc kubenswrapper[4897]: Feb 28 13:18:03 crc kubenswrapper[4897]: log_missing_certs(){ Feb 28 13:18:03 crc kubenswrapper[4897]: CUR_TS=$(date +%s) Feb 28 13:18:03 crc kubenswrapper[4897]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Feb 28 13:18:03 crc kubenswrapper[4897]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Feb 28 13:18:03 crc kubenswrapper[4897]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Feb 28 13:18:03 crc kubenswrapper[4897]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. 
Feb 28 13:18:03 crc kubenswrapper[4897]: HAS_LOGGED_INFO=1 Feb 28 13:18:03 crc kubenswrapper[4897]: fi Feb 28 13:18:03 crc kubenswrapper[4897]: } Feb 28 13:18:03 crc kubenswrapper[4897]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Feb 28 13:18:03 crc kubenswrapper[4897]: log_missing_certs Feb 28 13:18:03 crc kubenswrapper[4897]: sleep 5 Feb 28 13:18:03 crc kubenswrapper[4897]: done Feb 28 13:18:03 crc kubenswrapper[4897]: Feb 28 13:18:03 crc kubenswrapper[4897]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Feb 28 13:18:03 crc kubenswrapper[4897]: exec /usr/bin/kube-rbac-proxy \ Feb 28 13:18:03 crc kubenswrapper[4897]: --logtostderr \ Feb 28 13:18:03 crc kubenswrapper[4897]: --secure-listen-address=:9108 \ Feb 28 13:18:03 crc kubenswrapper[4897]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Feb 28 13:18:03 crc kubenswrapper[4897]: --upstream=http://127.0.0.1:29108/ \ Feb 28 13:18:03 crc kubenswrapper[4897]: --tls-private-key-file=${TLS_PK} \ Feb 28 13:18:03 crc kubenswrapper[4897]: --tls-cert-file=${TLS_CERT} Feb 28 13:18:03 crc kubenswrapper[4897]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ljj4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-bts94_openshift-ovn-kubernetes(a273d93c-239a-444c-83cf-2c4ce34fa47b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 28 13:18:03 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.948507 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.949623 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.949869 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.949896 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.949914 4897 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:03Z","lastTransitionTime":"2026-02-28T13:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.948442 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wh6dl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.950522 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" podUID="6b8a404d-b143-4bf3-b590-c1b482f38f6f" Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.950761 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 13:18:03 crc kubenswrapper[4897]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 28 13:18:03 crc kubenswrapper[4897]: set -o allexport Feb 28 13:18:03 crc kubenswrapper[4897]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 28 13:18:03 crc 
kubenswrapper[4897]: source /etc/kubernetes/apiserver-url.env Feb 28 13:18:03 crc kubenswrapper[4897]: else Feb 28 13:18:03 crc kubenswrapper[4897]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 28 13:18:03 crc kubenswrapper[4897]: exit 1 Feb 28 13:18:03 crc kubenswrapper[4897]: fi Feb 28 13:18:03 crc kubenswrapper[4897]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 28 13:18:03 crc kubenswrapper[4897]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c
69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 28 13:18:03 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.951723 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 13:18:03 crc kubenswrapper[4897]: container 
&Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 28 13:18:03 crc kubenswrapper[4897]: if [[ -f "/env/_master" ]]; then Feb 28 13:18:03 crc kubenswrapper[4897]: set -o allexport Feb 28 13:18:03 crc kubenswrapper[4897]: source "/env/_master" Feb 28 13:18:03 crc kubenswrapper[4897]: set +o allexport Feb 28 13:18:03 crc kubenswrapper[4897]: fi Feb 28 13:18:03 crc kubenswrapper[4897]: Feb 28 13:18:03 crc kubenswrapper[4897]: ovn_v4_join_subnet_opt= Feb 28 13:18:03 crc kubenswrapper[4897]: if [[ "" != "" ]]; then Feb 28 13:18:03 crc kubenswrapper[4897]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Feb 28 13:18:03 crc kubenswrapper[4897]: fi Feb 28 13:18:03 crc kubenswrapper[4897]: ovn_v6_join_subnet_opt= Feb 28 13:18:03 crc kubenswrapper[4897]: if [[ "" != "" ]]; then Feb 28 13:18:03 crc kubenswrapper[4897]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Feb 28 13:18:03 crc kubenswrapper[4897]: fi Feb 28 13:18:03 crc kubenswrapper[4897]: Feb 28 13:18:03 crc kubenswrapper[4897]: ovn_v4_transit_switch_subnet_opt= Feb 28 13:18:03 crc kubenswrapper[4897]: if [[ "" != "" ]]; then Feb 28 13:18:03 crc kubenswrapper[4897]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Feb 28 13:18:03 crc kubenswrapper[4897]: fi Feb 28 13:18:03 crc kubenswrapper[4897]: ovn_v6_transit_switch_subnet_opt= Feb 28 13:18:03 crc kubenswrapper[4897]: if [[ "" != "" ]]; then Feb 28 13:18:03 crc kubenswrapper[4897]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Feb 28 13:18:03 crc kubenswrapper[4897]: fi Feb 28 13:18:03 crc kubenswrapper[4897]: Feb 28 13:18:03 crc kubenswrapper[4897]: dns_name_resolver_enabled_flag= Feb 28 13:18:03 crc kubenswrapper[4897]: if [[ "false" == "true" ]]; then Feb 28 13:18:03 crc kubenswrapper[4897]: 
dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Feb 28 13:18:03 crc kubenswrapper[4897]: fi Feb 28 13:18:03 crc kubenswrapper[4897]: Feb 28 13:18:03 crc kubenswrapper[4897]: persistent_ips_enabled_flag= Feb 28 13:18:03 crc kubenswrapper[4897]: if [[ "true" == "true" ]]; then Feb 28 13:18:03 crc kubenswrapper[4897]: persistent_ips_enabled_flag="--enable-persistent-ips" Feb 28 13:18:03 crc kubenswrapper[4897]: fi Feb 28 13:18:03 crc kubenswrapper[4897]: Feb 28 13:18:03 crc kubenswrapper[4897]: # This is needed so that converting clusters from GA to TP Feb 28 13:18:03 crc kubenswrapper[4897]: # will rollout control plane pods as well Feb 28 13:18:03 crc kubenswrapper[4897]: network_segmentation_enabled_flag= Feb 28 13:18:03 crc kubenswrapper[4897]: multi_network_enabled_flag= Feb 28 13:18:03 crc kubenswrapper[4897]: if [[ "true" == "true" ]]; then Feb 28 13:18:03 crc kubenswrapper[4897]: multi_network_enabled_flag="--enable-multi-network" Feb 28 13:18:03 crc kubenswrapper[4897]: network_segmentation_enabled_flag="--enable-network-segmentation" Feb 28 13:18:03 crc kubenswrapper[4897]: fi Feb 28 13:18:03 crc kubenswrapper[4897]: Feb 28 13:18:03 crc kubenswrapper[4897]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Feb 28 13:18:03 crc kubenswrapper[4897]: exec /usr/bin/ovnkube \ Feb 28 13:18:03 crc kubenswrapper[4897]: --enable-interconnect \ Feb 28 13:18:03 crc kubenswrapper[4897]: --init-cluster-manager "${K8S_NODE}" \ Feb 28 13:18:03 crc kubenswrapper[4897]: --config-file=/run/ovnkube-config/ovnkube.conf \ Feb 28 13:18:03 crc kubenswrapper[4897]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Feb 28 13:18:03 crc kubenswrapper[4897]: --metrics-bind-address "127.0.0.1:29108" \ Feb 28 13:18:03 crc kubenswrapper[4897]: --metrics-enable-pprof \ Feb 28 13:18:03 crc kubenswrapper[4897]: --metrics-enable-config-duration \ Feb 28 13:18:03 crc kubenswrapper[4897]: ${ovn_v4_join_subnet_opt} \ Feb 28 13:18:03 crc 
kubenswrapper[4897]: ${ovn_v6_join_subnet_opt} \ Feb 28 13:18:03 crc kubenswrapper[4897]: ${ovn_v4_transit_switch_subnet_opt} \ Feb 28 13:18:03 crc kubenswrapper[4897]: ${ovn_v6_transit_switch_subnet_opt} \ Feb 28 13:18:03 crc kubenswrapper[4897]: ${dns_name_resolver_enabled_flag} \ Feb 28 13:18:03 crc kubenswrapper[4897]: ${persistent_ips_enabled_flag} \ Feb 28 13:18:03 crc kubenswrapper[4897]: ${multi_network_enabled_flag} \ Feb 28 13:18:03 crc kubenswrapper[4897]: ${network_segmentation_enabled_flag} Feb 28 13:18:03 crc kubenswrapper[4897]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ljj4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-bts94_openshift-ovn-kubernetes(a273d93c-239a-444c-83cf-2c4ce34fa47b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 28 13:18:03 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.951858 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.953939 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml 
--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wh6dl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.954242 4897 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" podUID="a273d93c-239a-444c-83cf-2c4ce34fa47b" Feb 28 13:18:03 crc kubenswrapper[4897]: E0228 13:18:03.955287 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.966976 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:03 crc kubenswrapper[4897]: I0228 13:18:03.985759 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.002761 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.018565 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.042355 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e63af1c-1b83-44b6-9872-2dfefa37d433\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rjlcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.052204 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.052230 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.052239 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.052253 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.052264 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:04Z","lastTransitionTime":"2026-02-28T13:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.052409 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8n99q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844e4a2-e296-46c1-b047-ace0be3d95bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6plnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8n99q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.061654 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb42x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ce402ca-1bea-4568-85cd-fb4a726f3c92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb42x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.073197 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a273d93c-239a-444c-83cf-2c4ce34fa47b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bts94\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.086160 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k4m7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd164967-b99b-47d0-a691-7d8118fa81ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k4m7f\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.101847 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b8a404d-b143-4bf3-b590-c1b482f38f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zj7fc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.114870 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4091e4-3a55-4913-81f3-026a1a97c57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brq22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.127862 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.143085 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k4m7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd164967-b99b-47d0-a691-7d8118fa81ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k4m7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.154076 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8n99q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844e4a2-e296-46c1-b047-ace0be3d95bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6plnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8n99q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.155710 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.155746 4897 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.155759 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.155778 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.155792 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:04Z","lastTransitionTime":"2026-02-28T13:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.166374 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb42x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ce402ca-1bea-4568-85cd-fb4a726f3c92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb42x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.180997 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a273d93c-239a-444c-83cf-2c4ce34fa47b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bts94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.196783 4897 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.217044 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b8a404d-b143-4bf3-b590-c1b482f38f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zj7fc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.232201 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4091e4-3a55-4913-81f3-026a1a97c57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brq22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.246759 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.257484 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.258884 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.259068 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.259200 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.259503 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.259666 4897 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:04Z","lastTransitionTime":"2026-02-28T13:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.266130 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5tms6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5tms6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.276519 4897 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.291934 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.306423 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.322819 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e63af1c-1b83-44b6-9872-2dfefa37d433\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rjlcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.362976 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.363053 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.363079 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.363110 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.363132 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:04Z","lastTransitionTime":"2026-02-28T13:18:04Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.456365 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.456406 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.456405 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.456483 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:18:04 crc kubenswrapper[4897]: E0228 13:18:04.456534 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:18:04 crc kubenswrapper[4897]: E0228 13:18:04.456667 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:18:04 crc kubenswrapper[4897]: E0228 13:18:04.456729 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:18:04 crc kubenswrapper[4897]: E0228 13:18:04.456811 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.463630 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.464915 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.465452 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.465498 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.465515 4897 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.465536 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.465555 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:04Z","lastTransitionTime":"2026-02-28T13:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.467397 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.468621 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.470569 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.471750 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.472920 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 28 
13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.474790 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.478845 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.479925 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.480697 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.482217 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.483023 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.484192 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.484896 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 28 
13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.486010 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.486722 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.487214 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.488378 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.489131 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.489717 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.490931 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.491482 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 28 
13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.492745 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.493248 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.494529 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.495297 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.496398 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.497109 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.497708 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.498070 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.498191 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:18:04 crc kubenswrapper[4897]: E0228 13:18:04.498221 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:18:06.498192572 +0000 UTC m=+100.740513229 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.498342 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.498390 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:04 crc kubenswrapper[4897]: E0228 13:18:04.498347 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 13:18:04 crc kubenswrapper[4897]: E0228 13:18:04.498436 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 13:18:04 crc kubenswrapper[4897]: E0228 13:18:04.498451 4897 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 13:18:04 crc kubenswrapper[4897]: E0228 13:18:04.498472 4897 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 13:18:04 crc kubenswrapper[4897]: E0228 13:18:04.498405 4897 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 13:18:04 crc kubenswrapper[4897]: E0228 13:18:04.498501 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-28 13:18:06.498484531 +0000 UTC m=+100.740805298 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 13:18:04 crc kubenswrapper[4897]: E0228 13:18:04.498717 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 13:18:06.498678887 +0000 UTC m=+100.740999584 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 13:18:04 crc kubenswrapper[4897]: E0228 13:18:04.498790 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 13:18:06.4987739 +0000 UTC m=+100.741094587 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.498853 4897 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.499063 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.502712 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.504432 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.506113 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.509456 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.511735 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.512969 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.515135 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.516609 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.518687 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.520131 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.521619 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.522516 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.523829 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.524924 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.526479 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.527955 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.529473 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.530279 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.531493 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.532291 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.533111 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.534892 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.568551 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.568604 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.568618 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.568640 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.568657 4897 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:04Z","lastTransitionTime":"2026-02-28T13:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.599274 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.599392 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8b95b3e0-28e1-4b26-86a3-bd61c5528b3e-metrics-certs\") pod \"network-metrics-daemon-5tms6\" (UID: \"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\") " pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:18:04 crc kubenswrapper[4897]: E0228 13:18:04.599553 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 13:18:04 crc kubenswrapper[4897]: E0228 13:18:04.599592 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 13:18:04 crc kubenswrapper[4897]: E0228 13:18:04.599602 4897 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 13:18:04 crc kubenswrapper[4897]: E0228 13:18:04.599612 4897 projected.go:194] Error preparing data for projected 
volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 13:18:04 crc kubenswrapper[4897]: E0228 13:18:04.599712 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-28 13:18:06.599677756 +0000 UTC m=+100.841998453 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 13:18:04 crc kubenswrapper[4897]: E0228 13:18:04.599747 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b95b3e0-28e1-4b26-86a3-bd61c5528b3e-metrics-certs podName:8b95b3e0-28e1-4b26-86a3-bd61c5528b3e nodeName:}" failed. No retries permitted until 2026-02-28 13:18:06.599731358 +0000 UTC m=+100.842052045 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8b95b3e0-28e1-4b26-86a3-bd61c5528b3e-metrics-certs") pod "network-metrics-daemon-5tms6" (UID: "8b95b3e0-28e1-4b26-86a3-bd61c5528b3e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.671954 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.672032 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.672057 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.672091 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.672114 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:04Z","lastTransitionTime":"2026-02-28T13:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.775842 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.775912 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.775936 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.775965 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.775982 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:04Z","lastTransitionTime":"2026-02-28T13:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.879529 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.879681 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.879704 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.879769 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.879802 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:04Z","lastTransitionTime":"2026-02-28T13:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.983533 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.983606 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.983623 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.983650 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:04 crc kubenswrapper[4897]: I0228 13:18:04.983668 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:04Z","lastTransitionTime":"2026-02-28T13:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.086825 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.086899 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.086919 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.086950 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.086982 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:05Z","lastTransitionTime":"2026-02-28T13:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.190213 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.190273 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.190293 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.190344 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.190366 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:05Z","lastTransitionTime":"2026-02-28T13:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.294041 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.294090 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.294109 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.294133 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.294152 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:05Z","lastTransitionTime":"2026-02-28T13:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.397245 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.397334 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.397349 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.397369 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.397386 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:05Z","lastTransitionTime":"2026-02-28T13:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.499707 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.499766 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.499783 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.499810 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.499828 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:05Z","lastTransitionTime":"2026-02-28T13:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.602411 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.602461 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.602474 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.602494 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.602510 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:05Z","lastTransitionTime":"2026-02-28T13:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.636101 4897 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.705928 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.705985 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.706005 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.706030 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.706047 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:05Z","lastTransitionTime":"2026-02-28T13:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.809204 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.809258 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.809281 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.809346 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.809388 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:05Z","lastTransitionTime":"2026-02-28T13:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.912113 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.912165 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.912202 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.912229 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:05 crc kubenswrapper[4897]: I0228 13:18:05.912250 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:05Z","lastTransitionTime":"2026-02-28T13:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.015776 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.015830 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.015846 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.015875 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.015892 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:06Z","lastTransitionTime":"2026-02-28T13:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.126390 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.126528 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.126549 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.126575 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.126594 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:06Z","lastTransitionTime":"2026-02-28T13:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.230488 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.230595 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.230621 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.230653 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.230704 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:06Z","lastTransitionTime":"2026-02-28T13:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.334618 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.334701 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.334723 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.334785 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.334806 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:06Z","lastTransitionTime":"2026-02-28T13:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.437347 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.437419 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.437447 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.437479 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.437501 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:06Z","lastTransitionTime":"2026-02-28T13:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.456025 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.456118 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.456228 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:18:06 crc kubenswrapper[4897]: E0228 13:18:06.456229 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.456234 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:06 crc kubenswrapper[4897]: E0228 13:18:06.456852 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:18:06 crc kubenswrapper[4897]: E0228 13:18:06.456942 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:18:06 crc kubenswrapper[4897]: E0228 13:18:06.457130 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.474969 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.482009 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.495020 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b8a404d-b143-4bf3-b590-c1b482f38f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zj7fc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.509809 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4091e4-3a55-4913-81f3-026a1a97c57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brq22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.524728 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.524875 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.524959 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.525003 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:06 crc kubenswrapper[4897]: E0228 13:18:06.525115 4897 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 13:18:06 crc kubenswrapper[4897]: E0228 13:18:06.525193 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf 
podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 13:18:10.525170626 +0000 UTC m=+104.767491323 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 13:18:06 crc kubenswrapper[4897]: E0228 13:18:06.525624 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 13:18:06 crc kubenswrapper[4897]: E0228 13:18:06.525675 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 13:18:06 crc kubenswrapper[4897]: E0228 13:18:06.525693 4897 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 13:18:06 crc kubenswrapper[4897]: E0228 13:18:06.525729 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:18:10.525687271 +0000 UTC m=+104.768007968 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:18:06 crc kubenswrapper[4897]: E0228 13:18:06.525766 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-28 13:18:10.525752003 +0000 UTC m=+104.768072700 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 13:18:06 crc kubenswrapper[4897]: E0228 13:18:06.525842 4897 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 13:18:06 crc kubenswrapper[4897]: E0228 13:18:06.525941 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 13:18:10.525914408 +0000 UTC m=+104.768235265 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.527301 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.542207 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.542260 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.542278 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.542302 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.542359 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:06Z","lastTransitionTime":"2026-02-28T13:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.544223 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.558024 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5tms6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5tms6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.574757 4897 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.590527 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.604119 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.625820 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.625886 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8b95b3e0-28e1-4b26-86a3-bd61c5528b3e-metrics-certs\") pod \"network-metrics-daemon-5tms6\" (UID: \"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\") " pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:18:06 crc kubenswrapper[4897]: E0228 13:18:06.626035 4897 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 13:18:06 crc kubenswrapper[4897]: E0228 13:18:06.626046 4897 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 13:18:06 crc kubenswrapper[4897]: E0228 13:18:06.626060 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 13:18:06 crc kubenswrapper[4897]: E0228 13:18:06.626075 4897 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 13:18:06 crc kubenswrapper[4897]: E0228 13:18:06.626123 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b95b3e0-28e1-4b26-86a3-bd61c5528b3e-metrics-certs podName:8b95b3e0-28e1-4b26-86a3-bd61c5528b3e nodeName:}" failed. No retries permitted until 2026-02-28 13:18:10.626099513 +0000 UTC m=+104.868420200 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8b95b3e0-28e1-4b26-86a3-bd61c5528b3e-metrics-certs") pod "network-metrics-daemon-5tms6" (UID: "8b95b3e0-28e1-4b26-86a3-bd61c5528b3e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 13:18:06 crc kubenswrapper[4897]: E0228 13:18:06.626151 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-28 13:18:10.626137204 +0000 UTC m=+104.868457901 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.629980 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e63af1c-1b83-44b6-9872-2dfefa37d433\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2a
f0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rjlcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.645211 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.645377 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.645403 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.645498 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.645634 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:06Z","lastTransitionTime":"2026-02-28T13:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.647300 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k4m7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd164967-b99b-47d0-a691-7d8118fa81ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k4m7f\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.662236 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8n99q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844e4a2-e296-46c1-b047-ace0be3d95bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6plnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8n99q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.675194 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb42x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ce402ca-1bea-4568-85cd-fb4a726f3c92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb42x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.688379 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a273d93c-239a-444c-83cf-2c4ce34fa47b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bts94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.748784 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 
13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.748853 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.748873 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.748899 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.748918 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:06Z","lastTransitionTime":"2026-02-28T13:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.851400 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.851471 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.851493 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.851523 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.851544 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:06Z","lastTransitionTime":"2026-02-28T13:18:06Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.954896 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.954977 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.955004 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.955087 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:06 crc kubenswrapper[4897]: I0228 13:18:06.955116 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:06Z","lastTransitionTime":"2026-02-28T13:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.059372 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.059430 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.059448 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.059474 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.059496 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:07Z","lastTransitionTime":"2026-02-28T13:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.162616 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.162679 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.162701 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.162736 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.162772 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:07Z","lastTransitionTime":"2026-02-28T13:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.265673 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.265721 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.265739 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.265763 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.265781 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:07Z","lastTransitionTime":"2026-02-28T13:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.369596 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.369691 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.369710 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.370247 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.370652 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:07Z","lastTransitionTime":"2026-02-28T13:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.473408 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.473470 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.473489 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.473513 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.473529 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:07Z","lastTransitionTime":"2026-02-28T13:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.569502 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.569545 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.569562 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.569585 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.569605 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:07Z","lastTransitionTime":"2026-02-28T13:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:07 crc kubenswrapper[4897]: E0228 13:18:07.583704 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.588589 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.588631 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.588648 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.588675 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.588695 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:07Z","lastTransitionTime":"2026-02-28T13:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:07 crc kubenswrapper[4897]: E0228 13:18:07.603742 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.608282 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.608370 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.608392 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.608416 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.608436 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:07Z","lastTransitionTime":"2026-02-28T13:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:07 crc kubenswrapper[4897]: E0228 13:18:07.623479 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.627119 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.627163 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.627176 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.627195 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.627209 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:07Z","lastTransitionTime":"2026-02-28T13:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:07 crc kubenswrapper[4897]: E0228 13:18:07.638220 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.642290 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.642343 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.642355 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.642375 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.642387 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:07Z","lastTransitionTime":"2026-02-28T13:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:07 crc kubenswrapper[4897]: E0228 13:18:07.655848 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:07 crc kubenswrapper[4897]: E0228 13:18:07.656175 4897 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.657874 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.657910 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.657923 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.657938 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.657950 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:07Z","lastTransitionTime":"2026-02-28T13:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.761188 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.761255 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.761273 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.761296 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.761389 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:07Z","lastTransitionTime":"2026-02-28T13:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.863813 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.863879 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.863897 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.863927 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.863945 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:07Z","lastTransitionTime":"2026-02-28T13:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.966216 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.966272 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.966290 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.966342 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:07 crc kubenswrapper[4897]: I0228 13:18:07.966361 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:07Z","lastTransitionTime":"2026-02-28T13:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.068500 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.068536 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.068544 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.068557 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.068566 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:08Z","lastTransitionTime":"2026-02-28T13:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.170773 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.170816 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.170833 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.170853 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.170869 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:08Z","lastTransitionTime":"2026-02-28T13:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.273021 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.273425 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.273582 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.273773 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.273806 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:08Z","lastTransitionTime":"2026-02-28T13:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.377368 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.377459 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.377478 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.377534 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.377554 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:08Z","lastTransitionTime":"2026-02-28T13:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.455602 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.455651 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.455626 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:08 crc kubenswrapper[4897]: E0228 13:18:08.455838 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.455897 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:18:08 crc kubenswrapper[4897]: E0228 13:18:08.456003 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:18:08 crc kubenswrapper[4897]: E0228 13:18:08.456094 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:18:08 crc kubenswrapper[4897]: E0228 13:18:08.456504 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.470564 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.472600 4897 scope.go:117] "RemoveContainer" containerID="3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a" Feb 28 13:18:08 crc kubenswrapper[4897]: E0228 13:18:08.472950 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.480780 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.481070 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.481246 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.481468 4897 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.481622 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:08Z","lastTransitionTime":"2026-02-28T13:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.584405 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.584454 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.584471 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.584497 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.584513 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:08Z","lastTransitionTime":"2026-02-28T13:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.686817 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.686862 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.686878 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.686897 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.686912 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:08Z","lastTransitionTime":"2026-02-28T13:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.789624 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.789660 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.789673 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.789688 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.789700 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:08Z","lastTransitionTime":"2026-02-28T13:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.891806 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.891850 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.891867 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.891887 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.891904 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:08Z","lastTransitionTime":"2026-02-28T13:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.955071 4897 scope.go:117] "RemoveContainer" containerID="3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a" Feb 28 13:18:08 crc kubenswrapper[4897]: E0228 13:18:08.955260 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.994679 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.994741 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.994762 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.994789 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:08 crc kubenswrapper[4897]: I0228 13:18:08.994809 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:08Z","lastTransitionTime":"2026-02-28T13:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.097819 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.097871 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.097888 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.097908 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.097924 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:09Z","lastTransitionTime":"2026-02-28T13:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.200408 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.200497 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.200512 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.200531 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.200579 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:09Z","lastTransitionTime":"2026-02-28T13:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.303347 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.303404 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.303429 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.303453 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.303469 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:09Z","lastTransitionTime":"2026-02-28T13:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.406824 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.406890 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.406907 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.406931 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.406949 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:09Z","lastTransitionTime":"2026-02-28T13:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.510022 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.510097 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.510121 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.510152 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.510174 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:09Z","lastTransitionTime":"2026-02-28T13:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.613473 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.613576 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.613624 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.613650 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.613669 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:09Z","lastTransitionTime":"2026-02-28T13:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.716091 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.716207 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.716265 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.716300 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.716403 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:09Z","lastTransitionTime":"2026-02-28T13:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.819898 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.819950 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.819966 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.819992 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.820008 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:09Z","lastTransitionTime":"2026-02-28T13:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.923039 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.923080 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.923092 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.923110 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:09 crc kubenswrapper[4897]: I0228 13:18:09.923122 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:09Z","lastTransitionTime":"2026-02-28T13:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.026877 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.026946 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.026970 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.026999 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.027021 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:10Z","lastTransitionTime":"2026-02-28T13:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.130449 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.130554 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.130576 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.130607 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.130626 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:10Z","lastTransitionTime":"2026-02-28T13:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.234177 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.234237 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.234256 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.234280 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.234298 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:10Z","lastTransitionTime":"2026-02-28T13:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.338461 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.338521 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.338546 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.338571 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.338589 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:10Z","lastTransitionTime":"2026-02-28T13:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.441433 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.441496 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.441514 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.441548 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.441568 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:10Z","lastTransitionTime":"2026-02-28T13:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.455893 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.455987 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.455995 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.455907 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:18:10 crc kubenswrapper[4897]: E0228 13:18:10.456054 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:18:10 crc kubenswrapper[4897]: E0228 13:18:10.456137 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:18:10 crc kubenswrapper[4897]: E0228 13:18:10.456259 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:18:10 crc kubenswrapper[4897]: E0228 13:18:10.456379 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.546048 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.546142 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.546160 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.546184 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.546256 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:10Z","lastTransitionTime":"2026-02-28T13:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.579846 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.580075 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:18:10 crc kubenswrapper[4897]: E0228 13:18:10.580120 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:18:18.580059713 +0000 UTC m=+112.822380410 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:18:10 crc kubenswrapper[4897]: E0228 13:18:10.580301 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 13:18:10 crc kubenswrapper[4897]: E0228 13:18:10.580404 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 13:18:10 crc kubenswrapper[4897]: E0228 13:18:10.580433 4897 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 13:18:10 crc kubenswrapper[4897]: E0228 13:18:10.580524 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-28 13:18:18.580493896 +0000 UTC m=+112.822814623 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.580387 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:10 crc kubenswrapper[4897]: E0228 13:18:10.580674 4897 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 13:18:10 crc kubenswrapper[4897]: E0228 13:18:10.580796 4897 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.580670 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:10 crc kubenswrapper[4897]: E0228 13:18:10.580887 4897 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 13:18:18.580822325 +0000 UTC m=+112.823143032 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 13:18:10 crc kubenswrapper[4897]: E0228 13:18:10.580927 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 13:18:18.580909448 +0000 UTC m=+112.823230235 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.648909 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.648978 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.648997 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.649023 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.649042 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:10Z","lastTransitionTime":"2026-02-28T13:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.682140 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.682203 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8b95b3e0-28e1-4b26-86a3-bd61c5528b3e-metrics-certs\") pod \"network-metrics-daemon-5tms6\" (UID: \"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\") " pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:18:10 crc kubenswrapper[4897]: E0228 13:18:10.682419 4897 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 13:18:10 crc kubenswrapper[4897]: E0228 13:18:10.682450 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 13:18:10 crc kubenswrapper[4897]: E0228 13:18:10.682494 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 13:18:10 crc kubenswrapper[4897]: E0228 13:18:10.682504 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b95b3e0-28e1-4b26-86a3-bd61c5528b3e-metrics-certs podName:8b95b3e0-28e1-4b26-86a3-bd61c5528b3e nodeName:}" failed. No retries permitted until 2026-02-28 13:18:18.682481714 +0000 UTC m=+112.924802401 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8b95b3e0-28e1-4b26-86a3-bd61c5528b3e-metrics-certs") pod "network-metrics-daemon-5tms6" (UID: "8b95b3e0-28e1-4b26-86a3-bd61c5528b3e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 13:18:10 crc kubenswrapper[4897]: E0228 13:18:10.682515 4897 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 13:18:10 crc kubenswrapper[4897]: E0228 13:18:10.682602 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-28 13:18:18.682577917 +0000 UTC m=+112.924898614 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.751738 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.751823 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.751845 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.751871 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.751895 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:10Z","lastTransitionTime":"2026-02-28T13:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.856007 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.856097 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.856116 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.856141 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.856189 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:10Z","lastTransitionTime":"2026-02-28T13:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.959658 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.959713 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.959734 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.959758 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:10 crc kubenswrapper[4897]: I0228 13:18:10.959776 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:10Z","lastTransitionTime":"2026-02-28T13:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.063285 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.063487 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.063527 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.063596 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.063613 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:11Z","lastTransitionTime":"2026-02-28T13:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.167106 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.167172 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.167184 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.167219 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.167233 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:11Z","lastTransitionTime":"2026-02-28T13:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.270368 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.270467 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.270490 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.270543 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.270560 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:11Z","lastTransitionTime":"2026-02-28T13:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.373956 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.374054 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.374072 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.374128 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.374144 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:11Z","lastTransitionTime":"2026-02-28T13:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.476844 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.476913 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.476938 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.476967 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.476990 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:11Z","lastTransitionTime":"2026-02-28T13:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.579984 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.580041 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.580083 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.580108 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.580126 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:11Z","lastTransitionTime":"2026-02-28T13:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.683999 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.684049 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.684065 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.684086 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.684104 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:11Z","lastTransitionTime":"2026-02-28T13:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.788037 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.788091 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.788108 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.788131 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.788147 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:11Z","lastTransitionTime":"2026-02-28T13:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.891011 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.891085 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.891111 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.891141 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.891164 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:11Z","lastTransitionTime":"2026-02-28T13:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.994292 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.994390 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.994409 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.994432 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:11 crc kubenswrapper[4897]: I0228 13:18:11.994450 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:11Z","lastTransitionTime":"2026-02-28T13:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.096822 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.096879 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.096897 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.096922 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.096939 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:12Z","lastTransitionTime":"2026-02-28T13:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.199791 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.199848 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.199864 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.199890 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.199908 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:12Z","lastTransitionTime":"2026-02-28T13:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.303418 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.303485 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.303502 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.303525 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.303541 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:12Z","lastTransitionTime":"2026-02-28T13:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.406804 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.406860 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.406877 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.406904 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.406926 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:12Z","lastTransitionTime":"2026-02-28T13:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.456401 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.456477 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.456426 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.456493 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:18:12 crc kubenswrapper[4897]: E0228 13:18:12.456674 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:18:12 crc kubenswrapper[4897]: E0228 13:18:12.456866 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:18:12 crc kubenswrapper[4897]: E0228 13:18:12.457132 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:18:12 crc kubenswrapper[4897]: E0228 13:18:12.457289 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.509691 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.509785 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.509804 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.509827 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.509875 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:12Z","lastTransitionTime":"2026-02-28T13:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.613508 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.613587 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.613614 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.613684 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.613706 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:12Z","lastTransitionTime":"2026-02-28T13:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.716743 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.716795 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.716817 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.716846 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.716863 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:12Z","lastTransitionTime":"2026-02-28T13:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.820014 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.820063 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.820083 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.820104 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.820120 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:12Z","lastTransitionTime":"2026-02-28T13:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.923066 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.923124 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.923141 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.923165 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:12 crc kubenswrapper[4897]: I0228 13:18:12.923182 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:12Z","lastTransitionTime":"2026-02-28T13:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.025724 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.025787 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.025804 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.025865 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.025887 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:13Z","lastTransitionTime":"2026-02-28T13:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.128934 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.129006 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.129028 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.129055 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.129080 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:13Z","lastTransitionTime":"2026-02-28T13:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.232526 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.232597 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.232616 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.232645 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.232663 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:13Z","lastTransitionTime":"2026-02-28T13:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.335143 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.335192 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.335207 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.335227 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.335240 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:13Z","lastTransitionTime":"2026-02-28T13:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.437948 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.437992 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.438035 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.438054 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.438068 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:13Z","lastTransitionTime":"2026-02-28T13:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.540674 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.540718 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.540733 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.540751 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.540765 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:13Z","lastTransitionTime":"2026-02-28T13:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.642972 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.643030 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.643065 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.643094 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.643114 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:13Z","lastTransitionTime":"2026-02-28T13:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.745567 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.745610 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.745623 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.745640 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.745653 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:13Z","lastTransitionTime":"2026-02-28T13:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.848873 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.848949 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.848969 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.848997 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.849021 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:13Z","lastTransitionTime":"2026-02-28T13:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.952366 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.952416 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.952432 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.952454 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:13 crc kubenswrapper[4897]: I0228 13:18:13.952472 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:13Z","lastTransitionTime":"2026-02-28T13:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.055090 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.055188 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.055215 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.055281 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.055371 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:14Z","lastTransitionTime":"2026-02-28T13:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.159124 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.159256 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.159281 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.159364 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.159391 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:14Z","lastTransitionTime":"2026-02-28T13:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.262169 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.262231 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.262243 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.262277 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.262291 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:14Z","lastTransitionTime":"2026-02-28T13:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.364710 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.364787 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.364819 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.364845 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.364863 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:14Z","lastTransitionTime":"2026-02-28T13:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.455608 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.455680 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:18:14 crc kubenswrapper[4897]: E0228 13:18:14.455735 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:18:14 crc kubenswrapper[4897]: E0228 13:18:14.455818 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.455965 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:18:14 crc kubenswrapper[4897]: E0228 13:18:14.456013 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.456109 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:14 crc kubenswrapper[4897]: E0228 13:18:14.456178 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.467524 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.467570 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.467645 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.467669 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.467685 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:14Z","lastTransitionTime":"2026-02-28T13:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.570987 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.571081 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.571109 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.571137 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.571157 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:14Z","lastTransitionTime":"2026-02-28T13:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.674963 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.675011 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.675022 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.675163 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.675179 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:14Z","lastTransitionTime":"2026-02-28T13:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.778013 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.778059 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.778076 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.778099 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.778118 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:14Z","lastTransitionTime":"2026-02-28T13:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.881355 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.881414 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.881431 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.881455 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.881473 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:14Z","lastTransitionTime":"2026-02-28T13:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.984802 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.984872 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.984890 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.984917 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:14 crc kubenswrapper[4897]: I0228 13:18:14.984934 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:14Z","lastTransitionTime":"2026-02-28T13:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.087376 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.087440 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.087457 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.087482 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.087503 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:15Z","lastTransitionTime":"2026-02-28T13:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.190639 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.190691 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.190704 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.190780 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.190798 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:15Z","lastTransitionTime":"2026-02-28T13:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.294231 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.294342 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.294362 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.294387 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.294404 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:15Z","lastTransitionTime":"2026-02-28T13:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.397385 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.397458 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.397480 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.397510 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.397531 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:15Z","lastTransitionTime":"2026-02-28T13:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.501424 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.501506 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.501530 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.501567 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.501591 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:15Z","lastTransitionTime":"2026-02-28T13:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.604251 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.604297 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.604346 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.604371 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.604392 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:15Z","lastTransitionTime":"2026-02-28T13:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.706734 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.706814 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.706843 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.706867 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.706885 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:15Z","lastTransitionTime":"2026-02-28T13:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.810470 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.810529 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.810546 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.810568 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.810585 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:15Z","lastTransitionTime":"2026-02-28T13:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.913544 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.913597 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.913613 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.913636 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.913655 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:15Z","lastTransitionTime":"2026-02-28T13:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.977059 4897 generic.go:334] "Generic (PLEG): container finished" podID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerID="1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500" exitCode=0 Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.977142 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" event={"ID":"0e63af1c-1b83-44b6-9872-2dfefa37d433","Type":"ContainerDied","Data":"1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500"} Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.982811 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" event={"ID":"a273d93c-239a-444c-83cf-2c4ce34fa47b","Type":"ContainerStarted","Data":"80dbc6ed9af9a82df0ca46be021ef902caa69663cbb2028484d3af6e06ecab15"} Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.983285 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" event={"ID":"a273d93c-239a-444c-83cf-2c4ce34fa47b","Type":"ContainerStarted","Data":"f8228c810c9a5e735daa520594b848b640ba628ceff9f9c7e1c1e83e8f0298b8"} Feb 28 13:18:15 crc kubenswrapper[4897]: I0228 13:18:15.996024 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.005879 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.015224 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5tms6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5tms6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.016697 4897 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.016935 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.017097 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.017356 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.017508 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:16Z","lastTransitionTime":"2026-02-28T13:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.032742 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23b07e53-7e49-4e08-b346-8cd575b7f2ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e9a3dd1f8ee7245b4185ab8828852983776a3725c0e0202031f019e4d1cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd666d737612906cd221c611576137054485e66782603973709d756be628e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c564d2a6ac76545e63bc8c1cf09ebbad28f954f1986e6a6a272c23bda9207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16305ee570c789c9b80ebffbea166bdb4fc0aa0ccddb48f8733c196c5bbc83da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0328bcb49b08da8ee30c55848f414ab75bd375c851b0e6e9faecfb47fe9d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.046956 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c76969-d16d-46f5-b96a-922ebfb0a5da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:17:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 13:17:35.239122 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 13:17:35.239232 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 13:17:35.239893 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2443145259/tls.crt::/tmp/serving-cert-2443145259/tls.key\\\\\\\"\\\\nI0228 13:17:35.541500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 13:17:35.544872 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 13:17:35.544912 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 13:17:35.544959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 13:17:35.544976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 13:17:35.551581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 13:17:35.551618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 13:17:35.551627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 13:17:35.551657 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 13:17:35.551664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 13:17:35.551670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 13:17:35.554143 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:17:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.063371 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.073546 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.083284 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.099643 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e63af1c-1b83-44b6-9872-2dfefa37d433\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rjlcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.116241 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k4m7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd164967-b99b-47d0-a691-7d8118fa81ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k4m7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.121257 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.121305 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.121392 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.121418 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.121437 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:16Z","lastTransitionTime":"2026-02-28T13:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.125748 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8n99q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844e4a2-e296-46c1-b047-ace0be3d95bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6plnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8n99q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.135330 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb42x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ce402ca-1bea-4568-85cd-fb4a726f3c92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb42x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.144415 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a273d93c-239a-444c-83cf-2c4ce34fa47b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bts94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.158485 4897 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.174431 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b8a404d-b143-4bf3-b590-c1b482f38f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zj7fc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.185460 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4091e4-3a55-4913-81f3-026a1a97c57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brq22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.197431 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k4m7f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd164967-b99b-47d0-a691-7d8118fa81ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k4m7f\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.208961 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8n99q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844e4a2-e296-46c1-b047-ace0be3d95bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6plnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8n99q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.220729 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb42x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ce402ca-1bea-4568-85cd-fb4a726f3c92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb42x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.231852 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.231898 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.231911 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.231930 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.231944 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:16Z","lastTransitionTime":"2026-02-28T13:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.235267 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a273d93c-239a-444c-83cf-2c4ce34fa47b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8228c810c9a5e735daa520594b848b640ba628ceff9f9c7e1c1e83e8f0298b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80dbc6ed9af9a82df0ca46be021ef902caa69
663cbb2028484d3af6e06ecab15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bts94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.250469 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.269867 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b8a404d-b143-4bf3-b590-c1b482f38f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zj7fc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.280502 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4091e4-3a55-4913-81f3-026a1a97c57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brq22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.296007 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.313015 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.326473 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5tms6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5tms6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.334984 4897 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.335042 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.335066 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.335095 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.335118 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:16Z","lastTransitionTime":"2026-02-28T13:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.349021 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c76969-d16d-46f5-b96a-922ebfb0a5da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:17:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 13:17:35.239122 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 13:17:35.239232 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 13:17:35.239893 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2443145259/tls.crt::/tmp/serving-cert-2443145259/tls.key\\\\\\\"\\\\nI0228 13:17:35.541500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 13:17:35.544872 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 13:17:35.544912 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 13:17:35.544959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 13:17:35.544976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 13:17:35.551581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 13:17:35.551618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 13:17:35.551627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 13:17:35.551657 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 13:17:35.551664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 13:17:35.551670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 13:17:35.554143 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:17:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.374758 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.401098 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.418696 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.437907 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.437941 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.437953 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.437969 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.437981 4897 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:16Z","lastTransitionTime":"2026-02-28T13:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.441570 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e63af1c-1b83-44b6-9872-2dfefa37d433\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rjlcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.455435 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.455483 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.455446 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:18:16 crc kubenswrapper[4897]: E0228 13:18:16.455571 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.455446 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:18:16 crc kubenswrapper[4897]: E0228 13:18:16.455721 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:18:16 crc kubenswrapper[4897]: E0228 13:18:16.455881 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:18:16 crc kubenswrapper[4897]: E0228 13:18:16.455968 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.466008 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23b07e53-7e49-4e08-b346-8cd575b7f2ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e9a3dd1f8ee7245b4185ab8828852983776a3725c0e0202031f019e4d1cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd666d737612906cd221c611576137054485e66782603973709d756be628e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c564d2a6ac76545e63bc8c1cf09ebbad28f954f1986e6a6a272c23bda9207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16305ee570c789c9b80ebffbea166bdb4fc0aa0ccddb48f8733c196c5bbc83da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a
93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0328bcb49b08da8ee30c55848f414ab75bd375c851b0e6e9faecfb47fe9d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:29Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.474266 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.483148 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b8a404d-b143-4bf3-b590-c1b482f38f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zj7fc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.491915 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4091e4-3a55-4913-81f3-026a1a97c57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brq22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.501681 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.511847 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.520588 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5tms6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5tms6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.530615 4897 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c76969-d16d-46f5-b96a-922ebfb0a5da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:17:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 13:17:35.239122 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 13:17:35.239232 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 13:17:35.239893 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2443145259/tls.crt::/tmp/serving-cert-2443145259/tls.key\\\\\\\"\\\\nI0228 13:17:35.541500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 13:17:35.544872 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 13:17:35.544912 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 13:17:35.544959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 13:17:35.544976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 13:17:35.551581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 13:17:35.551618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 13:17:35.551627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551648 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 13:17:35.551657 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 13:17:35.551664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 13:17:35.551670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 13:17:35.554143 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:17:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d241
41dad4ceeaca0544cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.539947 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.539985 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.539997 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.540017 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 
13:18:16.540028 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:16Z","lastTransitionTime":"2026-02-28T13:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.546609 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.568536 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.584922 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.604941 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e63af1c-1b83-44b6-9872-2dfefa37d433\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rjlcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.625999 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23b07e53-7e49-4e08-b346-8cd575b7f2ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e9a3dd1f8ee7245b4185ab8828852983776a3725c0e0202031f019e4d1cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6
a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd666d737612906cd221c611576137054485e66782603973709d756be628e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c564d2a6ac76545e63bc8c1cf09ebbad28f954f1986e6a6a272c23bda9207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16305ee570c789c9b80ebffbea166bdb4fc0aa0ccddb48f8733c196c5bbc83da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0328bcb49b08da8ee30c55848f414ab75bd375c851b0e6e9faecfb47fe9d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP
\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90
092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.642543 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.642577 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.642590 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.642605 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 
13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.642615 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:16Z","lastTransitionTime":"2026-02-28T13:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.645147 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k4m7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd164967-b99b-47d0-a691-7d8118fa81ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k4m7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.659114 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8n99q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844e4a2-e296-46c1-b047-ace0be3d95bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6plnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8n99q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.666604 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb42x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ce402ca-1bea-4568-85cd-fb4a726f3c92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb42x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.676111 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a273d93c-239a-444c-83cf-2c4ce34fa47b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8228c810c9a5e735daa520594b848b640ba628ceff9f9c7e1c1e83e8f0298b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80dbc6ed9af9a82df0ca46be021ef902caa69
663cbb2028484d3af6e06ecab15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bts94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.744731 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.744784 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.744801 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:16 crc 
kubenswrapper[4897]: I0228 13:18:16.744825 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.744841 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:16Z","lastTransitionTime":"2026-02-28T13:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.848196 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.848261 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.848285 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.848320 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.848383 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:16Z","lastTransitionTime":"2026-02-28T13:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.951350 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.951401 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.951423 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.951451 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.951469 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:16Z","lastTransitionTime":"2026-02-28T13:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.987717 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-kb42x" event={"ID":"3ce402ca-1bea-4568-85cd-fb4a726f3c92","Type":"ContainerStarted","Data":"dc184d2423d3fcb2e76df425e0dc9f0993a521f07df2dba1a89a20cb473b0fa4"} Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.993446 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" event={"ID":"0e63af1c-1b83-44b6-9872-2dfefa37d433","Type":"ContainerStarted","Data":"2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf"} Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.993503 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" event={"ID":"0e63af1c-1b83-44b6-9872-2dfefa37d433","Type":"ContainerStarted","Data":"31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb"} Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.993524 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" event={"ID":"0e63af1c-1b83-44b6-9872-2dfefa37d433","Type":"ContainerStarted","Data":"e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420"} Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.993542 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" event={"ID":"0e63af1c-1b83-44b6-9872-2dfefa37d433","Type":"ContainerStarted","Data":"417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71"} Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 13:18:16.993562 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" event={"ID":"0e63af1c-1b83-44b6-9872-2dfefa37d433","Type":"ContainerStarted","Data":"5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48"} Feb 28 13:18:16 crc kubenswrapper[4897]: I0228 
13:18:16.993579 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" event={"ID":"0e63af1c-1b83-44b6-9872-2dfefa37d433","Type":"ContainerStarted","Data":"b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256"} Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.007388 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.025527 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.034590 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5tms6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5tms6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.043441 4897 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.054622 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.054690 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.054714 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.054740 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.054759 4897 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:17Z","lastTransitionTime":"2026-02-28T13:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.059173 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e63af1c-1b83-44b6-9872-2dfefa37d433\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rjlcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.086784 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23b07e53-7e49-4e08-b346-8cd575b7f2ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e9a3dd1f8ee7245b4185ab8828852983776a3725c0e0202031f019e4d1cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6
a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd666d737612906cd221c611576137054485e66782603973709d756be628e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c564d2a6ac76545e63bc8c1cf09ebbad28f954f1986e6a6a272c23bda9207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16305ee570c789c9b80ebffbea166bdb4fc0aa0ccddb48f8733c196c5bbc83da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0328bcb49b08da8ee30c55848f414ab75bd375c851b0e6e9faecfb47fe9d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP
\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90
092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.104962 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c76969-d16d-46f5-b96a-922ebfb0a5da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:17:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 13:17:35.239122 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 13:17:35.239232 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 13:17:35.239893 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2443145259/tls.crt::/tmp/serving-cert-2443145259/tls.key\\\\\\\"\\\\nI0228 13:17:35.541500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 13:17:35.544872 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 13:17:35.544912 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 13:17:35.544959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 13:17:35.544976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 13:17:35.551581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 13:17:35.551618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 13:17:35.551627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 
13:17:35.551657 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 13:17:35.551664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 13:17:35.551670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 13:17:35.554143 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:17:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.121874 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.136925 4897 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.149512 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a273d93c-239a-444c-83cf-2c4ce34fa47b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8228c810c9a5e735daa520594b848b640ba628ceff9f9c7e1c1e83e8f0298b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80dbc6ed9af9a82df0ca46be021ef902caa69
663cbb2028484d3af6e06ecab15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bts94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.157947 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.158000 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.158024 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:17 crc 
kubenswrapper[4897]: I0228 13:18:17.158052 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.158070 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:17Z","lastTransitionTime":"2026-02-28T13:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.166820 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k4m7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd164967-b99b-47d0-a691-7d8118fa81ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k4m7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.178245 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8n99q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844e4a2-e296-46c1-b047-ace0be3d95bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6plnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8n99q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.189277 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb42x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ce402ca-1bea-4568-85cd-fb4a726f3c92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc184d2423d3fcb2e76df425e0dc9f0993a521f07df2dba1a89a20cb473b0fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb42x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.203143 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.221442 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b8a404d-b143-4bf3-b590-c1b482f38f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zj7fc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.234733 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4091e4-3a55-4913-81f3-026a1a97c57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brq22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.260746 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.260793 4897 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.260811 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.260832 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.260849 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:17Z","lastTransitionTime":"2026-02-28T13:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.363111 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.363233 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.363259 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.363383 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.363411 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:17Z","lastTransitionTime":"2026-02-28T13:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.466583 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.466631 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.466648 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.466669 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.466686 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:17Z","lastTransitionTime":"2026-02-28T13:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.569286 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.569365 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.569384 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.569407 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.569426 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:17Z","lastTransitionTime":"2026-02-28T13:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.672668 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.673177 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.673203 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.673235 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.673259 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:17Z","lastTransitionTime":"2026-02-28T13:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.742200 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.742260 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.742276 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.742351 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.742373 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:17Z","lastTransitionTime":"2026-02-28T13:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:17 crc kubenswrapper[4897]: E0228 13:18:17.764012 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.768648 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.768716 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.768735 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.769177 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.769243 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:17Z","lastTransitionTime":"2026-02-28T13:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:17 crc kubenswrapper[4897]: E0228 13:18:17.781181 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.786778 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.786819 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.786836 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.786864 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.786881 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:17Z","lastTransitionTime":"2026-02-28T13:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:17 crc kubenswrapper[4897]: E0228 13:18:17.800695 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.806791 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.806847 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.806864 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.806887 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.806906 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:17Z","lastTransitionTime":"2026-02-28T13:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:17 crc kubenswrapper[4897]: E0228 13:18:17.819138 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.824012 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.824056 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.824071 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.824093 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.824111 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:17Z","lastTransitionTime":"2026-02-28T13:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:17 crc kubenswrapper[4897]: E0228 13:18:17.835305 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:17 crc kubenswrapper[4897]: E0228 13:18:17.835552 4897 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.837826 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.837864 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.837880 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.837902 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.837921 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:17Z","lastTransitionTime":"2026-02-28T13:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.940425 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.940475 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.940491 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.940513 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:17 crc kubenswrapper[4897]: I0228 13:18:17.940531 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:17Z","lastTransitionTime":"2026-02-28T13:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.000340 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerStarted","Data":"42ab4a011ee15a60fbda8c13d9c23fb10eaf91119c8d9eb6b4c8dc871fcfffd4"} Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.000413 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerStarted","Data":"da8311a9c42869f106fa19f0c2268e4146c9732ea75e1282d537114b54d8c20d"} Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.003051 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-8n99q" event={"ID":"7844e4a2-e296-46c1-b047-ace0be3d95bb","Type":"ContainerStarted","Data":"374faf704c18c26835f7cdc27476492ad368e5b02f083544912cade52403d7f8"} Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.005762 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-k4m7f" event={"ID":"cd164967-b99b-47d0-a691-7d8118fa81ce","Type":"ContainerStarted","Data":"02d6b2023edab21ebd2873bc7584d2346d725a97a9e21d54cb9cc86e63bec717"} Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.018051 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.034933 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b8a404d-b143-4bf3-b590-c1b482f38f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zj7fc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.043662 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.043730 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.043750 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.043777 4897 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.043801 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:18Z","lastTransitionTime":"2026-02-28T13:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.048576 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4091e4-3a55-4913-81f3-026a1a97c57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42ab4a011ee15a60fbda8c13d9c23fb10eaf91119c8d9eb6b4c8dc871fcfffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8311a9c42869f106fa19f0c2268e4146c9732ea75e1282d537114b54d8c20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brq22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.063161 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.079358 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.091943 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5tms6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5tms6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.105884 4897 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.130888 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e63af1c-1b83-44b6-9872-2dfefa37d433\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rjlcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.146886 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.147191 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.147360 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.147539 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.147674 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:18Z","lastTransitionTime":"2026-02-28T13:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.160843 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23b07e53-7e49-4e08-b346-8cd575b7f2ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e9a3dd1f8ee7245b4185ab8828852983776a3725c0e0202031f019e4d1cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd666d737612906cd221c611576137054485e66782603973709d756be628e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c564d2a6ac76545e63bc8c1cf09ebbad28f954f1986e6a6a272c23bda9207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16305ee570c789c9b80ebffbea166bdb4fc0aa0ccddb48f8733c196c5bbc83da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0328bcb49b08da8ee30c55848f414ab75bd375c851b0e6e9faecfb47fe9d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.179223 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c76969-d16d-46f5-b96a-922ebfb0a5da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:17:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 13:17:35.239122 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 13:17:35.239232 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 13:17:35.239893 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2443145259/tls.crt::/tmp/serving-cert-2443145259/tls.key\\\\\\\"\\\\nI0228 13:17:35.541500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 13:17:35.544872 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 13:17:35.544912 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 13:17:35.544959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 13:17:35.544976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 13:17:35.551581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 13:17:35.551618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 13:17:35.551627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 13:17:35.551657 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 13:17:35.551664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 13:17:35.551670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 13:17:35.554143 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:17:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.195897 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.211002 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.224882 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a273d93c-239a-444c-83cf-2c4ce34fa47b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8228c810c9a5e735daa520594b848b640ba628ceff9f9c7e1c1e83e8f0298b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80dbc6ed9af9a82df0ca46be021ef902caa69
663cbb2028484d3af6e06ecab15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bts94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.242079 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k4m7f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd164967-b99b-47d0-a691-7d8118fa81ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k4m7f\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.250354 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.250431 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.250459 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.250490 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.250516 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:18Z","lastTransitionTime":"2026-02-28T13:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.254621 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8n99q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844e4a2-e296-46c1-b047-ace0be3d95bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6plnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8n99q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.266868 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb42x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ce402ca-1bea-4568-85cd-fb4a726f3c92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc184d2423d3fcb2e76df425e0dc9f0993a521f07df2dba1a89a20cb473b0fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb42x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.279645 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.295160 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.321153 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e63af1c-1b83-44b6-9872-2dfefa37d433\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rjlcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.351210 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23b07e53-7e49-4e08-b346-8cd575b7f2ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e9a3dd1f8ee7245b4185ab8828852983776a3725c0e0202031f019e4d1cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6
a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd666d737612906cd221c611576137054485e66782603973709d756be628e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c564d2a6ac76545e63bc8c1cf09ebbad28f954f1986e6a6a272c23bda9207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16305ee570c789c9b80ebffbea166bdb4fc0aa0ccddb48f8733c196c5bbc83da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0328bcb49b08da8ee30c55848f414ab75bd375c851b0e6e9faecfb47fe9d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP
\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90
092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.353137 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.353617 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.353709 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.353739 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 
13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.353758 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:18Z","lastTransitionTime":"2026-02-28T13:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.369277 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c76969-d16d-46f5-b96a-922ebfb0a5da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:17:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 13:17:35.239122 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 13:17:35.239232 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 13:17:35.239893 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2443145259/tls.crt::/tmp/serving-cert-2443145259/tls.key\\\\\\\"\\\\nI0228 13:17:35.541500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 13:17:35.544872 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 13:17:35.544912 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 13:17:35.544959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 13:17:35.544976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 13:17:35.551581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 13:17:35.551618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 13:17:35.551627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 13:17:35.551657 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 13:17:35.551664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 13:17:35.551670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 13:17:35.554143 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:17:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.382442 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.393225 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb42x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ce402ca-1bea-4568-85cd-fb4a726f3c92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc184d2423d3fcb2e76df425e0dc9f0993a521f07df2dba1a89a20cb473b0fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb42x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.406379 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a273d93c-239a-444c-83cf-2c4ce34fa47b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8228c810c9a5e735daa520594b848b640ba628ceff9f9c7e1c1e83e8f0298b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80dbc6ed9af9a82df0ca46be021ef902caa69663cbb2028484d3af6e06ecab15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bts94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.421571 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k4m7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd164967-b99b-47d0-a691-7d8118fa81ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d6b2023edab21ebd2873bc7584d2346d725a97a9e21d54cb9cc86e63bec717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/
net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k4m7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 
13:18:18.433009 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8n99q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844e4a2-e296-46c1-b047-ace0be3d95bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374faf704c18c26835f7cdc27476492ad368e5b02f083544912cade52403d7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6plnv\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8n99q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.447153 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4091e4-3a55-4913-81f3-026a1a97c57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42ab4a011ee15a60fbda8c13d9c23fb10eaf91119c8d9eb6b4c8dc871fcfffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e
18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8311a9c42869f106fa19f0c2268e4146c9732ea75e1282d537114b54d8c20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brq22\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.455572 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.455899 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.455958 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.456004 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:18 crc kubenswrapper[4897]: E0228 13:18:18.456184 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:18:18 crc kubenswrapper[4897]: E0228 13:18:18.456299 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:18:18 crc kubenswrapper[4897]: E0228 13:18:18.456432 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:18:18 crc kubenswrapper[4897]: E0228 13:18:18.456537 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.458103 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.458173 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.458196 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.458224 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.458247 4897 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:18Z","lastTransitionTime":"2026-02-28T13:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.463940 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.488467 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b8a404d-b143-4bf3-b590-c1b482f38f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zj7fc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.501884 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5tms6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5tms6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.518210 4897 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.532985 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.561408 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.561457 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.561474 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.561497 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.561513 4897 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:18Z","lastTransitionTime":"2026-02-28T13:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.663014 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.663177 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.663294 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.663416 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:18 crc kubenswrapper[4897]: E0228 13:18:18.663580 4897 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 13:18:18 crc kubenswrapper[4897]: E0228 13:18:18.663682 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 13:18:34.663656018 +0000 UTC m=+128.905976715 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 13:18:18 crc kubenswrapper[4897]: E0228 13:18:18.663940 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:18:34.663915085 +0000 UTC m=+128.906235772 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:18:18 crc kubenswrapper[4897]: E0228 13:18:18.664100 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 13:18:18 crc kubenswrapper[4897]: E0228 13:18:18.664142 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 13:18:18 crc kubenswrapper[4897]: E0228 13:18:18.664166 4897 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 13:18:18 crc kubenswrapper[4897]: E0228 13:18:18.664228 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-28 13:18:34.664209154 +0000 UTC m=+128.906529851 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 13:18:18 crc kubenswrapper[4897]: E0228 13:18:18.664377 4897 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 13:18:18 crc kubenswrapper[4897]: E0228 13:18:18.664444 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 13:18:34.6644255 +0000 UTC m=+128.906746197 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.665727 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.665789 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.665814 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.665843 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.665863 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:18Z","lastTransitionTime":"2026-02-28T13:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.764650 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.764707 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8b95b3e0-28e1-4b26-86a3-bd61c5528b3e-metrics-certs\") pod \"network-metrics-daemon-5tms6\" (UID: \"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\") " pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:18:18 crc kubenswrapper[4897]: E0228 13:18:18.764863 4897 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 13:18:18 crc kubenswrapper[4897]: E0228 13:18:18.764908 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 13:18:18 crc kubenswrapper[4897]: E0228 13:18:18.764935 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b95b3e0-28e1-4b26-86a3-bd61c5528b3e-metrics-certs podName:8b95b3e0-28e1-4b26-86a3-bd61c5528b3e nodeName:}" failed. No retries permitted until 2026-02-28 13:18:34.764916135 +0000 UTC m=+129.007236802 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8b95b3e0-28e1-4b26-86a3-bd61c5528b3e-metrics-certs") pod "network-metrics-daemon-5tms6" (UID: "8b95b3e0-28e1-4b26-86a3-bd61c5528b3e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 13:18:18 crc kubenswrapper[4897]: E0228 13:18:18.764950 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 13:18:18 crc kubenswrapper[4897]: E0228 13:18:18.764975 4897 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 13:18:18 crc kubenswrapper[4897]: E0228 13:18:18.765049 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-28 13:18:34.765020728 +0000 UTC m=+129.007341425 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.770039 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.770090 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.770102 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.770120 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.770132 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:18Z","lastTransitionTime":"2026-02-28T13:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.873852 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.873913 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.873933 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.873956 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.873977 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:18Z","lastTransitionTime":"2026-02-28T13:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.976985 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.977031 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.977047 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.977069 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:18 crc kubenswrapper[4897]: I0228 13:18:18.977086 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:18Z","lastTransitionTime":"2026-02-28T13:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.080635 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.080678 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.080691 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.080709 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.080723 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:19Z","lastTransitionTime":"2026-02-28T13:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.184213 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.184271 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.184288 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.184341 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.184361 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:19Z","lastTransitionTime":"2026-02-28T13:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.287770 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.287819 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.287838 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.287866 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.287883 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:19Z","lastTransitionTime":"2026-02-28T13:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.393135 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.393559 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.393577 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.393600 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.393618 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:19Z","lastTransitionTime":"2026-02-28T13:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.496682 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.496737 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.496756 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.496782 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.496799 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:19Z","lastTransitionTime":"2026-02-28T13:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.599902 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.599966 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.599991 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.600021 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.600044 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:19Z","lastTransitionTime":"2026-02-28T13:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.703021 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.703121 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.703145 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.703177 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.703199 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:19Z","lastTransitionTime":"2026-02-28T13:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.806583 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.806645 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.806663 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.806687 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.806703 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:19Z","lastTransitionTime":"2026-02-28T13:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.909836 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.909885 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.909902 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.909924 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:19 crc kubenswrapper[4897]: I0228 13:18:19.909939 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:19Z","lastTransitionTime":"2026-02-28T13:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.012138 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.012188 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.012205 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.012228 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.012247 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:20Z","lastTransitionTime":"2026-02-28T13:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.014656 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"1dbe604993d7b055554d178d44acee35a2d78e25a60990d0c06e08a983faab0b"} Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.020394 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" event={"ID":"0e63af1c-1b83-44b6-9872-2dfefa37d433","Type":"ContainerStarted","Data":"e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b"} Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.022376 4897 generic.go:334] "Generic (PLEG): container finished" podID="6b8a404d-b143-4bf3-b590-c1b482f38f6f" containerID="512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32" exitCode=0 Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.022465 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" event={"ID":"6b8a404d-b143-4bf3-b590-c1b482f38f6f","Type":"ContainerDied","Data":"512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32"} Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.025291 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"d9a44acae997c6d9cb73f6b82b8ab2643b21f8cd8ccded9e40004e180915d4a7"} Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.025385 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"d692ba16a2b7b12b74015fd6e678ac54385aa5905da3d4de5bef79f2648b7097"} Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 
13:18:20.035684 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:20Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.055606 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:20Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.071139 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5tms6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5tms6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:20Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:20 crc 
kubenswrapper[4897]: I0228 13:18:20.102564 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23b07e53-7e49-4e08-b346-8cd575b7f2ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e9a3dd1f8ee7245b4185ab8828852983776a3725c0e0202031f019e4d1cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://dd666d737612906cd221c611576137054485e66782603973709d756be628e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c564d2a6ac76545e63bc8c1cf09ebbad28f954f1986e6a6a272c23bda9207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16305ee570c789c9b80ebffbea166bdb4fc0aa0ccddb48f8733c196c5bbc83da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0328bcb49b08da8ee30c55848f414ab75bd375c851b0e6e9faecfb47fe9d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:20Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.114865 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.114905 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.114923 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.114944 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.114961 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:20Z","lastTransitionTime":"2026-02-28T13:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.123725 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c76969-d16d-46f5-b96a-922ebfb0a5da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:17:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 13:17:35.239122 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 13:17:35.239232 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 13:17:35.239893 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2443145259/tls.crt::/tmp/serving-cert-2443145259/tls.key\\\\\\\"\\\\nI0228 13:17:35.541500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 13:17:35.544872 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 13:17:35.544912 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 13:17:35.544959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 13:17:35.544976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 13:17:35.551581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 13:17:35.551618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 13:17:35.551627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 13:17:35.551657 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 13:17:35.551664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 13:17:35.551670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 13:17:35.554143 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:17:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:20Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.144591 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dbe604993d7b055554d178d44acee35a2d78e25a60990d0c06e08a983faab0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:20Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.164410 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:20Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.183562 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:20Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.208200 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e63af1c-1b83-44b6-9872-2dfefa37d433\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rjlcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:20Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.218202 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.218242 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.218254 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.218276 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.218289 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:20Z","lastTransitionTime":"2026-02-28T13:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.225643 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k4m7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd164967-b99b-47d0-a691-7d8118fa81ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d6b2023edab21ebd2873bc7584d2346d725a97a9e21d54cb9cc86e63bec717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k4m7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:20Z 
is after 2025-08-24T17:21:41Z" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.238835 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8n99q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844e4a2-e296-46c1-b047-ace0be3d95bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374faf704c18c26835f7cdc27476492ad368e5b02f083544912cade52403d7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6plnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8n99q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:20Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.255381 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb42x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ce402ca-1bea-4568-85cd-fb4a726f3c92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc184d2423d3fcb2e76df425e0dc9f0993a521f07df2d
ba1a89a20cb473b0fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb42x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:20Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.273660 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a273d93c-239a-444c-83cf-2c4ce34fa47b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8228c810c9a5e735daa520594b848b640ba628ceff9f9c7e1c1e83e8f0298b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80dbc6ed9af9a82df0ca46be021ef902caa69
663cbb2028484d3af6e06ecab15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bts94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:20Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.290896 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:20Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.312382 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b8a404d-b143-4bf3-b590-c1b482f38f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zj7fc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:20Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.322301 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.322357 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.322369 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:20 crc 
kubenswrapper[4897]: I0228 13:18:20.322386 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.322400 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:20Z","lastTransitionTime":"2026-02-28T13:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.329675 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4091e4-3a55-4913-81f3-026a1a97c57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42ab4a011ee15a60fbda8c13d9c23fb10eaf91119c8d9eb6b4c8dc871fcfffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8311a9c42869f106fa19f0c2268e4146c9732ea75e1282d537114b54d8c20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-brq22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:20Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.346585 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:20Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.367001 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b8a404d-b143-4bf3-b590-c1b482f38f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zj7fc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:20Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.382681 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4091e4-3a55-4913-81f3-026a1a97c57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42ab4a011ee15a60fbda8c13d9c23fb10eaf91119c8d9eb6b4c8dc871fcfffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8311a9c42869f106fa19f0c2268e4146c9732e
a75e1282d537114b54d8c20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brq22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:20Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.401028 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:20Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.421634 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a44acae997c6d9cb73f6b82b8ab2643b21f8cd8ccded9e40004e180915d4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d692ba16a2b7b12b74015fd6e678ac54385aa5905da3d4de5bef79f2648b7097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:20Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.425703 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.425762 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.425779 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.425810 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.425827 4897 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:20Z","lastTransitionTime":"2026-02-28T13:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.437559 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5tms6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5tms6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:20Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:20 crc 
kubenswrapper[4897]: I0228 13:18:20.455509 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:18:20 crc kubenswrapper[4897]: E0228 13:18:20.455701 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.455773 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.455831 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:18:20 crc kubenswrapper[4897]: E0228 13:18:20.455943 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:18:20 crc kubenswrapper[4897]: E0228 13:18:20.456058 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.456266 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:20 crc kubenswrapper[4897]: E0228 13:18:20.456470 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.471760 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e63af1c-1b83-44b6-9872-2dfefa37d433\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rjlcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:20Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.487441 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23b07e53-7e49-4e08-b346-8cd575b7f2ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e9a3dd1f8ee7245b4185ab8828852983776a3725c0e0202031f019e4d1cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd666d737612906cd221c611576137054485e66782603973709d756be628e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c564d2a6ac76545e63bc8c1cf09ebbad28f954f1986e6a6a272c23bda9207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16305ee570c789c9b80ebffbea166bdb4fc0aa0ccddb48f8733c196c5bbc83da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0328bcb49b08da8ee30c55848f414ab75bd375c851b0e6e9faecfb47fe9d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:20Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.500722 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c76969-d16d-46f5-b96a-922ebfb0a5da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:17:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 13:17:35.239122 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 13:17:35.239232 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 13:17:35.239893 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2443145259/tls.crt::/tmp/serving-cert-2443145259/tls.key\\\\\\\"\\\\nI0228 13:17:35.541500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 13:17:35.544872 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 13:17:35.544912 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 13:17:35.544959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 13:17:35.544976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 13:17:35.551581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 13:17:35.551618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 13:17:35.551627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 13:17:35.551657 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 13:17:35.551664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 13:17:35.551670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 13:17:35.554143 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:17:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:20Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.511162 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dbe604993d7b055554d178d44acee35a2d78e25a60990d0c06e08a983faab0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:20Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.520675 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:20Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.528897 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.528936 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.528945 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:20 crc 
kubenswrapper[4897]: I0228 13:18:20.528958 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.528967 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:20Z","lastTransitionTime":"2026-02-28T13:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.529207 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:20Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.539515 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k4m7f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd164967-b99b-47d0-a691-7d8118fa81ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d6b2023edab21ebd2873bc7584d2346d725a97a9e21d54cb9cc86e63bec717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k4m7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:20Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.546664 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8n99q" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844e4a2-e296-46c1-b047-ace0be3d95bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374faf704c18c26835f7cdc27476492ad368e5b02f083544912cade52403d7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6plnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8n99q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:20Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.557201 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb42x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ce402ca-1bea-4568-85cd-fb4a726f3c92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc184d2423d3fcb2e76df425e0dc9f0993a521f07df2dba1a89a20cb473b0fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb42x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:20Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.570387 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a273d93c-239a-444c-83cf-2c4ce34fa47b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8228c810c9a5e735daa520594b848b640ba628ceff9f9c7e1c1e83e8f0298b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80dbc6ed9af9a82df0ca46be021ef902caa69
663cbb2028484d3af6e06ecab15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bts94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:20Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.631690 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.631732 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.631744 4897 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.631761 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.631771 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:20Z","lastTransitionTime":"2026-02-28T13:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.734982 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.735047 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.735066 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.735091 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.735111 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:20Z","lastTransitionTime":"2026-02-28T13:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.837505 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.837552 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.837565 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.837582 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.837593 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:20Z","lastTransitionTime":"2026-02-28T13:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.940249 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.940280 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.940344 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.940361 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:20 crc kubenswrapper[4897]: I0228 13:18:20.940373 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:20Z","lastTransitionTime":"2026-02-28T13:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.031586 4897 generic.go:334] "Generic (PLEG): container finished" podID="6b8a404d-b143-4bf3-b590-c1b482f38f6f" containerID="498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5" exitCode=0 Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.031631 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" event={"ID":"6b8a404d-b143-4bf3-b590-c1b482f38f6f","Type":"ContainerDied","Data":"498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5"} Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.043103 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.043141 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.043150 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.043165 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.043177 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:21Z","lastTransitionTime":"2026-02-28T13:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.067295 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:21Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.085393 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a44acae997c6d9cb73f6b82b8ab2643b21f8cd8ccded9e40004e180915d4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d692ba16a2b7b12b74015fd6e678ac54385aa5905da3d4de5bef79f2648b7097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:21Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.097209 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5tms6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5tms6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:21Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:21 crc 
kubenswrapper[4897]: I0228 13:18:21.113570 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c76969-d16d-46f5-b96a-922ebfb0a5da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:17:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 13:17:35.239122 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 13:17:35.239232 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 13:17:35.239893 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2443145259/tls.crt::/tmp/serving-cert-2443145259/tls.key\\\\\\\"\\\\nI0228 13:17:35.541500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 13:17:35.544872 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 13:17:35.544912 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 13:17:35.544959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 13:17:35.544976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 13:17:35.551581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 13:17:35.551618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 13:17:35.551627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 13:17:35.551657 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 13:17:35.551664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 13:17:35.551670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 13:17:35.554143 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:17:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:21Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.126770 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dbe604993d7b055554d178d44acee35a2d78e25a60990d0c06e08a983faab0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:21Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.139648 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:21Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.146142 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.146172 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.146181 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:21 crc 
kubenswrapper[4897]: I0228 13:18:21.146194 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.146203 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:21Z","lastTransitionTime":"2026-02-28T13:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.156425 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:21Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.184593 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e63af1c-1b83-44b6-9872-2dfefa37d433\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rjlcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:21Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.213288 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23b07e53-7e49-4e08-b346-8cd575b7f2ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e9a3dd1f8ee7245b4185ab8828852983776a3725c0e0202031f019e4d1cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd666d737612906cd221c611576137054485e66782603973709d756be628e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c564d2a6ac76545e63bc8c1cf09ebbad28f954f1986e6a6a272c23bda9207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16305ee570c789c9b80ebffbea166bdb4fc0aa0ccddb48f8733c196c5bbc83da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0328bcb49b08da8ee30c55848f414ab75bd375c851b0e6e9faecfb47fe9d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:21Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.234059 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k4m7f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd164967-b99b-47d0-a691-7d8118fa81ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d6b2023edab21ebd2873bc7584d2346d725a97a9e21d54cb9cc86e63bec717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k4m7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:21Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.246566 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8n99q" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844e4a2-e296-46c1-b047-ace0be3d95bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374faf704c18c26835f7cdc27476492ad368e5b02f083544912cade52403d7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6plnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8n99q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:21Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.248421 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.248481 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.248499 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.248524 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.248554 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:21Z","lastTransitionTime":"2026-02-28T13:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.260420 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb42x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ce402ca-1bea-4568-85cd-fb4a726f3c92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc184d2423d3fcb2e76df425e0dc9f0993a521f07df2dba1a89a20cb473b0fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4zg\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb42x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:21Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.270988 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a273d93c-239a-444c-83cf-2c4ce34fa47b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8228c810c9a5e735daa520594b848b640ba628ceff9f9c7e1c1e83e8f0298b8\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80dbc6ed9af9a82df0ca46be021ef902caa69663cbb2028484d3af6e06ecab15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bts94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:21Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.284082 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:21Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.302346 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b8a404d-b143-4bf3-b590-c1b482f38f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zj7fc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-28T13:18:21Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.313764 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4091e4-3a55-4913-81f3-026a1a97c57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42ab4a011ee15a60fbda8c13d9c23fb10eaf91119c8d9eb6b4c8dc871fcfffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\
\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8311a9c42869f106fa19f0c2268e4146c9732ea75e1282d537114b54d8c20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brq22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:21Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.350652 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.350700 4897 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.350734 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.350757 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.350770 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:21Z","lastTransitionTime":"2026-02-28T13:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.453944 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.454239 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.454260 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.454283 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.454299 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:21Z","lastTransitionTime":"2026-02-28T13:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.455752 4897 scope.go:117] "RemoveContainer" containerID="3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.556951 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.557005 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.557026 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.557049 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.557066 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:21Z","lastTransitionTime":"2026-02-28T13:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.660350 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.660382 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.660392 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.660411 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.660425 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:21Z","lastTransitionTime":"2026-02-28T13:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.763001 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.763075 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.763094 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.763551 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.763649 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:21Z","lastTransitionTime":"2026-02-28T13:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.866992 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.867043 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.867057 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.867078 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.867093 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:21Z","lastTransitionTime":"2026-02-28T13:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.972052 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.972133 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.972154 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.972186 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:21 crc kubenswrapper[4897]: I0228 13:18:21.972206 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:21Z","lastTransitionTime":"2026-02-28T13:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.041900 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" event={"ID":"0e63af1c-1b83-44b6-9872-2dfefa37d433","Type":"ContainerStarted","Data":"d1f43cb9f94a8218d06ffab17a61ff757706e312eecbbd10f39aca38f910bed3"} Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.042331 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.042394 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.045578 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"1350035568b4df04bb70c40651627190ef9b62281558e327bf1d2090b2fee78a"} Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.049261 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.051720 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fa16e396e95ee6006937e8299d6b0f6d518b29e4c2137a868f90b616e8ba6126"} Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.052915 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.055720 4897 generic.go:334] "Generic (PLEG): container finished" podID="6b8a404d-b143-4bf3-b590-c1b482f38f6f" 
containerID="223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263" exitCode=0 Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.055759 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" event={"ID":"6b8a404d-b143-4bf3-b590-c1b482f38f6f","Type":"ContainerDied","Data":"223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263"} Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.067474 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:22Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.078987 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.079044 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.079062 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.079085 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.079101 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:22Z","lastTransitionTime":"2026-02-28T13:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.083454 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a44acae997c6d9cb73f6b82b8ab2643b21f8cd8ccded9e40004e180915d4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://d692ba16a2b7b12b74015fd6e678ac54385aa5905da3d4de5bef79f2648b7097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:22Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.084474 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.097515 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5tms6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5tms6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:22Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:22 crc 
kubenswrapper[4897]: I0228 13:18:22.125306 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c76969-d16d-46f5-b96a-922ebfb0a5da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:17:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 13:17:35.239122 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 13:17:35.239232 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 13:17:35.239893 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2443145259/tls.crt::/tmp/serving-cert-2443145259/tls.key\\\\\\\"\\\\nI0228 13:17:35.541500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 13:17:35.544872 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 13:17:35.544912 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 13:17:35.544959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 13:17:35.544976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 13:17:35.551581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 13:17:35.551618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 13:17:35.551627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 13:17:35.551657 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 13:17:35.551664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 13:17:35.551670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 13:17:35.554143 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:17:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:22Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.146721 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dbe604993d7b055554d178d44acee35a2d78e25a60990d0c06e08a983faab0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:22Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.170638 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:22Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.182340 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.182375 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.182385 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:22 crc 
kubenswrapper[4897]: I0228 13:18:22.182402 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.182413 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:22Z","lastTransitionTime":"2026-02-28T13:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.188006 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:22Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.211283 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e63af1c-1b83-44b6-9872-2dfefa37d433\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1f43cb9f94a8218d06ffab17a61ff757706e312eecbbd10f39aca38f910bed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rjlcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:22Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.244799 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23b07e53-7e49-4e08-b346-8cd575b7f2ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e9a3dd1f8ee7245b4185ab8828852983776a3725c0e0202031f019e4d1cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd666d737612906cd221c611576137054485e66782603973709d756be628e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c564d2a6ac76545e63bc8c1cf09ebbad28f954f1986e6a6a272c23bda9207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16305ee570c789c9b80ebffbea166bdb4fc0aa0ccddb48f8733c196c5bbc83da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0328bcb49b08da8ee30c55848f414ab75bd375c851b0e6e9faecfb47fe9d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:22Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.260436 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k4m7f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd164967-b99b-47d0-a691-7d8118fa81ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d6b2023edab21ebd2873bc7584d2346d725a97a9e21d54cb9cc86e63bec717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k4m7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:22Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.277575 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8n99q" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844e4a2-e296-46c1-b047-ace0be3d95bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374faf704c18c26835f7cdc27476492ad368e5b02f083544912cade52403d7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6plnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8n99q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:22Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.285681 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.285707 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.285716 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.285731 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.285741 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:22Z","lastTransitionTime":"2026-02-28T13:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.289667 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb42x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ce402ca-1bea-4568-85cd-fb4a726f3c92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc184d2423d3fcb2e76df425e0dc9f0993a521f07df2dba1a89a20cb473b0fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4zg\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb42x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:22Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.306165 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a273d93c-239a-444c-83cf-2c4ce34fa47b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8228c810c9a5e735daa520594b848b640ba628ceff9f9c7e1c1e83e8f0298b8\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80dbc6ed9af9a82df0ca46be021ef902caa69663cbb2028484d3af6e06ecab15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bts94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:22Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.322626 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:22Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.339663 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b8a404d-b143-4bf3-b590-c1b482f38f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zj7fc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-28T13:18:22Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.351222 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4091e4-3a55-4913-81f3-026a1a97c57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42ab4a011ee15a60fbda8c13d9c23fb10eaf91119c8d9eb6b4c8dc871fcfffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\
\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8311a9c42869f106fa19f0c2268e4146c9732ea75e1282d537114b54d8c20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brq22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:22Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.364921 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:22Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.379896 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b8a404d-b143-4bf3-b590-c1b482f38f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zj7fc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:22Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.387631 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.387667 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.387679 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.387697 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.387708 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:22Z","lastTransitionTime":"2026-02-28T13:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.392240 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4091e4-3a55-4913-81f3-026a1a97c57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42ab4a011ee15a60fbda8c13d9c23fb10eaf91119c8d9eb6b4c8dc871fcfffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8311a9c42869f106fa19f0c2268e4146c9732ea75e1282d537114b54d8c20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brq22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:22Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.405655 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:22Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.418378 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a44acae997c6d9cb73f6b82b8ab2643b21f8cd8ccded9e40004e180915d4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d692ba16a2b7b12b74015fd6e678ac54385aa5905da3d4de5bef79f2648b7097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:22Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.430628 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5tms6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5tms6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:22Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:22 crc 
kubenswrapper[4897]: I0228 13:18:22.440781 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1350035568b4df04bb70c40651627190ef9b62281558e327bf1d2090b2fee78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:22Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.455903 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.456010 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:18:22 crc kubenswrapper[4897]: E0228 13:18:22.456134 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.456166 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:18:22 crc kubenswrapper[4897]: E0228 13:18:22.456294 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.456391 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:18:22 crc kubenswrapper[4897]: E0228 13:18:22.456487 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:18:22 crc kubenswrapper[4897]: E0228 13:18:22.456578 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.458115 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e63af1c-1b83-44b6-9872-2dfefa37d433\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready 
status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1f43cb9f94a8218d06ffab17a61ff757706e312eecbbd10f39aca38f910bed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-scrip
t-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df66
9d0fdb6135a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rjlcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:22Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.489758 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.490073 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.490238 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.490430 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.490554 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:22Z","lastTransitionTime":"2026-02-28T13:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.491610 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23b07e53-7e49-4e08-b346-8cd575b7f2ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e9a3dd1f8ee7245b4185ab8828852983776a3725c0e0202031f019e4d1cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd666d737612906cd221c611576137054485e66782603973709d756be628e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c564d2a6ac76545e63bc8c1cf09ebbad28f954f1986e6a6a272c23bda9207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16305ee570c789c9b80ebffbea166bdb4fc0aa0ccddb48f8733c196c5bbc83da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0328bcb49b08da8ee30c55848f414ab75bd375c851b0e6e9faecfb47fe9d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:22Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.506502 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c76969-d16d-46f5-b96a-922ebfb0a5da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa16e396e95ee6006937e8299d6b0f6d518b29e4c2137a868f90b616e8ba6126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:17:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 13:17:35.239122 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 13:17:35.239232 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 13:17:35.239893 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2443145259/tls.crt::/tmp/serving-cert-2443145259/tls.key\\\\\\\"\\\\nI0228 13:17:35.541500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 13:17:35.544872 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 13:17:35.544912 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 13:17:35.544959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 13:17:35.544976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 13:17:35.551581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 13:17:35.551618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 13:17:35.551627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 13:17:35.551657 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 13:17:35.551664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 13:17:35.551670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 13:17:35.554143 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:17:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:22Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.522761 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dbe604993d7b055554d178d44acee35a2d78e25a60990d0c06e08a983faab0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:22Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.539777 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:22Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.550860 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a273d93c-239a-444c-83cf-2c4ce34fa47b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8228c810c9a5e735daa520594b848b640ba628ceff9f9c7e1c1e83e8f0298b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80dbc6ed9af9a82df0ca46be021ef902caa69
663cbb2028484d3af6e06ecab15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bts94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:22Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.568296 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k4m7f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd164967-b99b-47d0-a691-7d8118fa81ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d6b2023edab21ebd2873bc7584d2346d725a97a9e21d54cb9cc86e63bec717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k4m7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:22Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.582478 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8n99q" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844e4a2-e296-46c1-b047-ace0be3d95bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374faf704c18c26835f7cdc27476492ad368e5b02f083544912cade52403d7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6plnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8n99q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:22Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.594028 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.594070 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.594081 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.594099 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.594112 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:22Z","lastTransitionTime":"2026-02-28T13:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.599746 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb42x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ce402ca-1bea-4568-85cd-fb4a726f3c92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc184d2423d3fcb2e76df425e0dc9f0993a521f07df2dba1a89a20cb473b0fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4zg\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb42x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:22Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.696302 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.696373 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.696390 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.696414 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.696432 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:22Z","lastTransitionTime":"2026-02-28T13:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.799896 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.799976 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.800001 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.800035 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.800059 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:22Z","lastTransitionTime":"2026-02-28T13:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.903105 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.903144 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.903153 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.903166 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:22 crc kubenswrapper[4897]: I0228 13:18:22.903175 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:22Z","lastTransitionTime":"2026-02-28T13:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.006020 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.006108 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.006134 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.006170 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.006198 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:23Z","lastTransitionTime":"2026-02-28T13:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.062685 4897 generic.go:334] "Generic (PLEG): container finished" podID="6b8a404d-b143-4bf3-b590-c1b482f38f6f" containerID="d15e5800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f" exitCode=0 Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.063509 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" event={"ID":"6b8a404d-b143-4bf3-b590-c1b482f38f6f","Type":"ContainerDied","Data":"d15e5800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f"} Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.064158 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.084600 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8n99q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844e4a2-e296-46c1-b047-ace0be3d95bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374
faf704c18c26835f7cdc27476492ad368e5b02f083544912cade52403d7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6plnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8n99q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:23Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.099558 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.107709 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb42x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ce402ca-1bea-4568-85cd-fb4a726f3c92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc184d2423d3fcb2e76df425e0dc9f0993a521f07df2dba1a89a20cb473b0fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb42x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:23Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.109153 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.109240 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.109266 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.109288 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.109353 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:23Z","lastTransitionTime":"2026-02-28T13:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.127822 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a273d93c-239a-444c-83cf-2c4ce34fa47b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8228c810c9a5e735daa520594b848b640ba628ceff9f9c7e1c1e83e8f0298b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80dbc6ed9af9a82df0ca46be021ef902caa69663cbb2028484d3af6e06ecab15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bts94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:23Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.151571 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k4m7f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd164967-b99b-47d0-a691-7d8118fa81ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d6b2023edab21ebd2873bc7584d2346d725a97a9e21d54cb9cc86e63bec717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k4m7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:23Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.176175 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b8a404d-b143-4bf3-b590-c1b482f38f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15e5800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15e5800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-zj7fc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:23Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.192898 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4091e4-3a55-4913-81f3-026a1a97c57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42ab4a011ee15a60fbda8c13d9c23fb10eaf91119c8d9eb6b4c8dc871fcfffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-pr
oxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8311a9c42869f106fa19f0c2268e4146c9732ea75e1282d537114b54d8c20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brq22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-28T13:18:23Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.211594 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.211649 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.211665 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.211689 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.211705 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:23Z","lastTransitionTime":"2026-02-28T13:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.212114 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:23Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.230530 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a44acae997c6d9cb73f6b82b8ab2643b21f8cd8ccded9e40004e180915d4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d692ba16a2b7b12b74015fd6e678ac54385aa5905da3d4de5bef79f2648b7097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:23Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.249030 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5tms6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5tms6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:23Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:23 crc 
kubenswrapper[4897]: I0228 13:18:23.275522 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:23Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.293412 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dbe604993d7b055554d178d44acee35a2d78e25a60990d0c06e08a983faab0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:23Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.311612 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:23Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.314045 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.314107 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.314126 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.314151 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.314169 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:23Z","lastTransitionTime":"2026-02-28T13:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.332845 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1350035568b4df04bb70c40651627190ef9b62281558e327bf1d2090b2fee78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:23Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.360635 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e63af1c-1b83-44b6-9872-2dfefa37d433\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1f43cb9f94a8218d06ffab17a61ff757706e312eecbbd10f39aca38f910bed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rjlcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:23Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.387645 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23b07e53-7e49-4e08-b346-8cd575b7f2ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e9a3dd1f8ee7245b4185ab8828852983776a3725c0e0202031f019e4d1cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd666d737612906cd221c611576137054485e66782603973709d756be628e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c564d2a6ac76545e63bc8c1cf09ebbad28f954f1986e6a6a272c23bda9207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16305ee570c789c9b80ebffbea166bdb4fc0aa0ccddb48f8733c196c5bbc83da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0328bcb49b08da8ee30c55848f414ab75bd375c851b0e6e9faecfb47fe9d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:23Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.411661 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c76969-d16d-46f5-b96a-922ebfb0a5da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa16e396e95ee6006937e8299d6b0f6d518b29e4c2137a868f90b616e8ba6126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:17:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 13:17:35.239122 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 13:17:35.239232 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 13:17:35.239893 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2443145259/tls.crt::/tmp/serving-cert-2443145259/tls.key\\\\\\\"\\\\nI0228 13:17:35.541500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 13:17:35.544872 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 13:17:35.544912 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 13:17:35.544959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 13:17:35.544976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 13:17:35.551581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 13:17:35.551618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 13:17:35.551627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 
13:17:35.551657 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 13:17:35.551664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 13:17:35.551670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 13:17:35.554143 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:17:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:23Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.416208 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.416243 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.416255 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.416271 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.416280 4897 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:23Z","lastTransitionTime":"2026-02-28T13:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.440694 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8n99q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844e4a2-e296-46c1-b047-ace0be3d95bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374faf704c18c26835f7cdc27476492ad368e5b02f083544912cade52403d7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6plnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8n99q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:23Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.456601 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb42x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ce402ca-1bea-4568-85cd-fb4a726f3c92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc184d2423d3fcb2e76df425e0dc9f0993a521f07df2dba1a89a20cb473b0fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb42x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:23Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.473862 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a273d93c-239a-444c-83cf-2c4ce34fa47b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8228c810c9a5e735daa520594b848b640ba628ceff9f9c7e1c1e83e8f0298b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80dbc6ed9af9a82df0ca46be021ef902caa69663cbb2028484d3af6e06ecab15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bts94\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:23Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.495356 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k4m7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd164967-b99b-47d0-a691-7d8118fa81ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d6b2023edab21ebd2873bc7584d2346d725a97a9e21d54cb9cc86e63bec717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k4m7f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:23Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.516281 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b8a404d-b143-4bf3-b590-c1b482f38f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15e5800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15e5800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-zj7fc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:23Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.517715 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.517747 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.517756 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.517768 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.517777 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:23Z","lastTransitionTime":"2026-02-28T13:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.526905 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4091e4-3a55-4913-81f3-026a1a97c57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42ab4a011ee15a60fbda8c13d9c23fb10eaf91119c8d9eb6b4c8dc871fcfffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8311a9c42869f106fa19f0c2268e4146c9732ea75e1282d537114b54d8c20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brq22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:23Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.539551 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:23Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.550328 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a44acae997c6d9cb73f6b82b8ab2643b21f8cd8ccded9e40004e180915d4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d692ba16a2b7b12b74015fd6e678ac54385aa5905da3d4de5bef79f2648b7097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:23Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.558479 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5tms6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5tms6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:23Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:23 crc 
kubenswrapper[4897]: I0228 13:18:23.568581 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:23Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.582176 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dbe604993d7b055554d178d44acee35a2d78e25a60990d0c06e08a983faab0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:23Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.593466 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:23Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.604448 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1350035568b4df04bb70c40651627190ef9b62281558e327bf1d2090b2fee78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T13:18:23Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.622081 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.622106 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.622114 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.622128 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.622136 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:23Z","lastTransitionTime":"2026-02-28T13:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.632761 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e63af1c-1b83-44b6-9872-2dfefa37d433\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1f43cb9f94a8218d06ffab17a61ff757706e312eecbbd10f39aca38f910bed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rjlcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:23Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.653993 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23b07e53-7e49-4e08-b346-8cd575b7f2ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e9a3dd1f8ee7245b4185ab8828852983776a3725c0e0202031f019e4d1cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd666d737612906cd221c611576137054485e66782603973709d756be628e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c564d2a6ac76545e63bc8c1cf09ebbad28f954f1986e6a6a272c23bda9207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16305ee570c789c9b80ebffbea166bdb4fc0aa0ccddb48f8733c196c5bbc83da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0328bcb49b08da8ee30c55848f414ab75bd375c851b0e6e9faecfb47fe9d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:23Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.666994 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c76969-d16d-46f5-b96a-922ebfb0a5da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa16e396e95ee6006937e8299d6b0f6d518b29e4c2137a868f90b616e8ba6126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:17:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 13:17:35.239122 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 13:17:35.239232 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 13:17:35.239893 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2443145259/tls.crt::/tmp/serving-cert-2443145259/tls.key\\\\\\\"\\\\nI0228 13:17:35.541500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 13:17:35.544872 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 13:17:35.544912 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 13:17:35.544959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 13:17:35.544976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 13:17:35.551581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 13:17:35.551618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 13:17:35.551627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 
13:17:35.551657 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 13:17:35.551664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 13:17:35.551670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 13:17:35.554143 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:17:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:23Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.727408 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.727440 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.727450 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.727462 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.727470 4897 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:23Z","lastTransitionTime":"2026-02-28T13:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.830032 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.830066 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.830076 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.830092 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.830102 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:23Z","lastTransitionTime":"2026-02-28T13:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.933180 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.933205 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.933213 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.933226 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:23 crc kubenswrapper[4897]: I0228 13:18:23.933234 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:23Z","lastTransitionTime":"2026-02-28T13:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.036208 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.036693 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.036705 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.036722 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.036734 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:24Z","lastTransitionTime":"2026-02-28T13:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.076300 4897 generic.go:334] "Generic (PLEG): container finished" podID="6b8a404d-b143-4bf3-b590-c1b482f38f6f" containerID="56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d" exitCode=0 Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.076371 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" event={"ID":"6b8a404d-b143-4bf3-b590-c1b482f38f6f","Type":"ContainerDied","Data":"56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d"} Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.097416 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:24Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.113256 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b8a404d-b143-4bf3-b590-c1b482f38f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15e5800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15e5800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zj7fc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:24Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.128381 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4091e4-3a55-4913-81f3-026a1a97c57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42ab4a011ee15a60fbda8c13d9c23fb10eaf91119c8d9eb6b4c8dc871fcfffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8311a9c42869f106fa19f0c2268e4146c9732e
a75e1282d537114b54d8c20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brq22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:24Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.139706 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.139754 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.139766 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:24 crc 
kubenswrapper[4897]: I0228 13:18:24.139785 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.139796 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:24Z","lastTransitionTime":"2026-02-28T13:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.146466 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:24Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.206461 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a44acae997c6d9cb73f6b82b8ab2643b21f8cd8ccded9e40004e180915d4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d692ba16a2b7b12b74015fd6e678ac54385aa5905da3d4de5bef79f2648b7097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:24Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.217610 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5tms6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5tms6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:24Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:24 crc 
kubenswrapper[4897]: I0228 13:18:24.242118 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.242148 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.242160 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.242179 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.242194 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:24Z","lastTransitionTime":"2026-02-28T13:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.243881 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23b07e53-7e49-4e08-b346-8cd575b7f2ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e9a3dd1f8ee7245b4185ab8828852983776a3725c0e0202031f019e4d1cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd666d737612906cd221c611576137054485e66782603973709d756be628e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c564d2a6ac76545e63bc8c1cf09ebbad28f954f1986e6a6a272c23bda9207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16305ee570c789c9b80ebffbea166bdb4fc0aa0ccddb48f8733c196c5bbc83da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0328bcb49b08da8ee30c55848f414ab75bd375c851b0e6e9faecfb47fe9d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:24Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.264187 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c76969-d16d-46f5-b96a-922ebfb0a5da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa16e396e95ee6006937e8299d6b0f6d518b29e4c2137a868f90b616e8ba6126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:17:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 13:17:35.239122 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 13:17:35.239232 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 13:17:35.239893 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2443145259/tls.crt::/tmp/serving-cert-2443145259/tls.key\\\\\\\"\\\\nI0228 13:17:35.541500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 13:17:35.544872 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 13:17:35.544912 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 13:17:35.544959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 13:17:35.544976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 13:17:35.551581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 13:17:35.551618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 13:17:35.551627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 13:17:35.551657 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 13:17:35.551664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 13:17:35.551670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 13:17:35.554143 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:17:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:24Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.282553 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dbe604993d7b055554d178d44acee35a2d78e25a60990d0c06e08a983faab0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:24Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.300919 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:24Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.317870 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1350035568b4df04bb70c40651627190ef9b62281558e327bf1d2090b2fee78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T13:18:24Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.337675 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e63af1c-1b83-44b6-9872-2dfefa37d433\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1f43cb9f94a8218d06ffab17a61ff757706e312eecbbd10f39aca38f910bed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rjlcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:24Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.344738 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.344777 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.344785 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.344799 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.344808 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:24Z","lastTransitionTime":"2026-02-28T13:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.353382 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k4m7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd164967-b99b-47d0-a691-7d8118fa81ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d6b2023edab21ebd2873bc7584d2346d725a97a9e21d54cb9cc86e63bec717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k4m7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:24Z 
is after 2025-08-24T17:21:41Z" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.364445 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8n99q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844e4a2-e296-46c1-b047-ace0be3d95bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374faf704c18c26835f7cdc27476492ad368e5b02f083544912cade52403d7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6plnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8n99q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:24Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.373936 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb42x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ce402ca-1bea-4568-85cd-fb4a726f3c92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc184d2423d3fcb2e76df425e0dc9f0993a521f07df2d
ba1a89a20cb473b0fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb42x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:24Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.387091 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a273d93c-239a-444c-83cf-2c4ce34fa47b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8228c810c9a5e735daa520594b848b640ba628ceff9f9c7e1c1e83e8f0298b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80dbc6ed9af9a82df0ca46be021ef902caa69
663cbb2028484d3af6e06ecab15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bts94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:24Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.447978 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.448039 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.448059 4897 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.448086 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.448103 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:24Z","lastTransitionTime":"2026-02-28T13:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.455911 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.455975 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.456000 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.455978 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:18:24 crc kubenswrapper[4897]: E0228 13:18:24.456095 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:18:24 crc kubenswrapper[4897]: E0228 13:18:24.456251 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:18:24 crc kubenswrapper[4897]: E0228 13:18:24.456410 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:18:24 crc kubenswrapper[4897]: E0228 13:18:24.456545 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.550738 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.550803 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.550821 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.550846 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.550864 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:24Z","lastTransitionTime":"2026-02-28T13:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.654218 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.654277 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.654294 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.654352 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.654378 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:24Z","lastTransitionTime":"2026-02-28T13:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.759361 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.759413 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.759432 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.759455 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.759471 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:24Z","lastTransitionTime":"2026-02-28T13:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.862693 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.862746 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.862767 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.862794 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.862812 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:24Z","lastTransitionTime":"2026-02-28T13:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.966174 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.966228 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.966247 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.966271 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:24 crc kubenswrapper[4897]: I0228 13:18:24.966288 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:24Z","lastTransitionTime":"2026-02-28T13:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.069513 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.069570 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.069591 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.069616 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.069634 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:25Z","lastTransitionTime":"2026-02-28T13:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.083087 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rjlcm_0e63af1c-1b83-44b6-9872-2dfefa37d433/ovnkube-controller/0.log" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.087488 4897 generic.go:334] "Generic (PLEG): container finished" podID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerID="d1f43cb9f94a8218d06ffab17a61ff757706e312eecbbd10f39aca38f910bed3" exitCode=1 Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.087617 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" event={"ID":"0e63af1c-1b83-44b6-9872-2dfefa37d433","Type":"ContainerDied","Data":"d1f43cb9f94a8218d06ffab17a61ff757706e312eecbbd10f39aca38f910bed3"} Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.088985 4897 scope.go:117] "RemoveContainer" containerID="d1f43cb9f94a8218d06ffab17a61ff757706e312eecbbd10f39aca38f910bed3" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.106897 4897 generic.go:334] "Generic (PLEG): container finished" podID="6b8a404d-b143-4bf3-b590-c1b482f38f6f" containerID="179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d" exitCode=0 Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.106958 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" event={"ID":"6b8a404d-b143-4bf3-b590-c1b482f38f6f","Type":"ContainerDied","Data":"179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d"} Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.109510 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dbe604993d7b055554d178d44acee35a2d78e25a60990d0c06e08a983faab0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:25Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.128971 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:25Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.146081 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1350035568b4df04bb70c40651627190ef9b62281558e327bf1d2090b2fee78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T13:18:25Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.175883 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.175933 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.175951 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.175976 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.175993 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:25Z","lastTransitionTime":"2026-02-28T13:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.182964 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e63af1c-1b83-44b6-9872-2dfefa37d433\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1f43cb9f94a8218d06ffab17a61ff757706e312eecbbd10f39aca38f910bed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1f43cb9f94a8218d06ffab17a61ff757706e312eecbbd10f39aca38f910bed3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T13:18:24Z\\\",\\\"message\\\":\\\"ng watch factory\\\\nI0228 13:18:24.349555 6699 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 13:18:24.349692 6699 reflector.go:311] Stopping reflector 
*v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0228 13:18:24.349911 6699 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0228 13:18:24.349920 6699 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 13:18:24.349949 6699 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0228 13:18:24.350060 6699 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 13:18:24.350432 6699 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0228 13:18:24.350485 6699 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad3
9c36ca770750c3123e7f15500\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rjlcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:25Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.213825 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23b07e53-7e49-4e08-b346-8cd575b7f2ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e9a3dd1f8ee7245b4185ab8828852983776a3725c0e0202031f019e4d1cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd666d737612906cd221c611576137054485e66782603973709d756be628e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c564d2a6ac76545e63bc8c1cf09ebbad28f954f1986e6a6a272c23bda9207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16305ee570c789c9b80ebffbea166bdb4fc0aa0ccddb48f8733c196c5bbc83da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0328bcb49b08da8ee30c55848f414ab75bd375c851b0e6e9faecfb47fe9d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:25Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.241271 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c76969-d16d-46f5-b96a-922ebfb0a5da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa16e396e95ee6006937e8299d6b0f6d518b29e4c2137a868f90b616e8ba6126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:17:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 13:17:35.239122 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 13:17:35.239232 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 13:17:35.239893 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2443145259/tls.crt::/tmp/serving-cert-2443145259/tls.key\\\\\\\"\\\\nI0228 13:17:35.541500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 13:17:35.544872 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 13:17:35.544912 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 13:17:35.544959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 13:17:35.544976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 13:17:35.551581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 13:17:35.551618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 13:17:35.551627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 13:17:35.551657 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 13:17:35.551664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 13:17:35.551670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 13:17:35.554143 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:17:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:25Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.257976 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8n99q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844e4a2-e296-46c1-b047-ace0be3d95bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374faf704c18c26835f7cdc27476492ad368e5b02f083544912cade52403d7f8\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6plnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8n99q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:25Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.273231 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb42x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ce402ca-1bea-4568-85cd-fb4a726f3c92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc184d2423d3fcb2e76df425e0dc9f0993a521f07df2dba1a89a20cb473b0fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb42x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:25Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.283999 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.284084 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.284109 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.284145 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.284168 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:25Z","lastTransitionTime":"2026-02-28T13:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.289706 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a273d93c-239a-444c-83cf-2c4ce34fa47b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8228c810c9a5e735daa520594b848b640ba628ceff9f9c7e1c1e83e8f0298b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80dbc6ed9af9a82df0ca46be021ef902caa69663cbb2028484d3af6e06ecab15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bts94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:25Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.309024 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k4m7f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd164967-b99b-47d0-a691-7d8118fa81ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d6b2023edab21ebd2873bc7584d2346d725a97a9e21d54cb9cc86e63bec717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k4m7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:25Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.331505 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b8a404d-b143-4bf3-b590-c1b482f38f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15e5800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15e5800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zj7fc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:25Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.349385 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4091e4-3a55-4913-81f3-026a1a97c57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42ab4a011ee15a60fbda8c13d9c23fb10eaf91119c8d9eb6b4c8dc871fcfffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8311a9c42869f106fa19f0c2268e4146c9732e
a75e1282d537114b54d8c20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brq22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:25Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.368673 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:25Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.386674 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a44acae997c6d9cb73f6b82b8ab2643b21f8cd8ccded9e40004e180915d4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d692ba16a2b7b12b74015fd6e678ac54385aa5905da3d4de5bef79f2648b7097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:25Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.388363 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.388467 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.388493 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.388563 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.388673 4897 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:25Z","lastTransitionTime":"2026-02-28T13:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.399216 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5tms6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5tms6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:25Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:25 crc 
kubenswrapper[4897]: I0228 13:18:25.414571 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:25Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.430386 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b8a404d-b143-4bf3-b590-c1b482f38f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf0
5003e24debe89da1bc7df285263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15e5800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15e5800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-28T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zj7fc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:25Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.445827 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4091e4-3a55-4913-81f3-026a1a97c57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42ab4a011ee15a60fbda8c13d9c23fb10eaf91119c8d9eb6b4c8dc871fcfffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8311a9c42869f106fa19f0c2268e4146c9732e
a75e1282d537114b54d8c20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brq22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:25Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.458930 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:25Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.477109 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a44acae997c6d9cb73f6b82b8ab2643b21f8cd8ccded9e40004e180915d4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d692ba16a2b7b12b74015fd6e678ac54385aa5905da3d4de5bef79f2648b7097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:25Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.491452 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.491520 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.491544 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.491575 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.491597 4897 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:25Z","lastTransitionTime":"2026-02-28T13:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.491636 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5tms6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5tms6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:25Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:25 crc 
kubenswrapper[4897]: I0228 13:18:25.509724 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:25Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.528977 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dbe604993d7b055554d178d44acee35a2d78e25a60990d0c06e08a983faab0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:25Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.543218 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:25Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.559024 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1350035568b4df04bb70c40651627190ef9b62281558e327bf1d2090b2fee78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T13:18:25Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.589085 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e63af1c-1b83-44b6-9872-2dfefa37d433\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1f43cb9f94a8218d06ffab17a61ff757706e312eecbbd10f39aca38f910bed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1f43cb9f94a8218d06ffab17a61ff757706e312eecbbd10f39aca38f910bed3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T13:18:24Z\\\",\\\"message\\\":\\\"ng watch factory\\\\nI0228 13:18:24.349555 6699 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 13:18:24.349692 6699 reflector.go:311] Stopping reflector 
*v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0228 13:18:24.349911 6699 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0228 13:18:24.349920 6699 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 13:18:24.349949 6699 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0228 13:18:24.350060 6699 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 13:18:24.350432 6699 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0228 13:18:24.350485 6699 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad3
9c36ca770750c3123e7f15500\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rjlcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:25Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.594095 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.594140 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.594153 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.594173 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.594185 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:25Z","lastTransitionTime":"2026-02-28T13:18:25Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.621576 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23b07e53-7e49-4e08-b346-8cd575b7f2ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e9a3dd1f8ee7245b4185ab8828852983776a3725c0e0202031f019e4d1cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stat
ic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd666d737612906cd221c611576137054485e66782603973709d756be628e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c564d2a6ac76545e63bc8c1cf09ebbad28f954f1986e6a6a272c23bda9207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16305ee570c789c9b80ebffbea166bdb4fc0aa0ccddb48f8733c196c5bbc83da\\\",\\\"image\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0328bcb49b08da8ee30c55848f414ab75bd375c851b0e6e9faecfb47fe9d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7
fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:25Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.641439 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c76969-d16d-46f5-b96a-922ebfb0a5da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa16e396e95ee6006937e8299d6b0f6d518b29e4c2137a868f90b616e8ba6126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:17:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 13:17:35.239122 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 13:17:35.239232 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 13:17:35.239893 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2443145259/tls.crt::/tmp/serving-cert-2443145259/tls.key\\\\\\\"\\\\nI0228 13:17:35.541500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 13:17:35.544872 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 13:17:35.544912 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 13:17:35.544959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 13:17:35.544976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 13:17:35.551581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 13:17:35.551618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 13:17:35.551627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 13:17:35.551657 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 13:17:35.551664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 13:17:35.551670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 13:17:35.554143 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:17:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:25Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.655675 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8n99q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844e4a2-e296-46c1-b047-ace0be3d95bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374faf704c18c26835f7cdc27476492ad368e5b02f083544912cade52403d7f8\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6plnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8n99q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:25Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.666910 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb42x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ce402ca-1bea-4568-85cd-fb4a726f3c92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc184d2423d3fcb2e76df425e0dc9f0993a521f07df2dba1a89a20cb473b0fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb42x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:25Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.684056 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a273d93c-239a-444c-83cf-2c4ce34fa47b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8228c810c9a5e735daa520594b848b640ba628ceff9f9c7e1c1e83e8f0298b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80dbc6ed9af9a82df0ca46be021ef902caa69663cbb2028484d3af6e06ecab15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bts94\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:25Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.697288 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.697378 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.697394 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.697416 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.697432 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:25Z","lastTransitionTime":"2026-02-28T13:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.709702 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k4m7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd164967-b99b-47d0-a691-7d8118fa81ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d6b2023edab21ebd2873bc7584d2346d725a97a9e21d54cb9cc86e63bec717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k4m7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:25Z 
is after 2025-08-24T17:21:41Z" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.800132 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.800179 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.800197 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.800219 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.800234 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:25Z","lastTransitionTime":"2026-02-28T13:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.903150 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.903186 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.903195 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.903209 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:25 crc kubenswrapper[4897]: I0228 13:18:25.903221 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:25Z","lastTransitionTime":"2026-02-28T13:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.006191 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.006257 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.006277 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.006305 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.006356 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:26Z","lastTransitionTime":"2026-02-28T13:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.110207 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.110262 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.110280 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.110303 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.110351 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:26Z","lastTransitionTime":"2026-02-28T13:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.119852 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rjlcm_0e63af1c-1b83-44b6-9872-2dfefa37d433/ovnkube-controller/0.log" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.124246 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" event={"ID":"0e63af1c-1b83-44b6-9872-2dfefa37d433","Type":"ContainerStarted","Data":"dc2824abe814c977d0134f3d7f028bab172777adfc237fc0d1eb087193b0cacc"} Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.124772 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.129220 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" event={"ID":"6b8a404d-b143-4bf3-b590-c1b482f38f6f","Type":"ContainerStarted","Data":"4dda349d1563eeabb8c512ca1ccf28baaa33949460cb1688a3083a8c111713f2"} Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.139004 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a44acae997c6d9cb73f6b82b8ab2643b21f8cd8ccded9e40004e180915d4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d692ba16a2b7b12b74015fd6e678ac54385aa5905da3d4de5bef79f2648b7097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.156136 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5tms6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5tms6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:26 crc 
kubenswrapper[4897]: I0228 13:18:26.175433 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.188965 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dbe604993d7b055554d178d44acee35a2d78e25a60990d0c06e08a983faab0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.208083 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.214526 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.214566 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.214581 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.214604 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.214620 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:26Z","lastTransitionTime":"2026-02-28T13:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.227001 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1350035568b4df04bb70c40651627190ef9b62281558e327bf1d2090b2fee78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.256686 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e63af1c-1b83-44b6-9872-2dfefa37d433\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc2824abe814c977d0134f3d7f028bab172777adfc237fc0d1eb087193b0cacc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1f43cb9f94a8218d06ffab17a61ff757706e312eecbbd10f39aca38f910bed3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T13:18:24Z\\\",\\\"message\\\":\\\"ng watch factory\\\\nI0228 13:18:24.349555 6699 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 13:18:24.349692 6699 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0228 13:18:24.349911 6699 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0228 13:18:24.349920 6699 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 13:18:24.349949 6699 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0228 13:18:24.350060 6699 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 13:18:24.350432 6699 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0228 13:18:24.350485 6699 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnku
be-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
ecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rjlcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.280057 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23b07e53-7e49-4e08-b346-8cd575b7f2ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e9a3dd1f8ee7245b4185ab8828852983776a3725c0e0202031f019e4d1cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd666d737612906cd221c611576137054485e66782603973709d756be628e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c564d2a6ac76545e63bc8c1cf09ebbad28f954f1986e6a6a272c23bda9207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16305ee570c789c9b80ebffbea166bdb4fc0aa0ccddb48f8733c196c5bbc83da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0328bcb49b08da8ee30c55848f414ab75bd375c851b0e6e9faecfb47fe9d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.294786 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c76969-d16d-46f5-b96a-922ebfb0a5da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa16e396e95ee6006937e8299d6b0f6d518b29e4c2137a868f90b616e8ba6126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:17:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 13:17:35.239122 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 13:17:35.239232 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 13:17:35.239893 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2443145259/tls.crt::/tmp/serving-cert-2443145259/tls.key\\\\\\\"\\\\nI0228 13:17:35.541500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 13:17:35.544872 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 13:17:35.544912 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 13:17:35.544959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 13:17:35.544976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 13:17:35.551581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 13:17:35.551618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 13:17:35.551627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 13:17:35.551657 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 13:17:35.551664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 13:17:35.551670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 13:17:35.554143 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:17:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.307177 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8n99q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844e4a2-e296-46c1-b047-ace0be3d95bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374faf704c18c26835f7cdc27476492ad368e5b02f083544912cade52403d7f8\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6plnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8n99q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.317887 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.317951 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.317970 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 
13:18:26.317997 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.318014 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:26Z","lastTransitionTime":"2026-02-28T13:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.319201 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb42x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ce402ca-1bea-4568-85cd-fb4a726f3c92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc184d2423d3fcb2e76df425e0dc9f0993a521f07df2dba1a89a20cb473b0fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888c
f2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb42x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.334743 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a273d93c-239a-444c-83cf-2c4ce34fa47b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8228c810c9a5e735daa520594b848b640ba628ceff9f9c7e1c1e83e8f0298b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80dbc6ed9af9a82df0ca46be021ef902caa69
663cbb2028484d3af6e06ecab15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bts94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.352793 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k4m7f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd164967-b99b-47d0-a691-7d8118fa81ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d6b2023edab21ebd2873bc7584d2346d725a97a9e21d54cb9cc86e63bec717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k4m7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.372561 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b8a404d-b143-4bf3-b590-c1b482f38f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15e5800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15e5800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zj7fc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.384929 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4091e4-3a55-4913-81f3-026a1a97c57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42ab4a011ee15a60fbda8c13d9c23fb10eaf91119c8d9eb6b4c8dc871fcfffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8311a9c42869f106fa19f0c2268e4146c9732e
a75e1282d537114b54d8c20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brq22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.406305 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:26 crc kubenswrapper[4897]: E0228 13:18:26.418635 4897 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.420800 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5tms6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5tms6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:26 crc 
kubenswrapper[4897]: I0228 13:18:26.442716 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.455391 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:18:26 crc kubenswrapper[4897]: E0228 13:18:26.455528 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.455806 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.455818 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.456022 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:18:26 crc kubenswrapper[4897]: E0228 13:18:26.456440 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:18:26 crc kubenswrapper[4897]: E0228 13:18:26.456399 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:18:26 crc kubenswrapper[4897]: E0228 13:18:26.456649 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.463561 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a44acae997c6d9cb73f6b82b8ab2643b21f8cd8ccded9e40004e180915d4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://d692ba16a2b7b12b74015fd6e678ac54385aa5905da3d4de5bef79f2648b7097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.483732 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.503942 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1350035568b4df04bb70c40651627190ef9b62281558e327bf1d2090b2fee78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.534514 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e63af1c-1b83-44b6-9872-2dfefa37d433\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc2824abe814c977d0134f3d7f028bab172777adfc237fc0d1eb087193b0cacc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1f43cb9f94a8218d06ffab17a61ff757706e312eecbbd10f39aca38f910bed3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T13:18:24Z\\\",\\\"message\\\":\\\"ng watch factory\\\\nI0228 13:18:24.349555 6699 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 13:18:24.349692 6699 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0228 13:18:24.349911 6699 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0228 13:18:24.349920 6699 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 13:18:24.349949 6699 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0228 13:18:24.350060 6699 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 13:18:24.350432 6699 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0228 13:18:24.350485 6699 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnku
be-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
ecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rjlcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.566244 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23b07e53-7e49-4e08-b346-8cd575b7f2ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e9a3dd1f8ee7245b4185ab8828852983776a3725c0e0202031f019e4d1cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd666d737612906cd221c611576137054485e66782603973709d756be628e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c564d2a6ac76545e63bc8c1cf09ebbad28f954f1986e6a6a272c23bda9207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16305ee570c789c9b80ebffbea166bdb4fc0aa0ccddb48f8733c196c5bbc83da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0328bcb49b08da8ee30c55848f414ab75bd375c851b0e6e9faecfb47fe9d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.588409 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c76969-d16d-46f5-b96a-922ebfb0a5da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa16e396e95ee6006937e8299d6b0f6d518b29e4c2137a868f90b616e8ba6126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:17:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 13:17:35.239122 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 13:17:35.239232 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 13:17:35.239893 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2443145259/tls.crt::/tmp/serving-cert-2443145259/tls.key\\\\\\\"\\\\nI0228 13:17:35.541500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 13:17:35.544872 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 13:17:35.544912 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 13:17:35.544959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 13:17:35.544976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 13:17:35.551581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 13:17:35.551618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 13:17:35.551627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 13:17:35.551657 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 13:17:35.551664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 13:17:35.551670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 13:17:35.554143 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:17:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.610137 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dbe604993d7b055554d178d44acee35a2d78e25a60990d0c06e08a983faab0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.625925 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb42x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ce402ca-1bea-4568-85cd-fb4a726f3c92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc184d2423d3fcb2e76df425e0dc9f0993a521f07df2dba1a89a20cb473b0fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb42x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.643525 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a273d93c-239a-444c-83cf-2c4ce34fa47b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8228c810c9a5e735daa520594b848b640ba628ceff9f9c7e1c1e83e8f0298b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80dbc6ed9af9a82df0ca46be021ef902caa69663cbb2028484d3af6e06ecab15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bts94\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.666602 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k4m7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd164967-b99b-47d0-a691-7d8118fa81ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d6b2023edab21ebd2873bc7584d2346d725a97a9e21d54cb9cc86e63bec717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k4m7f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.683464 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8n99q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844e4a2-e296-46c1-b047-ace0be3d95bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374faf704c18c26835f7cdc27476492ad368e5b02f083544912cade52403d7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18
:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6plnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8n99q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.702332 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4091e4-3a55-4913-81f3-026a1a97c57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42ab4a011ee15a60fbda8c13d9c23fb10eaf91119c8d9eb6b4c8dc871fcfffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8311a9c42869f106fa19f0c2268e4146c9732e
a75e1282d537114b54d8c20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brq22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.720898 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.743399 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b8a404d-b143-4bf3-b590-c1b482f38f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dda349d1563eeabb8c512ca1ccf28baaa33949460cb1688a3083a8c111713f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15e5
800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15e5800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zj7fc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.761854 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.782576 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a44acae997c6d9cb73f6b82b8ab2643b21f8cd8ccded9e40004e180915d4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d692ba16a2b7b12b74015fd6e678ac54385aa5905da3d4de5bef79f2648b7097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.799758 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5tms6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5tms6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:26 crc 
kubenswrapper[4897]: I0228 13:18:26.823049 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c76969-d16d-46f5-b96a-922ebfb0a5da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa16e396e95ee6006937e8299d6b0f6d518b29e4c2137a868f90b616e8ba6126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:17:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 13:17:35.239122 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 13:17:35.239232 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 13:17:35.239893 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2443145259/tls.crt::/tmp/serving-cert-2443145259/tls.key\\\\\\\"\\\\nI0228 13:17:35.541500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 13:17:35.544872 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 13:17:35.544912 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 13:17:35.544959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 13:17:35.544976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 13:17:35.551581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 13:17:35.551618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 13:17:35.551627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 13:17:35.551657 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 13:17:35.551664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 13:17:35.551670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 13:17:35.554143 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:17:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.845820 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dbe604993d7b055554d178d44acee35a2d78e25a60990d0c06e08a983faab0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.868616 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:26 crc kubenswrapper[4897]: E0228 13:18:26.876821 4897 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.891129 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1350035568b4df04bb70c40651627190ef9b62281558e327bf1d2090b2fee78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.924536 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e63af1c-1b83-44b6-9872-2dfefa37d433\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc2824abe814c977d0134f3d7f028bab172777adfc237fc0d1eb087193b0cacc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1f43cb9f94a8218d06ffab17a61ff757706e312eecbbd10f39aca38f910bed3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T13:18:24Z\\\",\\\"message\\\":\\\"ng watch factory\\\\nI0228 13:18:24.349555 6699 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 13:18:24.349692 6699 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0228 13:18:24.349911 6699 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0228 13:18:24.349920 6699 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 13:18:24.349949 6699 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0228 13:18:24.350060 6699 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 13:18:24.350432 6699 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0228 13:18:24.350485 6699 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnku
be-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
ecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rjlcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.958166 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23b07e53-7e49-4e08-b346-8cd575b7f2ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e9a3dd1f8ee7245b4185ab8828852983776a3725c0e0202031f019e4d1cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd666d737612906cd221c611576137054485e66782603973709d756be628e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c564d2a6ac76545e63bc8c1cf09ebbad28f954f1986e6a6a272c23bda9207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16305ee570c789c9b80ebffbea166bdb4fc0aa0ccddb48f8733c196c5bbc83da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0328bcb49b08da8ee30c55848f414ab75bd375c851b0e6e9faecfb47fe9d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.980456 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k4m7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd164967-b99b-47d0-a691-7d8118fa81ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d6b2023edab21ebd2873bc7584d2346d725a97a9e21d54cb9cc86e63bec717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f1
3fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"ho
stIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k4m7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:26 crc kubenswrapper[4897]: I0228 13:18:26.996909 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8n99q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844e4a2-e296-46c1-b047-ace0be3d95bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374faf704c18c26835f7cdc27476492ad368e5b02f083544912cade52403d7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a
45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6plnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8n99q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.012458 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb42x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ce402ca-1bea-4568-85cd-fb4a726f3c92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc184d2423d3fcb2e76df425e0dc9f0993a521f07df2dba1a89a20cb473b0fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb42x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:27Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.027685 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a273d93c-239a-444c-83cf-2c4ce34fa47b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8228c810c9a5e735daa520594b848b640ba628ceff9f9c7e1c1e83e8f0298b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80dbc6ed9af9a82df0ca46be021ef902caa69663cbb2028484d3af6e06ecab15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bts94\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:27Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.047355 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:27Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.068674 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b8a404d-b143-4bf3-b590-c1b482f38f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dda349d1563eeabb8c512ca1ccf28baaa33949460cb1688a3083a8c111713f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15e5
800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15e5800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zj7fc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:27Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.085790 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4091e4-3a55-4913-81f3-026a1a97c57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42ab4a011ee15a60fbda8c13d9c23fb10eaf91119c8d9eb6b4c8dc871fcfffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8311a9c42869f106fa19f0c2268e4146c9732ea75e1282d537114b54d8c20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brq22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:27Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:27 crc kubenswrapper[4897]: 
I0228 13:18:27.136204 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rjlcm_0e63af1c-1b83-44b6-9872-2dfefa37d433/ovnkube-controller/1.log" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.137469 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rjlcm_0e63af1c-1b83-44b6-9872-2dfefa37d433/ovnkube-controller/0.log" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.141683 4897 generic.go:334] "Generic (PLEG): container finished" podID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerID="dc2824abe814c977d0134f3d7f028bab172777adfc237fc0d1eb087193b0cacc" exitCode=1 Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.141742 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" event={"ID":"0e63af1c-1b83-44b6-9872-2dfefa37d433","Type":"ContainerDied","Data":"dc2824abe814c977d0134f3d7f028bab172777adfc237fc0d1eb087193b0cacc"} Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.141837 4897 scope.go:117] "RemoveContainer" containerID="d1f43cb9f94a8218d06ffab17a61ff757706e312eecbbd10f39aca38f910bed3" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.143186 4897 scope.go:117] "RemoveContainer" containerID="dc2824abe814c977d0134f3d7f028bab172777adfc237fc0d1eb087193b0cacc" Feb 28 13:18:27 crc kubenswrapper[4897]: E0228 13:18:27.143748 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-rjlcm_openshift-ovn-kubernetes(0e63af1c-1b83-44b6-9872-2dfefa37d433)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.174140 4897 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:27Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.195213 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a44acae997c6d9cb73f6b82b8ab2643b21f8cd8ccded9e40004e180915d4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d692ba16a2b7b12b74015fd6e678ac54385aa5905da3d4de5bef79f2648b7097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:27Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.211737 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5tms6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5tms6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:27Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:27 crc 
kubenswrapper[4897]: I0228 13:18:27.234196 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1350035568b4df04bb70c40651627190ef9b62281558e327bf1d2090b2fee78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:27Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.265800 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e63af1c-1b83-44b6-9872-2dfefa37d433\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc2824abe814c977d0134f3d7f028bab172777adfc237fc0d1eb087193b0cacc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1f43cb9f94a8218d06ffab17a61ff757706e312eecbbd10f39aca38f910bed3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T13:18:24Z\\\",\\\"message\\\":\\\"ng watch factory\\\\nI0228 13:18:24.349555 6699 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 13:18:24.349692 6699 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0228 13:18:24.349911 6699 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0228 13:18:24.349920 6699 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 13:18:24.349949 6699 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0228 13:18:24.350060 6699 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 13:18:24.350432 6699 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0228 13:18:24.350485 6699 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc2824abe814c977d0134f3d7f028bab172777adfc237fc0d1eb087193b0cacc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"message\\\":\\\"il\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0228 13:18:26.254713 6977 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[ip_prefix:10.217.0.0/22 nexthop:100.64.0.2 policy:{GoSet:[src-ip]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8944024f-deb7-4076-afb3-4b50a2ff4b4b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e 
UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0228 13:18:26.255094 6977 obj_retry.go:551] Creating *factory.egressNode crc took: 2.040121ms\\\\nI0228 13:18:26.255125 6977 factory.go:1336] Added *v1.Node event handler 7\\\\nI0228 13:18:26.255156 6977 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0228 13:18:26.255506 6977 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0228 13:18:26.255618 6977 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0228 13:18:26.255668 6977 ovnkube.go:599] Stopped ovnkube\\\\nI0228 13:18:26.255698 6977 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0228 13:18:26.255817 6977 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123
e7f15500\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rjlcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:27Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.299708 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23b07e53-7e49-4e08-b346-8cd575b7f2ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e9a3dd1f8ee7245b4185ab8828852983776a3725c0e0202031f019e4d1cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd666d737612906cd221c611576137054485e66782603973709d756be628e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c564d2a6ac76545e63bc8c1cf09ebbad28f954f1986e6a6a272c23bda9207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16305ee570c789c9b80ebffbea166bdb4fc0aa0ccddb48f8733c196c5bbc83da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0328bcb49b08da8ee30c55848f414ab75bd375c851b0e6e9faecfb47fe9d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:27Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.321514 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c76969-d16d-46f5-b96a-922ebfb0a5da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa16e396e95ee6006937e8299d6b0f6d518b29e4c2137a868f90b616e8ba6126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:17:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 13:17:35.239122 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 13:17:35.239232 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 13:17:35.239893 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2443145259/tls.crt::/tmp/serving-cert-2443145259/tls.key\\\\\\\"\\\\nI0228 13:17:35.541500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 13:17:35.544872 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 13:17:35.544912 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 13:17:35.544959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 13:17:35.544976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 13:17:35.551581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 13:17:35.551618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 13:17:35.551627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 13:17:35.551657 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 13:17:35.551664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 13:17:35.551670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 13:17:35.554143 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:17:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:27Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.342154 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dbe604993d7b055554d178d44acee35a2d78e25a60990d0c06e08a983faab0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:27Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.361682 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:27Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.379830 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a273d93c-239a-444c-83cf-2c4ce34fa47b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8228c810c9a5e735daa520594b848b640ba628ceff9f9c7e1c1e83e8f0298b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80dbc6ed9af9a82df0ca46be021ef902caa69
663cbb2028484d3af6e06ecab15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bts94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:27Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.400870 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k4m7f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd164967-b99b-47d0-a691-7d8118fa81ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d6b2023edab21ebd2873bc7584d2346d725a97a9e21d54cb9cc86e63bec717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k4m7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:27Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.416960 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8n99q" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844e4a2-e296-46c1-b047-ace0be3d95bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374faf704c18c26835f7cdc27476492ad368e5b02f083544912cade52403d7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6plnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8n99q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:27Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.433934 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb42x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ce402ca-1bea-4568-85cd-fb4a726f3c92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc184d2423d3fcb2e76df425e0dc9f0993a521f07df2dba1a89a20cb473b0fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb42x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:27Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.453762 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:27Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.479126 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b8a404d-b143-4bf3-b590-c1b482f38f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dda349d1563eeabb8c512ca1ccf28baaa33949460cb1688a3083a8c111713f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15e5
800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15e5800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zj7fc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:27Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.497712 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4091e4-3a55-4913-81f3-026a1a97c57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42ab4a011ee15a60fbda8c13d9c23fb10eaf91119c8d9eb6b4c8dc871fcfffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8311a9c42869f106fa19f0c2268e4146c9732ea75e1282d537114b54d8c20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brq22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:27Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:27 crc kubenswrapper[4897]: 
I0228 13:18:27.891045 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.891122 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.891141 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.891165 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.891184 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:27Z","lastTransitionTime":"2026-02-28T13:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:27 crc kubenswrapper[4897]: E0228 13:18:27.912041 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:27Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.917657 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.917726 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.917743 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.917770 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.917788 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:27Z","lastTransitionTime":"2026-02-28T13:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:27 crc kubenswrapper[4897]: E0228 13:18:27.939437 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:27Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.945209 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.945297 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.945353 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.945378 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.945394 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:27Z","lastTransitionTime":"2026-02-28T13:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:27 crc kubenswrapper[4897]: E0228 13:18:27.966515 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [status patch payload identical to the 13:18:27.939437 attempt above, elided] for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:27Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.972581 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.972643 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.972662 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.972689 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.972708 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:27Z","lastTransitionTime":"2026-02-28T13:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:27 crc kubenswrapper[4897]: E0228 13:18:27.993049 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:27Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.997995 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.998037 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.998054 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.998079 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:27 crc kubenswrapper[4897]: I0228 13:18:27.998097 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:27Z","lastTransitionTime":"2026-02-28T13:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:28 crc kubenswrapper[4897]: E0228 13:18:28.018847 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:28Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:28 crc kubenswrapper[4897]: E0228 13:18:28.019094 4897 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 28 13:18:28 crc kubenswrapper[4897]: I0228 13:18:28.148466 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rjlcm_0e63af1c-1b83-44b6-9872-2dfefa37d433/ovnkube-controller/1.log" Feb 28 13:18:28 crc kubenswrapper[4897]: I0228 13:18:28.153943 4897 scope.go:117] "RemoveContainer" containerID="dc2824abe814c977d0134f3d7f028bab172777adfc237fc0d1eb087193b0cacc" Feb 28 13:18:28 crc kubenswrapper[4897]: E0228 13:18:28.154197 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-rjlcm_openshift-ovn-kubernetes(0e63af1c-1b83-44b6-9872-2dfefa37d433)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" Feb 28 13:18:28 crc kubenswrapper[4897]: I0228 13:18:28.175019 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:28Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:28 crc kubenswrapper[4897]: I0228 13:18:28.195462 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a44acae997c6d9cb73f6b82b8ab2643b21f8cd8ccded9e40004e180915d4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d692ba16a2b7b12b74015fd6e678ac54385aa5905da3d4de5bef79f2648b7097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:28Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:28 crc kubenswrapper[4897]: I0228 13:18:28.213631 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5tms6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5tms6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:28Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:28 crc 
kubenswrapper[4897]: I0228 13:18:28.252474 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23b07e53-7e49-4e08-b346-8cd575b7f2ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e9a3dd1f8ee7245b4185ab8828852983776a3725c0e0202031f019e4d1cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://dd666d737612906cd221c611576137054485e66782603973709d756be628e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c564d2a6ac76545e63bc8c1cf09ebbad28f954f1986e6a6a272c23bda9207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16305ee570c789c9b80ebffbea166bdb4fc0aa0ccddb48f8733c196c5bbc83da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0328bcb49b08da8ee30c55848f414ab75bd375c851b0e6e9faecfb47fe9d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:28Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:28 crc kubenswrapper[4897]: I0228 13:18:28.277073 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c76969-d16d-46f5-b96a-922ebfb0a5da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa16e396e95ee6006937e8299d6b0f6d518b29e4c2137a868f90b616e8ba6126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:17:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 13:17:35.239122 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 13:17:35.239232 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 13:17:35.239893 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2443145259/tls.crt::/tmp/serving-cert-2443145259/tls.key\\\\\\\"\\\\nI0228 13:17:35.541500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 13:17:35.544872 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 13:17:35.544912 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 13:17:35.544959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 13:17:35.544976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 13:17:35.551581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 13:17:35.551618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 13:17:35.551627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 13:17:35.551657 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 13:17:35.551664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 13:17:35.551670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 13:17:35.554143 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:17:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:28Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:28 crc kubenswrapper[4897]: I0228 13:18:28.301206 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dbe604993d7b055554d178d44acee35a2d78e25a60990d0c06e08a983faab0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:28Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:28 crc kubenswrapper[4897]: I0228 13:18:28.322266 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:28Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:28 crc kubenswrapper[4897]: I0228 13:18:28.342459 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1350035568b4df04bb70c40651627190ef9b62281558e327bf1d2090b2fee78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T13:18:28Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:28 crc kubenswrapper[4897]: I0228 13:18:28.376103 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e63af1c-1b83-44b6-9872-2dfefa37d433\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc2824abe814c977d0134f3d7f028bab172777adfc237fc0d1eb087193b0cacc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc2824abe814c977d0134f3d7f028bab172777adfc237fc0d1eb087193b0cacc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"message\\\":\\\"il\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0228 13:18:26.254713 6977 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[ip_prefix:10.217.0.0/22 nexthop:100.64.0.2 policy:{GoSet:[src-ip]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{8944024f-deb7-4076-afb3-4b50a2ff4b4b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0228 13:18:26.255094 6977 obj_retry.go:551] Creating *factory.egressNode crc took: 2.040121ms\\\\nI0228 13:18:26.255125 6977 factory.go:1336] Added *v1.Node event handler 7\\\\nI0228 13:18:26.255156 6977 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0228 13:18:26.255506 6977 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0228 13:18:26.255618 6977 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0228 13:18:26.255668 6977 ovnkube.go:599] Stopped ovnkube\\\\nI0228 13:18:26.255698 6977 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0228 13:18:26.255817 6977 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rjlcm_openshift-ovn-kubernetes(0e63af1c-1b83-44b6-9872-2dfefa37d433)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fb
bc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rjlcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:28Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:28 crc kubenswrapper[4897]: I0228 13:18:28.399101 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k4m7f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd164967-b99b-47d0-a691-7d8118fa81ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d6b2023edab21ebd2873bc7584d2346d725a97a9e21d54cb9cc86e63bec717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k4m7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:28Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:28 crc kubenswrapper[4897]: I0228 13:18:28.417602 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8n99q" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844e4a2-e296-46c1-b047-ace0be3d95bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374faf704c18c26835f7cdc27476492ad368e5b02f083544912cade52403d7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6plnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8n99q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:28Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:28 crc kubenswrapper[4897]: I0228 13:18:28.433394 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb42x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ce402ca-1bea-4568-85cd-fb4a726f3c92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc184d2423d3fcb2e76df425e0dc9f0993a521f07df2dba1a89a20cb473b0fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb42x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:28Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:28 crc kubenswrapper[4897]: I0228 13:18:28.455816 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a273d93c-239a-444c-83cf-2c4ce34fa47b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8228c810c9a5e735daa520594b848b640ba628ceff9f9c7e1c1e83e8f0298b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80dbc6ed9af9a82df0ca46be021ef902caa69
663cbb2028484d3af6e06ecab15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bts94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:28Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:28 crc kubenswrapper[4897]: I0228 13:18:28.456221 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:28 crc kubenswrapper[4897]: E0228 13:18:28.456399 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:18:28 crc kubenswrapper[4897]: I0228 13:18:28.456714 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:18:28 crc kubenswrapper[4897]: E0228 13:18:28.456836 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:18:28 crc kubenswrapper[4897]: I0228 13:18:28.457239 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:18:28 crc kubenswrapper[4897]: E0228 13:18:28.457391 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:18:28 crc kubenswrapper[4897]: I0228 13:18:28.457547 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:18:28 crc kubenswrapper[4897]: E0228 13:18:28.457787 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:18:28 crc kubenswrapper[4897]: I0228 13:18:28.483511 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:28Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:28 crc kubenswrapper[4897]: I0228 13:18:28.510650 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b8a404d-b143-4bf3-b590-c1b482f38f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dda349d1563eeabb8c512ca1ccf28baaa33949460cb1688a3083a8c111713f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15e5
800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15e5800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zj7fc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:28Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:28 crc kubenswrapper[4897]: I0228 13:18:28.524198 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4091e4-3a55-4913-81f3-026a1a97c57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42ab4a011ee15a60fbda8c13d9c23fb10eaf91119c8d9eb6b4c8dc871fcfffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8311a9c42869f106fa19f0c2268e4146c9732ea75e1282d537114b54d8c20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brq22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:28Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:30 crc kubenswrapper[4897]: 
I0228 13:18:30.455558 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:30 crc kubenswrapper[4897]: I0228 13:18:30.455602 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:18:30 crc kubenswrapper[4897]: E0228 13:18:30.455786 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:18:30 crc kubenswrapper[4897]: I0228 13:18:30.455975 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:18:30 crc kubenswrapper[4897]: E0228 13:18:30.456055 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:18:30 crc kubenswrapper[4897]: E0228 13:18:30.456166 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:18:30 crc kubenswrapper[4897]: I0228 13:18:30.456456 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:18:30 crc kubenswrapper[4897]: E0228 13:18:30.456753 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:18:31 crc kubenswrapper[4897]: I0228 13:18:31.469934 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 28 13:18:31 crc kubenswrapper[4897]: E0228 13:18:31.878090 4897 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 28 13:18:32 crc kubenswrapper[4897]: I0228 13:18:32.455634 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:18:32 crc kubenswrapper[4897]: I0228 13:18:32.455756 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:32 crc kubenswrapper[4897]: I0228 13:18:32.455763 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:18:32 crc kubenswrapper[4897]: I0228 13:18:32.455793 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:18:32 crc kubenswrapper[4897]: E0228 13:18:32.456117 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:18:32 crc kubenswrapper[4897]: E0228 13:18:32.456627 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:18:32 crc kubenswrapper[4897]: E0228 13:18:32.456726 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:18:32 crc kubenswrapper[4897]: E0228 13:18:32.456501 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:18:33 crc kubenswrapper[4897]: I0228 13:18:33.470718 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 28 13:18:34 crc kubenswrapper[4897]: I0228 13:18:34.455839 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:18:34 crc kubenswrapper[4897]: I0228 13:18:34.455906 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:18:34 crc kubenswrapper[4897]: I0228 13:18:34.455836 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:34 crc kubenswrapper[4897]: E0228 13:18:34.456016 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:18:34 crc kubenswrapper[4897]: E0228 13:18:34.456304 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:18:34 crc kubenswrapper[4897]: E0228 13:18:34.456644 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:18:34 crc kubenswrapper[4897]: I0228 13:18:34.456684 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:18:34 crc kubenswrapper[4897]: E0228 13:18:34.456832 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:18:34 crc kubenswrapper[4897]: I0228 13:18:34.763379 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:18:34 crc kubenswrapper[4897]: I0228 13:18:34.763585 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:18:34 crc kubenswrapper[4897]: E0228 13:18:34.763646 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:19:06.763612567 +0000 UTC m=+161.005933234 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:18:34 crc kubenswrapper[4897]: E0228 13:18:34.763749 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 13:18:34 crc kubenswrapper[4897]: E0228 13:18:34.763775 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 13:18:34 crc kubenswrapper[4897]: E0228 13:18:34.763793 4897 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 13:18:34 crc kubenswrapper[4897]: I0228 13:18:34.763788 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:34 crc kubenswrapper[4897]: E0228 13:18:34.763858 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-28 13:19:06.763836903 +0000 UTC m=+161.006157590 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 13:18:34 crc kubenswrapper[4897]: I0228 13:18:34.763889 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:34 crc kubenswrapper[4897]: E0228 13:18:34.764001 4897 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 13:18:34 crc kubenswrapper[4897]: E0228 13:18:34.764016 4897 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 13:18:34 crc kubenswrapper[4897]: E0228 13:18:34.764047 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 13:19:06.764032599 +0000 UTC m=+161.006353286 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 13:18:34 crc kubenswrapper[4897]: E0228 13:18:34.764065 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 13:19:06.7640564 +0000 UTC m=+161.006377177 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 13:18:34 crc kubenswrapper[4897]: I0228 13:18:34.865071 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:18:34 crc kubenswrapper[4897]: E0228 13:18:34.865362 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 13:18:34 crc kubenswrapper[4897]: E0228 13:18:34.865427 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 13:18:34 crc 
kubenswrapper[4897]: E0228 13:18:34.865452 4897 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 13:18:34 crc kubenswrapper[4897]: I0228 13:18:34.865377 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8b95b3e0-28e1-4b26-86a3-bd61c5528b3e-metrics-certs\") pod \"network-metrics-daemon-5tms6\" (UID: \"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\") " pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:18:34 crc kubenswrapper[4897]: E0228 13:18:34.865528 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-28 13:19:06.865505763 +0000 UTC m=+161.107826460 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 13:18:34 crc kubenswrapper[4897]: E0228 13:18:34.865800 4897 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 13:18:34 crc kubenswrapper[4897]: E0228 13:18:34.865944 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b95b3e0-28e1-4b26-86a3-bd61c5528b3e-metrics-certs podName:8b95b3e0-28e1-4b26-86a3-bd61c5528b3e nodeName:}" failed. No retries permitted until 2026-02-28 13:19:06.865926715 +0000 UTC m=+161.108247382 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8b95b3e0-28e1-4b26-86a3-bd61c5528b3e-metrics-certs") pod "network-metrics-daemon-5tms6" (UID: "8b95b3e0-28e1-4b26-86a3-bd61c5528b3e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 13:18:36 crc kubenswrapper[4897]: I0228 13:18:36.455502 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:18:36 crc kubenswrapper[4897]: E0228 13:18:36.456211 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:18:36 crc kubenswrapper[4897]: I0228 13:18:36.455589 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:18:36 crc kubenswrapper[4897]: E0228 13:18:36.456534 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:18:36 crc kubenswrapper[4897]: I0228 13:18:36.455637 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:36 crc kubenswrapper[4897]: E0228 13:18:36.456794 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:18:36 crc kubenswrapper[4897]: I0228 13:18:36.455581 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:18:36 crc kubenswrapper[4897]: E0228 13:18:36.457155 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:18:36 crc kubenswrapper[4897]: I0228 13:18:36.480730 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c76969-d16d-46f5-b96a-922ebfb0a5da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa16e396e95ee6006937e8299d6b0f6d518b29e4c2137a868f90b616e8ba6126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:17:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 13:17:35.239122 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 13:17:35.239232 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 13:17:35.239893 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2443145259/tls.crt::/tmp/serving-cert-2443145259/tls.key\\\\\\\"\\\\nI0228 13:17:35.541500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 13:17:35.544872 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 13:17:35.544912 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 13:17:35.544959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 13:17:35.544976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 13:17:35.551581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 13:17:35.551618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 13:17:35.551627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 13:17:35.551657 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 13:17:35.551664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 13:17:35.551670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 13:17:35.554143 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:17:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:36Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:36 crc kubenswrapper[4897]: I0228 13:18:36.499995 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dbe604993d7b055554d178d44acee35a2d78e25a60990d0c06e08a983faab0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:36Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:36 crc kubenswrapper[4897]: I0228 13:18:36.519746 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:36Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:36 crc kubenswrapper[4897]: I0228 13:18:36.538585 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1350035568b4df04bb70c40651627190ef9b62281558e327bf1d2090b2fee78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T13:18:36Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:36 crc kubenswrapper[4897]: I0228 13:18:36.576884 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e63af1c-1b83-44b6-9872-2dfefa37d433\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc2824abe814c977d0134f3d7f028bab172777adfc237fc0d1eb087193b0cacc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc2824abe814c977d0134f3d7f028bab172777adfc237fc0d1eb087193b0cacc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"message\\\":\\\"il\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0228 13:18:26.254713 6977 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[ip_prefix:10.217.0.0/22 nexthop:100.64.0.2 policy:{GoSet:[src-ip]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{8944024f-deb7-4076-afb3-4b50a2ff4b4b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0228 13:18:26.255094 6977 obj_retry.go:551] Creating *factory.egressNode crc took: 2.040121ms\\\\nI0228 13:18:26.255125 6977 factory.go:1336] Added *v1.Node event handler 7\\\\nI0228 13:18:26.255156 6977 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0228 13:18:26.255506 6977 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0228 13:18:26.255618 6977 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0228 13:18:26.255668 6977 ovnkube.go:599] Stopped ovnkube\\\\nI0228 13:18:26.255698 6977 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0228 13:18:26.255817 6977 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rjlcm_openshift-ovn-kubernetes(0e63af1c-1b83-44b6-9872-2dfefa37d433)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fb
bc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rjlcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:36Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:36 crc kubenswrapper[4897]: I0228 13:18:36.604917 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23b07e53-7e49-4e08-b346-8cd575b7f2ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e9a3dd1f8ee7245b4185ab8828852983776a3725c0e0202031f019e4d1cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd666d737612906cd221c611576137054485e66782603973709d756be628e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c564d2a6ac76545e63bc8c1cf09ebbad28f954f1986e6a6a272c23bda9207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16305ee570c789c9b80ebffbea166bdb4fc0aa0ccddb48f8733c196c5bbc83da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0328bcb49b08da8ee30c55848f414ab75bd375c851b0e6e9faecfb47fe9d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:36Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:36 crc kubenswrapper[4897]: I0228 13:18:36.624531 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k4m7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd164967-b99b-47d0-a691-7d8118fa81ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d6b2023edab21ebd2873bc7584d2346d725a97a9e21d54cb9cc86e63bec717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f1
3fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"ho
stIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k4m7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:36Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:36 crc kubenswrapper[4897]: I0228 13:18:36.637347 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8n99q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844e4a2-e296-46c1-b047-ace0be3d95bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374faf704c18c26835f7cdc27476492ad368e5b02f083544912cade52403d7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a
45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6plnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8n99q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:36Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:36 crc kubenswrapper[4897]: I0228 13:18:36.650922 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb42x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ce402ca-1bea-4568-85cd-fb4a726f3c92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc184d2423d3fcb2e76df425e0dc9f0993a521f07df2dba1a89a20cb473b0fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb42x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:36Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:36 crc kubenswrapper[4897]: I0228 13:18:36.668579 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a273d93c-239a-444c-83cf-2c4ce34fa47b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8228c810c9a5e735daa520594b848b640ba628ceff9f9c7e1c1e83e8f0298b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80dbc6ed9af9a82df0ca46be021ef902caa69663cbb2028484d3af6e06ecab15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bts94\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:36Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:36 crc kubenswrapper[4897]: I0228 13:18:36.684182 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc626447-1d60-43c4-8044-186de1ded22f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11284409cbb8172e945c8b65d2338be735ee07187addb55fec10469afe5cefae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c348af3f14fc9e11ac7c0e7a96fcc3bd7f01afec95bd274df460cff777498422\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:16:53Z\\\",\\\"message\\\":\\\"+ timeout 
3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0228 13:16:28.426168 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0228 13:16:28.430441 1 observer_polling.go:159] Starting file observer\\\\nI0228 13:16:28.468584 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0228 13:16:28.473941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0228 13:16:52.994573 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0228 13:16:52.994678 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://102c48fdb031f8a25054df3551368150b35145bb18117e8aa50758efa2490b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bdd363306ea660e09290a1c17ac8386e08a240c8255a4ec6de0b7fa1d6ee78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d05965131a6a61a7831069772cea99a7a0d6555aa5c42b3d5ca15d675676f5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:36Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:36 crc kubenswrapper[4897]: I0228 13:18:36.703625 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:36Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:36 crc kubenswrapper[4897]: I0228 13:18:36.724747 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b8a404d-b143-4bf3-b590-c1b482f38f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dda349d1563eeabb8c512ca1ccf28baaa33949460cb1688a3083a8c111713f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15e5
800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15e5800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zj7fc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:36Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:36 crc kubenswrapper[4897]: I0228 13:18:36.741886 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4091e4-3a55-4913-81f3-026a1a97c57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42ab4a011ee15a60fbda8c13d9c23fb10eaf91119c8d9eb6b4c8dc871fcfffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8311a9c42869f106fa19f0c2268e4146c9732ea75e1282d537114b54d8c20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brq22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:36Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:36 crc kubenswrapper[4897]: 
I0228 13:18:36.759034 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:36Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:36 crc kubenswrapper[4897]: I0228 13:18:36.776736 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a44acae997c6d9cb73f6b82b8ab2643b21f8cd8ccded9e40004e180915d4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d692ba16a2b7b12b74015fd6e678ac54385aa5905da3d4de5bef79f2648b7097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:36Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:36 crc kubenswrapper[4897]: I0228 13:18:36.788244 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5tms6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5tms6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:36Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:36 crc 
kubenswrapper[4897]: I0228 13:18:36.803572 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3cf6453-ea2d-4bd2-b086-fa396dd82b70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8af0215561a1b01ae318b40800de6afba504e149f87ace68c8a481abd66712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61775466ffc1823f865281b6bb4aff1e9769d9ed663eb2cffc6499e5d7a80549\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93eb1d27bca062fcd34fec5ed888121040d1591d47e8e096596aa941980e73b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b9f2058411062a5d44c7b7cc80ca716519919d350632dd6d9522404991279b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49b9f2058411062a5d44c7b7cc80ca716519919d350632dd6d9522404991279b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:36Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:36 crc kubenswrapper[4897]: E0228 13:18:36.879725 4897 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 28 13:18:37 crc kubenswrapper[4897]: I0228 13:18:37.474897 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 13:18:37 crc kubenswrapper[4897]: I0228 13:18:37.497242 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc626447-1d60-43c4-8044-186de1ded22f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11284409cbb8172e945c8b65d2338be735ee07187addb55fec10469afe5cefae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c348af3f14fc9e11ac7c0e7a96fcc3bd7f01afec95bd274df460cff777498422\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:16:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport 
= 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0228 13:16:28.426168 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0228 13:16:28.430441 1 observer_polling.go:159] Starting file observer\\\\nI0228 13:16:28.468584 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0228 13:16:28.473941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0228 13:16:52.994573 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0228 13:16:52.994678 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://102c48fdb031f8a25054df3551368150b35145bb18117e8aa50758efa2490b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bdd363306ea660e09290a1c17ac8386e08a240c8255a4ec6de0b7fa1d6ee78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d05965131a6a61a7831069772cea99a7a0d6555aa5c42b3d5ca15d675676f5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:37Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:37 crc kubenswrapper[4897]: I0228 13:18:37.519550 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k4m7f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd164967-b99b-47d0-a691-7d8118fa81ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d6b2023edab21ebd2873bc7584d2346d725a97a9e21d54cb9cc86e63bec717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k4m7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:37Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:37 crc kubenswrapper[4897]: I0228 13:18:37.537193 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8n99q" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844e4a2-e296-46c1-b047-ace0be3d95bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374faf704c18c26835f7cdc27476492ad368e5b02f083544912cade52403d7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6plnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8n99q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:37Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:37 crc kubenswrapper[4897]: I0228 13:18:37.553495 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb42x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ce402ca-1bea-4568-85cd-fb4a726f3c92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc184d2423d3fcb2e76df425e0dc9f0993a521f07df2dba1a89a20cb473b0fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb42x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:37Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:37 crc kubenswrapper[4897]: I0228 13:18:37.571210 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a273d93c-239a-444c-83cf-2c4ce34fa47b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8228c810c9a5e735daa520594b848b640ba628ceff9f9c7e1c1e83e8f0298b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80dbc6ed9af9a82df0ca46be021ef902caa69
663cbb2028484d3af6e06ecab15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bts94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:37Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:37 crc kubenswrapper[4897]: I0228 13:18:37.591143 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:37Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:37 crc kubenswrapper[4897]: I0228 13:18:37.615278 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b8a404d-b143-4bf3-b590-c1b482f38f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dda349d1563eeabb8c512ca1ccf28baaa33949460cb1688a3083a8c111713f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15e5
800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15e5800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zj7fc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:37Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:37 crc kubenswrapper[4897]: I0228 13:18:37.639051 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4091e4-3a55-4913-81f3-026a1a97c57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42ab4a011ee15a60fbda8c13d9c23fb10eaf91119c8d9eb6b4c8dc871fcfffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8311a9c42869f106fa19f0c2268e4146c9732ea75e1282d537114b54d8c20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brq22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:37Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:37 crc kubenswrapper[4897]: 
I0228 13:18:37.658699 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3cf6453-ea2d-4bd2-b086-fa396dd82b70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8af0215561a1b01ae318b40800de6afba504e149f87ace68c8a481abd66712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61775466ffc1823f865281b6bb4aff1e9769d9ed663eb2cffc6499e5d7a80549\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93eb1d27bca062fcd34fec5ed888121040d1591d47e8e096596aa941980e73b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b9f2058411062a5d44c7b7cc80ca716519919d350632dd6d9522404991279b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49b9f2058411062a5d44c7b7cc80ca716519919d350632dd6d9522404991279b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:37Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:37 crc kubenswrapper[4897]: I0228 13:18:37.678941 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:37Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:37 crc kubenswrapper[4897]: I0228 13:18:37.698364 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a44acae997c6d9cb73f6b82b8ab2643b21f8cd8ccded9e40004e180915d4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d692ba16a2b7b12b74015fd6e678ac54385aa5905da3d4de5bef79f2648b7097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:37Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:37 crc kubenswrapper[4897]: I0228 13:18:37.714781 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5tms6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5tms6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:37Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:37 crc 
kubenswrapper[4897]: I0228 13:18:37.748255 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23b07e53-7e49-4e08-b346-8cd575b7f2ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e9a3dd1f8ee7245b4185ab8828852983776a3725c0e0202031f019e4d1cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://dd666d737612906cd221c611576137054485e66782603973709d756be628e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c564d2a6ac76545e63bc8c1cf09ebbad28f954f1986e6a6a272c23bda9207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16305ee570c789c9b80ebffbea166bdb4fc0aa0ccddb48f8733c196c5bbc83da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0328bcb49b08da8ee30c55848f414ab75bd375c851b0e6e9faecfb47fe9d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:37Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:37 crc kubenswrapper[4897]: I0228 13:18:37.770278 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c76969-d16d-46f5-b96a-922ebfb0a5da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cer
t-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa16e396e95ee6006937e8299d6b0f6d518b29e4c2137a868f90b616e8ba6126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:17:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 13:17:35.239122 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 13:17:35.239232 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 13:17:35.239893 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2443145259/tls.crt::/tmp/serving-cert-2443145259/tls.key\\\\\\\"\\\\nI0228 13:17:35.541500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 13:17:35.544872 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 13:17:35.544912 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 13:17:35.544959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI0228 13:17:35.544976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 13:17:35.551581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 13:17:35.551618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 13:17:35.551627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 13:17:35.551657 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 13:17:35.551664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 13:17:35.551670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 13:17:35.554143 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:17:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:37Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:37 crc kubenswrapper[4897]: I0228 13:18:37.791119 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dbe604993d7b055554d178d44acee35a2d78e25a60990d0c06e08a983faab0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:37Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:37 crc kubenswrapper[4897]: I0228 13:18:37.810932 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:37Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:37 crc kubenswrapper[4897]: I0228 13:18:37.829394 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1350035568b4df04bb70c40651627190ef9b62281558e327bf1d2090b2fee78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T13:18:37Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:37 crc kubenswrapper[4897]: I0228 13:18:37.855501 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e63af1c-1b83-44b6-9872-2dfefa37d433\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc2824abe814c977d0134f3d7f028bab172777adfc237fc0d1eb087193b0cacc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc2824abe814c977d0134f3d7f028bab172777adfc237fc0d1eb087193b0cacc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"message\\\":\\\"il\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0228 13:18:26.254713 6977 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[ip_prefix:10.217.0.0/22 nexthop:100.64.0.2 policy:{GoSet:[src-ip]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{8944024f-deb7-4076-afb3-4b50a2ff4b4b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0228 13:18:26.255094 6977 obj_retry.go:551] Creating *factory.egressNode crc took: 2.040121ms\\\\nI0228 13:18:26.255125 6977 factory.go:1336] Added *v1.Node event handler 7\\\\nI0228 13:18:26.255156 6977 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0228 13:18:26.255506 6977 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0228 13:18:26.255618 6977 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0228 13:18:26.255668 6977 ovnkube.go:599] Stopped ovnkube\\\\nI0228 13:18:26.255698 6977 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0228 13:18:26.255817 6977 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rjlcm_openshift-ovn-kubernetes(0e63af1c-1b83-44b6-9872-2dfefa37d433)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fb
bc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rjlcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:37Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:38 crc kubenswrapper[4897]: I0228 13:18:38.142626 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:38 crc kubenswrapper[4897]: I0228 13:18:38.142697 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:38 crc kubenswrapper[4897]: I0228 13:18:38.142719 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:38 crc kubenswrapper[4897]: I0228 13:18:38.142749 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:38 crc kubenswrapper[4897]: I0228 13:18:38.142770 4897 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:38Z","lastTransitionTime":"2026-02-28T13:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:18:38 crc kubenswrapper[4897]: E0228 13:18:38.163820 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:38Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:38 crc kubenswrapper[4897]: I0228 13:18:38.169249 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:38 crc kubenswrapper[4897]: I0228 13:18:38.169340 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:38 crc kubenswrapper[4897]: I0228 13:18:38.169366 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:38 crc kubenswrapper[4897]: I0228 13:18:38.169395 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:38 crc kubenswrapper[4897]: I0228 13:18:38.169417 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:38Z","lastTransitionTime":"2026-02-28T13:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:38 crc kubenswrapper[4897]: E0228 13:18:38.190459 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:38Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:38 crc kubenswrapper[4897]: I0228 13:18:38.195289 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:38 crc kubenswrapper[4897]: I0228 13:18:38.195408 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:38 crc kubenswrapper[4897]: I0228 13:18:38.195458 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:38 crc kubenswrapper[4897]: I0228 13:18:38.195482 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:38 crc kubenswrapper[4897]: I0228 13:18:38.195499 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:38Z","lastTransitionTime":"2026-02-28T13:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:38 crc kubenswrapper[4897]: E0228 13:18:38.214275 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:38Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:38 crc kubenswrapper[4897]: I0228 13:18:38.228253 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:38 crc kubenswrapper[4897]: I0228 13:18:38.228362 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:38 crc kubenswrapper[4897]: I0228 13:18:38.228382 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:38 crc kubenswrapper[4897]: I0228 13:18:38.228924 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:38 crc kubenswrapper[4897]: I0228 13:18:38.228989 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:38Z","lastTransitionTime":"2026-02-28T13:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:38 crc kubenswrapper[4897]: E0228 13:18:38.249471 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:38Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:38 crc kubenswrapper[4897]: I0228 13:18:38.254583 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:38 crc kubenswrapper[4897]: I0228 13:18:38.254646 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:38 crc kubenswrapper[4897]: I0228 13:18:38.254727 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:38 crc kubenswrapper[4897]: I0228 13:18:38.254808 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:38 crc kubenswrapper[4897]: I0228 13:18:38.254832 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:38Z","lastTransitionTime":"2026-02-28T13:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:38 crc kubenswrapper[4897]: E0228 13:18:38.274428 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:38Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:38 crc kubenswrapper[4897]: E0228 13:18:38.274667 4897 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 28 13:18:38 crc kubenswrapper[4897]: I0228 13:18:38.455999 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:18:38 crc kubenswrapper[4897]: I0228 13:18:38.456079 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:38 crc kubenswrapper[4897]: I0228 13:18:38.456105 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:18:38 crc kubenswrapper[4897]: I0228 13:18:38.456170 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:18:38 crc kubenswrapper[4897]: E0228 13:18:38.456403 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:18:38 crc kubenswrapper[4897]: E0228 13:18:38.456736 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:18:38 crc kubenswrapper[4897]: E0228 13:18:38.457004 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:18:38 crc kubenswrapper[4897]: E0228 13:18:38.457220 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:18:40 crc kubenswrapper[4897]: I0228 13:18:40.455530 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:18:40 crc kubenswrapper[4897]: I0228 13:18:40.455694 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:18:40 crc kubenswrapper[4897]: I0228 13:18:40.455759 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:40 crc kubenswrapper[4897]: I0228 13:18:40.455787 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:18:40 crc kubenswrapper[4897]: E0228 13:18:40.457352 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:18:40 crc kubenswrapper[4897]: E0228 13:18:40.457697 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:18:40 crc kubenswrapper[4897]: E0228 13:18:40.458080 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:18:40 crc kubenswrapper[4897]: E0228 13:18:40.458268 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:18:41 crc kubenswrapper[4897]: E0228 13:18:41.880718 4897 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 28 13:18:42 crc kubenswrapper[4897]: I0228 13:18:42.456218 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:42 crc kubenswrapper[4897]: I0228 13:18:42.456292 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:18:42 crc kubenswrapper[4897]: I0228 13:18:42.456624 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:18:42 crc kubenswrapper[4897]: I0228 13:18:42.456670 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:18:42 crc kubenswrapper[4897]: E0228 13:18:42.456834 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:18:42 crc kubenswrapper[4897]: E0228 13:18:42.457129 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:18:42 crc kubenswrapper[4897]: E0228 13:18:42.457632 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:18:42 crc kubenswrapper[4897]: E0228 13:18:42.457738 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:18:42 crc kubenswrapper[4897]: I0228 13:18:42.458060 4897 scope.go:117] "RemoveContainer" containerID="dc2824abe814c977d0134f3d7f028bab172777adfc237fc0d1eb087193b0cacc" Feb 28 13:18:43 crc kubenswrapper[4897]: I0228 13:18:43.209642 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rjlcm_0e63af1c-1b83-44b6-9872-2dfefa37d433/ovnkube-controller/1.log" Feb 28 13:18:43 crc kubenswrapper[4897]: I0228 13:18:43.212646 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" event={"ID":"0e63af1c-1b83-44b6-9872-2dfefa37d433","Type":"ContainerStarted","Data":"d3f3358de452c192d04f520d20cd75c8060ee9a983fe5123f9c975e15d814969"} Feb 28 13:18:43 crc kubenswrapper[4897]: I0228 13:18:43.213362 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:18:43 crc kubenswrapper[4897]: I0228 13:18:43.238728 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc626447-1d60-43c4-8044-186de1ded22f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11284409cbb8172e945c8b65d2338be735ee07187addb55fec10469afe5cefae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c348af3f14fc9e11ac7c0e7a96fcc3bd7f01afec95bd274df460cff777498422\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:16:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0228 13:16:28.426168 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0228 13:16:28.430441 1 observer_polling.go:159] Starting file observer\\\\nI0228 13:16:28.468584 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0228 13:16:28.473941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0228 13:16:52.994573 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0228 13:16:52.994678 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://102c48fdb031f8a25054df3551368150b35145bb18117e8aa50758efa2490b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bdd363306ea660e09290a1c17ac8386e08a240c8255a4ec6de0b7fa1d6ee78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d05965131a6a61a7831069772cea99a7a0d6555aa5c42b3d5ca15d675676f5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:43Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:43 crc kubenswrapper[4897]: I0228 13:18:43.260463 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k4m7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd164967-b99b-47d0-a691-7d8118fa81ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d6b2023edab21ebd2873bc7584d2346d725a97a9e21d54cb9cc86e63bec717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvjv\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k4m7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:43Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:43 crc kubenswrapper[4897]: I0228 13:18:43.279271 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8n99q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844e4a2-e296-46c1-b047-ace0be3d95bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374faf704c18c26835f7cdc27476492ad368e5b02f083544912cade52403d7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6plnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8n99q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:43Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:43 crc kubenswrapper[4897]: I0228 13:18:43.295408 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb42x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ce402ca-1bea-4568-85cd-fb4a726f3c92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc184d2423d3fcb2e76df425e0dc9f0993a521f07df2dba1a89a20cb473b0fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb42x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:43Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:43 crc kubenswrapper[4897]: I0228 13:18:43.312501 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a273d93c-239a-444c-83cf-2c4ce34fa47b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8228c810c9a5e735daa520594b848b640ba628ceff9f9c7e1c1e83e8f0298b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80dbc6ed9af9a82df0ca46be021ef902caa69663cbb2028484d3af6e06ecab15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bts94\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:43Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:43 crc kubenswrapper[4897]: I0228 13:18:43.332193 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:43Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:43 crc kubenswrapper[4897]: I0228 13:18:43.354578 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b8a404d-b143-4bf3-b590-c1b482f38f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dda349d1563eeabb8c512ca1ccf28baaa33949460cb1688a3083a8c111713f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15e5
800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15e5800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zj7fc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:43Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:43 crc kubenswrapper[4897]: I0228 13:18:43.376504 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4091e4-3a55-4913-81f3-026a1a97c57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42ab4a011ee15a60fbda8c13d9c23fb10eaf91119c8d9eb6b4c8dc871fcfffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8311a9c42869f106fa19f0c2268e4146c9732ea75e1282d537114b54d8c20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brq22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:43Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:43 crc kubenswrapper[4897]: 
I0228 13:18:43.395860 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3cf6453-ea2d-4bd2-b086-fa396dd82b70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8af0215561a1b01ae318b40800de6afba504e149f87ace68c8a481abd66712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61775466ffc1823f865281b6bb4aff1e9769d9ed663eb2cffc6499e5d7a80549\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93eb1d27bca062fcd34fec5ed888121040d1591d47e8e096596aa941980e73b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b9f2058411062a5d44c7b7cc80ca716519919d350632dd6d9522404991279b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49b9f2058411062a5d44c7b7cc80ca716519919d350632dd6d9522404991279b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:43Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:43 crc kubenswrapper[4897]: I0228 13:18:43.416965 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:43Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:43 crc kubenswrapper[4897]: I0228 13:18:43.437810 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a44acae997c6d9cb73f6b82b8ab2643b21f8cd8ccded9e40004e180915d4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d692ba16a2b7b12b74015fd6e678ac54385aa5905da3d4de5bef79f2648b7097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:43Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:43 crc kubenswrapper[4897]: I0228 13:18:43.457561 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5tms6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5tms6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:43Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:43 crc 
kubenswrapper[4897]: I0228 13:18:43.495560 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23b07e53-7e49-4e08-b346-8cd575b7f2ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e9a3dd1f8ee7245b4185ab8828852983776a3725c0e0202031f019e4d1cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://dd666d737612906cd221c611576137054485e66782603973709d756be628e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c564d2a6ac76545e63bc8c1cf09ebbad28f954f1986e6a6a272c23bda9207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16305ee570c789c9b80ebffbea166bdb4fc0aa0ccddb48f8733c196c5bbc83da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0328bcb49b08da8ee30c55848f414ab75bd375c851b0e6e9faecfb47fe9d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:43Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:43 crc kubenswrapper[4897]: I0228 13:18:43.521035 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c76969-d16d-46f5-b96a-922ebfb0a5da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cer
t-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa16e396e95ee6006937e8299d6b0f6d518b29e4c2137a868f90b616e8ba6126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:17:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 13:17:35.239122 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 13:17:35.239232 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 13:17:35.239893 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2443145259/tls.crt::/tmp/serving-cert-2443145259/tls.key\\\\\\\"\\\\nI0228 13:17:35.541500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 13:17:35.544872 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 13:17:35.544912 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 13:17:35.544959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI0228 13:17:35.544976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 13:17:35.551581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 13:17:35.551618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 13:17:35.551627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 13:17:35.551657 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 13:17:35.551664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 13:17:35.551670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 13:17:35.554143 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:17:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:43Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:43 crc kubenswrapper[4897]: I0228 13:18:43.545447 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dbe604993d7b055554d178d44acee35a2d78e25a60990d0c06e08a983faab0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:43Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:43 crc kubenswrapper[4897]: I0228 13:18:43.567408 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:43Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:43 crc kubenswrapper[4897]: I0228 13:18:43.587924 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1350035568b4df04bb70c40651627190ef9b62281558e327bf1d2090b2fee78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T13:18:43Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:43 crc kubenswrapper[4897]: I0228 13:18:43.622035 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e63af1c-1b83-44b6-9872-2dfefa37d433\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f3358de452c192d04f520d20cd75c8060ee9a983fe5123f9c975e15d814969\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc2824abe814c977d0134f3d7f028bab172777adfc237fc0d1eb087193b0cacc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"message\\\":\\\"il\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0228 13:18:26.254713 6977 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[ip_prefix:10.217.0.0/22 nexthop:100.64.0.2 policy:{GoSet:[src-ip]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{8944024f-deb7-4076-afb3-4b50a2ff4b4b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0228 13:18:26.255094 6977 obj_retry.go:551] Creating *factory.egressNode crc took: 2.040121ms\\\\nI0228 13:18:26.255125 6977 factory.go:1336] Added *v1.Node event handler 7\\\\nI0228 13:18:26.255156 6977 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0228 13:18:26.255506 6977 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0228 13:18:26.255618 6977 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0228 13:18:26.255668 6977 ovnkube.go:599] Stopped ovnkube\\\\nI0228 13:18:26.255698 6977 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0228 13:18:26.255817 6977 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rjlcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:43Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:44 crc kubenswrapper[4897]: I0228 13:18:44.217850 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rjlcm_0e63af1c-1b83-44b6-9872-2dfefa37d433/ovnkube-controller/2.log" Feb 28 13:18:44 crc kubenswrapper[4897]: I0228 13:18:44.219358 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rjlcm_0e63af1c-1b83-44b6-9872-2dfefa37d433/ovnkube-controller/1.log" Feb 28 13:18:44 crc kubenswrapper[4897]: I0228 13:18:44.222122 4897 generic.go:334] "Generic (PLEG): container finished" podID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerID="d3f3358de452c192d04f520d20cd75c8060ee9a983fe5123f9c975e15d814969" exitCode=1 Feb 28 13:18:44 crc kubenswrapper[4897]: I0228 13:18:44.222168 4897 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" event={"ID":"0e63af1c-1b83-44b6-9872-2dfefa37d433","Type":"ContainerDied","Data":"d3f3358de452c192d04f520d20cd75c8060ee9a983fe5123f9c975e15d814969"} Feb 28 13:18:44 crc kubenswrapper[4897]: I0228 13:18:44.222213 4897 scope.go:117] "RemoveContainer" containerID="dc2824abe814c977d0134f3d7f028bab172777adfc237fc0d1eb087193b0cacc" Feb 28 13:18:44 crc kubenswrapper[4897]: I0228 13:18:44.223775 4897 scope.go:117] "RemoveContainer" containerID="d3f3358de452c192d04f520d20cd75c8060ee9a983fe5123f9c975e15d814969" Feb 28 13:18:44 crc kubenswrapper[4897]: E0228 13:18:44.224098 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-rjlcm_openshift-ovn-kubernetes(0e63af1c-1b83-44b6-9872-2dfefa37d433)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" Feb 28 13:18:44 crc kubenswrapper[4897]: I0228 13:18:44.238130 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:44Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:44 crc kubenswrapper[4897]: I0228 13:18:44.248644 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a44acae997c6d9cb73f6b82b8ab2643b21f8cd8ccded9e40004e180915d4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d692ba16a2b7b12b74015fd6e678ac54385aa5905da3d4de5bef79f2648b7097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:44Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:44 crc kubenswrapper[4897]: I0228 13:18:44.259699 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5tms6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5tms6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:44Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:44 crc 
kubenswrapper[4897]: I0228 13:18:44.274222 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3cf6453-ea2d-4bd2-b086-fa396dd82b70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8af0215561a1b01ae318b40800de6afba504e149f87ace68c8a481abd66712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61775466ffc1823f865281b6bb4aff1e9769d9ed663eb2cffc6499e5d7a80549\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93eb1d27bca062fcd34fec5ed888121040d1591d47e8e096596aa941980e73b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b9f2058411062a5d44c7b7cc80ca716519919d350632dd6d9522404991279b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49b9f2058411062a5d44c7b7cc80ca716519919d350632dd6d9522404991279b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:44Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:44 crc kubenswrapper[4897]: I0228 13:18:44.290884 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c76969-d16d-46f5-b96a-922ebfb0a5da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa16e396e95ee6006937e8299d6b0f6d518b29e4c2137a868f90b616e8ba6126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:17:35Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0228 13:17:35.239122 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 13:17:35.239232 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 13:17:35.239893 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2443145259/tls.crt::/tmp/serving-cert-2443145259/tls.key\\\\\\\"\\\\nI0228 13:17:35.541500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 13:17:35.544872 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 13:17:35.544912 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 13:17:35.544959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 13:17:35.544976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 13:17:35.551581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 13:17:35.551618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 13:17:35.551627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 13:17:35.551657 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 13:17:35.551664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 13:17:35.551670 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 13:17:35.554143 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:17:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f
89f7bc3d24141dad4ceeaca0544cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:44Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:44 crc kubenswrapper[4897]: I0228 13:18:44.304583 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dbe604993d7b055554d178d44acee35a2d78e25a60990d0c06e08a983faab0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:44Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:44 crc kubenswrapper[4897]: I0228 13:18:44.323699 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:44Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:44 crc kubenswrapper[4897]: I0228 13:18:44.341944 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1350035568b4df04bb70c40651627190ef9b62281558e327bf1d2090b2fee78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T13:18:44Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:44 crc kubenswrapper[4897]: I0228 13:18:44.375463 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e63af1c-1b83-44b6-9872-2dfefa37d433\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f3358de452c192d04f520d20cd75c8060ee9a983fe5123f9c975e15d814969\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc2824abe814c977d0134f3d7f028bab172777adfc237fc0d1eb087193b0cacc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"message\\\":\\\"il\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0228 13:18:26.254713 6977 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[ip_prefix:10.217.0.0/22 nexthop:100.64.0.2 policy:{GoSet:[src-ip]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{8944024f-deb7-4076-afb3-4b50a2ff4b4b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0228 13:18:26.255094 6977 obj_retry.go:551] Creating *factory.egressNode crc took: 2.040121ms\\\\nI0228 13:18:26.255125 6977 factory.go:1336] Added *v1.Node event handler 7\\\\nI0228 13:18:26.255156 6977 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0228 13:18:26.255506 6977 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0228 13:18:26.255618 6977 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0228 13:18:26.255668 6977 ovnkube.go:599] Stopped ovnkube\\\\nI0228 13:18:26.255698 6977 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0228 13:18:26.255817 6977 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3f3358de452c192d04f520d20cd75c8060ee9a983fe5123f9c975e15d814969\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T13:18:43Z\\\",\\\"message\\\":\\\"ble:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0228 
13:18:43.541272 7163 obj_retry.go:551] Creating *factory.egressNode crc took: 2.440002ms\\\\nI0228 13:18:43.541303 7163 factory.go:1336] Added *v1.Node event handler 7\\\\nI0228 13:18:43.541385 7163 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0228 13:18:43.541400 7163 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0228 13:18:43.541431 7163 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0228 13:18:43.541478 7163 handler.go:208] Removed *v1.Node event handler 2\\\\nI0228 13:18:43.541494 7163 factory.go:656] Stopping watch factory\\\\nI0228 13:18:43.541698 7163 handler.go:208] Removed *v1.Node event handler 7\\\\nI0228 13:18:43.541708 7163 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0228 13:18:43.541803 7163 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0228 13:18:43.541841 7163 ovnkube.go:599] Stopped ovnkube\\\\nI0228 13:18:43.541867 7163 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0228 13:18:43.541931 7163 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123
e7f15500\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rjlcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:44Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:44 crc kubenswrapper[4897]: I0228 13:18:44.411404 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23b07e53-7e49-4e08-b346-8cd575b7f2ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e9a3dd1f8ee7245b4185ab8828852983776a3725c0e0202031f019e4d1cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd666d737612906cd221c611576137054485e66782603973709d756be628e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c564d2a6ac76545e63bc8c1cf09ebbad28f954f1986e6a6a272c23bda9207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16305ee570c789c9b80ebffbea166bdb4fc0aa0ccddb48f8733c196c5bbc83da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0328bcb49b08da8ee30c55848f414ab75bd375c851b0e6e9faecfb47fe9d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:44Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:44 crc kubenswrapper[4897]: I0228 13:18:44.432932 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k4m7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd164967-b99b-47d0-a691-7d8118fa81ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d6b2023edab21ebd2873bc7584d2346d725a97a9e21d54cb9cc86e63bec717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f1
3fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"ho
stIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k4m7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:44Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:44 crc kubenswrapper[4897]: I0228 13:18:44.448180 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8n99q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844e4a2-e296-46c1-b047-ace0be3d95bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374faf704c18c26835f7cdc27476492ad368e5b02f083544912cade52403d7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a
45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6plnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8n99q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:44Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:44 crc kubenswrapper[4897]: I0228 13:18:44.455710 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:44 crc kubenswrapper[4897]: I0228 13:18:44.455778 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:18:44 crc kubenswrapper[4897]: E0228 13:18:44.455846 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:18:44 crc kubenswrapper[4897]: I0228 13:18:44.455922 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:18:44 crc kubenswrapper[4897]: I0228 13:18:44.456099 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:18:44 crc kubenswrapper[4897]: E0228 13:18:44.456086 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:18:44 crc kubenswrapper[4897]: E0228 13:18:44.456165 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:18:44 crc kubenswrapper[4897]: E0228 13:18:44.456242 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:18:44 crc kubenswrapper[4897]: I0228 13:18:44.462156 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb42x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ce402ca-1bea-4568-85cd-fb4a726f3c92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc184d2423d3fcb2e76df425e0dc9f0993a521f07df2dba1a89a20cb473b0fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb42x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:44Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:44 crc kubenswrapper[4897]: I0228 13:18:44.477069 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a273d93c-239a-444c-83cf-2c4ce34fa47b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8228c810c9a5e735daa520594b848b640ba628ceff9f9c7e1c1e83e8f0298b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80dbc6ed9af9a82df0ca46be021ef902caa69
663cbb2028484d3af6e06ecab15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bts94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:44Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:44 crc kubenswrapper[4897]: I0228 13:18:44.494024 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc626447-1d60-43c4-8044-186de1ded22f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11284409cbb8172e945c8b65d2338be735ee07187addb55fec10469afe5cefae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c348af3f14fc9e11ac7c0e7a96fcc3bd7f01afec95bd274df460cff777498422\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:16:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0228 13:16:28.426168 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0228 13:16:28.430441 1 observer_polling.go:159] Starting file observer\\\\nI0228 13:16:28.468584 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0228 13:16:28.473941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0228 13:16:52.994573 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0228 13:16:52.994678 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://102c48fdb031f8a25054df3551368150b35145bb18117e8aa50758efa2490b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bdd363306ea660e09290a1c17ac8386e08a240c8255a4ec6de0b7fa1d6ee78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d05965131a6a61a7831069772cea99a7a0d6555aa5c42b3d5ca15d675676f5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:44Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:44 crc kubenswrapper[4897]: I0228 13:18:44.510428 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:44Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:44 crc kubenswrapper[4897]: I0228 13:18:44.530443 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b8a404d-b143-4bf3-b590-c1b482f38f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dda349d1563eeabb8c512ca1ccf28baaa33949460cb1688a3083a8c111713f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15e5
800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15e5800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zj7fc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:44Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:44 crc kubenswrapper[4897]: I0228 13:18:44.547810 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4091e4-3a55-4913-81f3-026a1a97c57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42ab4a011ee15a60fbda8c13d9c23fb10eaf91119c8d9eb6b4c8dc871fcfffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8311a9c42869f106fa19f0c2268e4146c9732ea75e1282d537114b54d8c20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brq22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:44Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:45 crc kubenswrapper[4897]: 
I0228 13:18:45.229889 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rjlcm_0e63af1c-1b83-44b6-9872-2dfefa37d433/ovnkube-controller/2.log" Feb 28 13:18:45 crc kubenswrapper[4897]: I0228 13:18:45.234615 4897 scope.go:117] "RemoveContainer" containerID="d3f3358de452c192d04f520d20cd75c8060ee9a983fe5123f9c975e15d814969" Feb 28 13:18:45 crc kubenswrapper[4897]: E0228 13:18:45.234885 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-rjlcm_openshift-ovn-kubernetes(0e63af1c-1b83-44b6-9872-2dfefa37d433)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" Feb 28 13:18:45 crc kubenswrapper[4897]: I0228 13:18:45.257255 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b8a404d-b143-4bf3-b590-c1b482f38f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dda349d1563eeabb8c512ca1ccf28baaa33949460cb1688a3083a8c111713f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15e5
800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15e5800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zj7fc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:45Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:45 crc kubenswrapper[4897]: I0228 13:18:45.271890 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4091e4-3a55-4913-81f3-026a1a97c57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42ab4a011ee15a60fbda8c13d9c23fb10eaf91119c8d9eb6b4c8dc871fcfffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8311a9c42869f106fa19f0c2268e4146c9732ea75e1282d537114b54d8c20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brq22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:45Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:45 crc kubenswrapper[4897]: 
I0228 13:18:45.291416 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:45Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:45 crc kubenswrapper[4897]: I0228 13:18:45.309795 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a44acae997c6d9cb73f6b82b8ab2643b21f8cd8ccded9e40004e180915d4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d692ba16a2b7b12b74015fd6e678ac54385aa5905da3d4de5bef79f2648b7097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:45Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:45 crc kubenswrapper[4897]: I0228 13:18:45.325007 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5tms6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5tms6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:45Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:45 crc 
kubenswrapper[4897]: I0228 13:18:45.353552 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3cf6453-ea2d-4bd2-b086-fa396dd82b70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8af0215561a1b01ae318b40800de6afba504e149f87ace68c8a481abd66712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61775466ffc1823f865281b6bb4aff1e9769d9ed663eb2cffc6499e5d7a80549\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93eb1d27bca062fcd34fec5ed888121040d1591d47e8e096596aa941980e73b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b9f2058411062a5d44c7b7cc80ca716519919d350632dd6d9522404991279b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49b9f2058411062a5d44c7b7cc80ca716519919d350632dd6d9522404991279b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:45Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:45 crc kubenswrapper[4897]: I0228 13:18:45.391236 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:45Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:45 crc kubenswrapper[4897]: I0228 13:18:45.407199 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dbe604993d7b055554d178d44acee35a2d78e25a60990d0c06e08a983faab0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:45Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:45 crc kubenswrapper[4897]: I0228 13:18:45.420050 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:45Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:45 crc kubenswrapper[4897]: I0228 13:18:45.433808 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1350035568b4df04bb70c40651627190ef9b62281558e327bf1d2090b2fee78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T13:18:45Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:45 crc kubenswrapper[4897]: I0228 13:18:45.455032 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e63af1c-1b83-44b6-9872-2dfefa37d433\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f3358de452c192d04f520d20cd75c8060ee9a983fe5123f9c975e15d814969\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3f3358de452c192d04f520d20cd75c8060ee9a983fe5123f9c975e15d814969\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T13:18:43Z\\\",\\\"message\\\":\\\"ble:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0228 13:18:43.541272 7163 obj_retry.go:551] Creating *factory.egressNode crc took: 2.440002ms\\\\nI0228 13:18:43.541303 7163 factory.go:1336] Added *v1.Node event handler 7\\\\nI0228 13:18:43.541385 7163 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0228 13:18:43.541400 7163 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0228 13:18:43.541431 7163 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0228 13:18:43.541478 7163 handler.go:208] Removed *v1.Node event handler 2\\\\nI0228 13:18:43.541494 7163 factory.go:656] Stopping watch factory\\\\nI0228 13:18:43.541698 7163 handler.go:208] Removed *v1.Node event handler 7\\\\nI0228 13:18:43.541708 7163 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0228 13:18:43.541803 7163 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0228 13:18:43.541841 7163 ovnkube.go:599] Stopped ovnkube\\\\nI0228 13:18:43.541867 7163 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0228 13:18:43.541931 7163 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rjlcm_openshift-ovn-kubernetes(0e63af1c-1b83-44b6-9872-2dfefa37d433)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fb
bc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rjlcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:45Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:45 crc kubenswrapper[4897]: I0228 13:18:45.487515 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23b07e53-7e49-4e08-b346-8cd575b7f2ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e9a3dd1f8ee7245b4185ab8828852983776a3725c0e0202031f019e4d1cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd666d737612906cd221c611576137054485e66782603973709d756be628e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c564d2a6ac76545e63bc8c1cf09ebbad28f954f1986e6a6a272c23bda9207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16305ee570c789c9b80ebffbea166bdb4fc0aa0ccddb48f8733c196c5bbc83da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0328bcb49b08da8ee30c55848f414ab75bd375c851b0e6e9faecfb47fe9d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:45Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:45 crc kubenswrapper[4897]: I0228 13:18:45.507818 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c76969-d16d-46f5-b96a-922ebfb0a5da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa16e396e95ee6006937e8299d6b0f6d518b29e4c2137a868f90b616e8ba6126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:17:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 13:17:35.239122 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 13:17:35.239232 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 13:17:35.239893 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2443145259/tls.crt::/tmp/serving-cert-2443145259/tls.key\\\\\\\"\\\\nI0228 13:17:35.541500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 13:17:35.544872 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 13:17:35.544912 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 13:17:35.544959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 13:17:35.544976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 13:17:35.551581 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 13:17:35.551618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 13:17:35.551627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 13:17:35.551657 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 13:17:35.551664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 13:17:35.551670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 13:17:35.554143 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:17:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:45Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:45 crc kubenswrapper[4897]: I0228 13:18:45.523255 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8n99q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844e4a2-e296-46c1-b047-ace0be3d95bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374faf704c18c26835f7cdc27476492ad368e5b02f083544912cade52403d7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6plnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8n99q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:45Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:45 crc kubenswrapper[4897]: I0228 13:18:45.538679 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb42x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ce402ca-1bea-4568-85cd-fb4a726f3c92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc184d2423d3fcb2e76df425e0dc9f0993a521f07df2dba1a89a20cb473b0fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb42x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:45Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:45 crc kubenswrapper[4897]: I0228 13:18:45.555649 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a273d93c-239a-444c-83cf-2c4ce34fa47b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8228c810c9a5e735daa520594b848b640ba628ceff9f9c7e1c1e83e8f0298b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80dbc6ed9af9a82df0ca46be021ef902caa69
663cbb2028484d3af6e06ecab15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bts94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:45Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:45 crc kubenswrapper[4897]: I0228 13:18:45.574563 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc626447-1d60-43c4-8044-186de1ded22f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11284409cbb8172e945c8b65d2338be735ee07187addb55fec10469afe5cefae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c348af3f14fc9e11ac7c0e7a96fcc3bd7f01afec95bd274df460cff777498422\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:16:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0228 13:16:28.426168 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0228 13:16:28.430441 1 observer_polling.go:159] Starting file observer\\\\nI0228 13:16:28.468584 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0228 13:16:28.473941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0228 13:16:52.994573 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0228 13:16:52.994678 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://102c48fdb031f8a25054df3551368150b35145bb18117e8aa50758efa2490b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bdd363306ea660e09290a1c17ac8386e08a240c8255a4ec6de0b7fa1d6ee78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d05965131a6a61a7831069772cea99a7a0d6555aa5c42b3d5ca15d675676f5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:45Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:45 crc kubenswrapper[4897]: I0228 13:18:45.598074 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k4m7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd164967-b99b-47d0-a691-7d8118fa81ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d6b2023edab21ebd2873bc7584d2346d725a97a9e21d54cb9cc86e63bec717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvjv\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k4m7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:45Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:46 crc kubenswrapper[4897]: I0228 13:18:46.456063 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:18:46 crc kubenswrapper[4897]: E0228 13:18:46.456239 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:18:46 crc kubenswrapper[4897]: I0228 13:18:46.456296 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:18:46 crc kubenswrapper[4897]: I0228 13:18:46.456293 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:46 crc kubenswrapper[4897]: I0228 13:18:46.456388 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:18:46 crc kubenswrapper[4897]: E0228 13:18:46.456517 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:18:46 crc kubenswrapper[4897]: E0228 13:18:46.456697 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:18:46 crc kubenswrapper[4897]: E0228 13:18:46.456882 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:18:46 crc kubenswrapper[4897]: I0228 13:18:46.479111 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k4m7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd164967-b99b-47d0-a691-7d8118fa81ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d6b2023edab21ebd2873bc7584d2346d725a97a9e21d54cb9cc86e63bec717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release
\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k4m7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:46Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:46 crc kubenswrapper[4897]: I0228 13:18:46.495208 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8n99q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844e4a2-e296-46c1-b047-ace0be3d95bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374faf704c18c26835f7cdc27476492ad368e5b02f083544912cade52403d7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6plnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8n99q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:46Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:46 crc kubenswrapper[4897]: I0228 13:18:46.513075 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb42x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ce402ca-1bea-4568-85cd-fb4a726f3c92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc184d2423d3fcb2e76df425e0dc9f0993a521f07df2dba1a89a20cb473b0fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb42x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:46Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:46 crc kubenswrapper[4897]: I0228 13:18:46.530509 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a273d93c-239a-444c-83cf-2c4ce34fa47b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8228c810c9a5e735daa520594b848b640ba628ceff9f9c7e1c1e83e8f0298b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80dbc6ed9af9a82df0ca46be021ef902caa69663cbb2028484d3af6e06ecab15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bts94\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:46Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:46 crc kubenswrapper[4897]: I0228 13:18:46.551580 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc626447-1d60-43c4-8044-186de1ded22f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11284409cbb8172e945c8b65d2338be735ee07187addb55fec10469afe5cefae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c348af3f14fc9e11ac7c0e7a96fcc3bd7f01afec95bd274df460cff777498422\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:16:53Z\\\",\\\"message\\\":\\\"+ timeout 
3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0228 13:16:28.426168 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0228 13:16:28.430441 1 observer_polling.go:159] Starting file observer\\\\nI0228 13:16:28.468584 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0228 13:16:28.473941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0228 13:16:52.994573 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0228 13:16:52.994678 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://102c48fdb031f8a25054df3551368150b35145bb18117e8aa50758efa2490b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bdd363306ea660e09290a1c17ac8386e08a240c8255a4ec6de0b7fa1d6ee78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d05965131a6a61a7831069772cea99a7a0d6555aa5c42b3d5ca15d675676f5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:46Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:46 crc kubenswrapper[4897]: I0228 13:18:46.572796 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:46Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:46 crc kubenswrapper[4897]: I0228 13:18:46.596628 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b8a404d-b143-4bf3-b590-c1b482f38f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dda349d1563eeabb8c512ca1ccf28baaa33949460cb1688a3083a8c111713f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15e5
800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15e5800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zj7fc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:46Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:46 crc kubenswrapper[4897]: I0228 13:18:46.613916 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4091e4-3a55-4913-81f3-026a1a97c57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42ab4a011ee15a60fbda8c13d9c23fb10eaf91119c8d9eb6b4c8dc871fcfffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8311a9c42869f106fa19f0c2268e4146c9732ea75e1282d537114b54d8c20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brq22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:46Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:46 crc kubenswrapper[4897]: 
I0228 13:18:46.633120 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:46Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:46 crc kubenswrapper[4897]: I0228 13:18:46.651796 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a44acae997c6d9cb73f6b82b8ab2643b21f8cd8ccded9e40004e180915d4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d692ba16a2b7b12b74015fd6e678ac54385aa5905da3d4de5bef79f2648b7097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:46Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:46 crc kubenswrapper[4897]: I0228 13:18:46.668741 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5tms6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5tms6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:46Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:46 crc 
kubenswrapper[4897]: I0228 13:18:46.687798 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3cf6453-ea2d-4bd2-b086-fa396dd82b70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8af0215561a1b01ae318b40800de6afba504e149f87ace68c8a481abd66712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61775466ffc1823f865281b6bb4aff1e9769d9ed663eb2cffc6499e5d7a80549\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93eb1d27bca062fcd34fec5ed888121040d1591d47e8e096596aa941980e73b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b9f2058411062a5d44c7b7cc80ca716519919d350632dd6d9522404991279b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49b9f2058411062a5d44c7b7cc80ca716519919d350632dd6d9522404991279b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:46Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:46 crc kubenswrapper[4897]: I0228 13:18:46.708435 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c76969-d16d-46f5-b96a-922ebfb0a5da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa16e396e95ee6006937e8299d6b0f6d518b29e4c2137a868f90b616e8ba6126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:17:35Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0228 13:17:35.239122 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 13:17:35.239232 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 13:17:35.239893 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2443145259/tls.crt::/tmp/serving-cert-2443145259/tls.key\\\\\\\"\\\\nI0228 13:17:35.541500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 13:17:35.544872 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 13:17:35.544912 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 13:17:35.544959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 13:17:35.544976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 13:17:35.551581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 13:17:35.551618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 13:17:35.551627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 13:17:35.551657 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 13:17:35.551664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 13:17:35.551670 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 13:17:35.554143 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:17:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f
89f7bc3d24141dad4ceeaca0544cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:46Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:46 crc kubenswrapper[4897]: I0228 13:18:46.728429 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dbe604993d7b055554d178d44acee35a2d78e25a60990d0c06e08a983faab0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:46Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:46 crc kubenswrapper[4897]: I0228 13:18:46.748011 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:46Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:46 crc kubenswrapper[4897]: I0228 13:18:46.767755 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1350035568b4df04bb70c40651627190ef9b62281558e327bf1d2090b2fee78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T13:18:46Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:46 crc kubenswrapper[4897]: I0228 13:18:46.798014 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e63af1c-1b83-44b6-9872-2dfefa37d433\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f3358de452c192d04f520d20cd75c8060ee9a983fe5123f9c975e15d814969\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3f3358de452c192d04f520d20cd75c8060ee9a983fe5123f9c975e15d814969\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T13:18:43Z\\\",\\\"message\\\":\\\"ble:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0228 13:18:43.541272 7163 obj_retry.go:551] Creating *factory.egressNode crc took: 2.440002ms\\\\nI0228 13:18:43.541303 7163 factory.go:1336] Added *v1.Node event handler 7\\\\nI0228 13:18:43.541385 7163 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0228 13:18:43.541400 7163 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0228 13:18:43.541431 7163 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0228 13:18:43.541478 7163 handler.go:208] Removed *v1.Node event handler 2\\\\nI0228 13:18:43.541494 7163 factory.go:656] Stopping watch factory\\\\nI0228 13:18:43.541698 7163 handler.go:208] Removed *v1.Node event handler 7\\\\nI0228 13:18:43.541708 7163 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0228 13:18:43.541803 7163 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0228 13:18:43.541841 7163 ovnkube.go:599] Stopped ovnkube\\\\nI0228 13:18:43.541867 7163 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0228 13:18:43.541931 7163 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rjlcm_openshift-ovn-kubernetes(0e63af1c-1b83-44b6-9872-2dfefa37d433)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fb
bc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rjlcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:46Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:46 crc kubenswrapper[4897]: I0228 13:18:46.829648 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23b07e53-7e49-4e08-b346-8cd575b7f2ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e9a3dd1f8ee7245b4185ab8828852983776a3725c0e0202031f019e4d1cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd666d737612906cd221c611576137054485e66782603973709d756be628e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c564d2a6ac76545e63bc8c1cf09ebbad28f954f1986e6a6a272c23bda9207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16305ee570c789c9b80ebffbea166bdb4fc0aa0ccddb48f8733c196c5bbc83da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0328bcb49b08da8ee30c55848f414ab75bd375c851b0e6e9faecfb47fe9d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:46Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:46 crc kubenswrapper[4897]: E0228 13:18:46.882839 4897 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 28 13:18:48 crc kubenswrapper[4897]: I0228 13:18:48.455625 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:48 crc kubenswrapper[4897]: I0228 13:18:48.455673 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:18:48 crc kubenswrapper[4897]: I0228 13:18:48.455646 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:18:48 crc kubenswrapper[4897]: I0228 13:18:48.455739 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:18:48 crc kubenswrapper[4897]: E0228 13:18:48.455933 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:18:48 crc kubenswrapper[4897]: E0228 13:18:48.456060 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:18:48 crc kubenswrapper[4897]: E0228 13:18:48.456196 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:18:48 crc kubenswrapper[4897]: E0228 13:18:48.456606 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:18:48 crc kubenswrapper[4897]: I0228 13:18:48.610878 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:48 crc kubenswrapper[4897]: I0228 13:18:48.611222 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:48 crc kubenswrapper[4897]: I0228 13:18:48.611232 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:48 crc kubenswrapper[4897]: I0228 13:18:48.611246 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:48 crc kubenswrapper[4897]: I0228 13:18:48.611256 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:48Z","lastTransitionTime":"2026-02-28T13:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:48 crc kubenswrapper[4897]: E0228 13:18:48.631855 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:48Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:48 crc kubenswrapper[4897]: I0228 13:18:48.637212 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:48 crc kubenswrapper[4897]: I0228 13:18:48.637254 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:48 crc kubenswrapper[4897]: I0228 13:18:48.637265 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:48 crc kubenswrapper[4897]: I0228 13:18:48.637282 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:48 crc kubenswrapper[4897]: I0228 13:18:48.637293 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:48Z","lastTransitionTime":"2026-02-28T13:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:48 crc kubenswrapper[4897]: E0228 13:18:48.654054 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:48Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:48 crc kubenswrapper[4897]: I0228 13:18:48.659213 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:48 crc kubenswrapper[4897]: I0228 13:18:48.659247 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:48 crc kubenswrapper[4897]: I0228 13:18:48.659258 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:48 crc kubenswrapper[4897]: I0228 13:18:48.659273 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:48 crc kubenswrapper[4897]: I0228 13:18:48.659284 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:48Z","lastTransitionTime":"2026-02-28T13:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:48 crc kubenswrapper[4897]: E0228 13:18:48.679141 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:48Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:48 crc kubenswrapper[4897]: I0228 13:18:48.683516 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:48 crc kubenswrapper[4897]: I0228 13:18:48.683535 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:48 crc kubenswrapper[4897]: I0228 13:18:48.683542 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:48 crc kubenswrapper[4897]: I0228 13:18:48.683554 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:48 crc kubenswrapper[4897]: I0228 13:18:48.683562 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:48Z","lastTransitionTime":"2026-02-28T13:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:48 crc kubenswrapper[4897]: E0228 13:18:48.704960 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:48Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:48 crc kubenswrapper[4897]: I0228 13:18:48.710530 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:48 crc kubenswrapper[4897]: I0228 13:18:48.710573 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:48 crc kubenswrapper[4897]: I0228 13:18:48.710591 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:48 crc kubenswrapper[4897]: I0228 13:18:48.710614 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:48 crc kubenswrapper[4897]: I0228 13:18:48.710632 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:48Z","lastTransitionTime":"2026-02-28T13:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:48 crc kubenswrapper[4897]: E0228 13:18:48.730746 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:48Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:48 crc kubenswrapper[4897]: E0228 13:18:48.730969 4897 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 28 13:18:50 crc kubenswrapper[4897]: I0228 13:18:50.455666 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:18:50 crc kubenswrapper[4897]: I0228 13:18:50.455772 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:18:50 crc kubenswrapper[4897]: I0228 13:18:50.455713 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:18:50 crc kubenswrapper[4897]: E0228 13:18:50.455940 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:18:50 crc kubenswrapper[4897]: I0228 13:18:50.455960 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:50 crc kubenswrapper[4897]: E0228 13:18:50.456167 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:18:50 crc kubenswrapper[4897]: E0228 13:18:50.456397 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:18:50 crc kubenswrapper[4897]: E0228 13:18:50.456575 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:18:51 crc kubenswrapper[4897]: E0228 13:18:51.883659 4897 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 28 13:18:52 crc kubenswrapper[4897]: I0228 13:18:52.456534 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:52 crc kubenswrapper[4897]: I0228 13:18:52.456593 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:18:52 crc kubenswrapper[4897]: E0228 13:18:52.456647 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:18:52 crc kubenswrapper[4897]: I0228 13:18:52.456545 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:18:52 crc kubenswrapper[4897]: I0228 13:18:52.456820 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:18:52 crc kubenswrapper[4897]: E0228 13:18:52.457057 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:18:52 crc kubenswrapper[4897]: E0228 13:18:52.457178 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:18:52 crc kubenswrapper[4897]: E0228 13:18:52.457305 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:18:54 crc kubenswrapper[4897]: I0228 13:18:54.455574 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:18:54 crc kubenswrapper[4897]: I0228 13:18:54.455649 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:18:54 crc kubenswrapper[4897]: I0228 13:18:54.455574 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:54 crc kubenswrapper[4897]: E0228 13:18:54.455797 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:18:54 crc kubenswrapper[4897]: I0228 13:18:54.455897 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:18:54 crc kubenswrapper[4897]: E0228 13:18:54.456096 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:18:54 crc kubenswrapper[4897]: E0228 13:18:54.456190 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:18:54 crc kubenswrapper[4897]: E0228 13:18:54.456386 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:18:56 crc kubenswrapper[4897]: I0228 13:18:56.455808 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:18:56 crc kubenswrapper[4897]: I0228 13:18:56.455927 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:56 crc kubenswrapper[4897]: I0228 13:18:56.456042 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:18:56 crc kubenswrapper[4897]: E0228 13:18:56.456156 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:18:56 crc kubenswrapper[4897]: I0228 13:18:56.456194 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:18:56 crc kubenswrapper[4897]: E0228 13:18:56.455939 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:18:56 crc kubenswrapper[4897]: E0228 13:18:56.456256 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:18:56 crc kubenswrapper[4897]: E0228 13:18:56.456305 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:18:56 crc kubenswrapper[4897]: I0228 13:18:56.477434 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3cf6453-ea2d-4bd2-b086-fa396dd82b70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8af0215561a1b01ae318b40800de6afba504e149f87ace68c8a481abd66712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee
1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61775466ffc1823f865281b6bb4aff1e9769d9ed663eb2cffc6499e5d7a80549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93eb1d27bca062fcd34fec5ed888121040d1591d47e8e096596aa941980e73b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource
-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b9f2058411062a5d44c7b7cc80ca716519919d350632dd6d9522404991279b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49b9f2058411062a5d44c7b7cc80ca716519919d350632dd6d9522404991279b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:56Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:56 crc kubenswrapper[4897]: I0228 13:18:56.497757 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:56Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:56 crc kubenswrapper[4897]: I0228 13:18:56.516543 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a44acae997c6d9cb73f6b82b8ab2643b21f8cd8ccded9e40004e180915d4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d692ba16a2b7b12b74015fd6e678ac54385aa5905da3d4de5bef79f2648b7097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:56Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:56 crc kubenswrapper[4897]: I0228 13:18:56.535474 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5tms6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5tms6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:56Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:56 crc 
kubenswrapper[4897]: I0228 13:18:56.573777 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e63af1c-1b83-44b6-9872-2dfefa37d433\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f3358de452c192d04f520d20cd75c8060ee9a983fe5123f9c975e15d814969\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3f3358de452c192d04f520d20cd75c8060ee9a983fe5123f9c975e15d814969\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T13:18:43Z\\\",\\\"message\\\":\\\"ble:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0228 13:18:43.541272 7163 obj_retry.go:551] Creating *factory.egressNode crc took: 2.440002ms\\\\nI0228 13:18:43.541303 7163 factory.go:1336] Added *v1.Node event handler 7\\\\nI0228 13:18:43.541385 7163 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0228 13:18:43.541400 7163 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0228 13:18:43.541431 7163 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0228 13:18:43.541478 7163 handler.go:208] Removed *v1.Node event handler 2\\\\nI0228 13:18:43.541494 7163 factory.go:656] Stopping watch factory\\\\nI0228 13:18:43.541698 7163 handler.go:208] Removed *v1.Node event handler 7\\\\nI0228 13:18:43.541708 7163 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0228 13:18:43.541803 7163 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0228 13:18:43.541841 7163 ovnkube.go:599] Stopped ovnkube\\\\nI0228 13:18:43.541867 7163 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0228 13:18:43.541931 7163 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rjlcm_openshift-ovn-kubernetes(0e63af1c-1b83-44b6-9872-2dfefa37d433)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fb
bc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rjlcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:56Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:56 crc kubenswrapper[4897]: I0228 13:18:56.609625 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23b07e53-7e49-4e08-b346-8cd575b7f2ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e9a3dd1f8ee7245b4185ab8828852983776a3725c0e0202031f019e4d1cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd666d737612906cd221c611576137054485e66782603973709d756be628e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c564d2a6ac76545e63bc8c1cf09ebbad28f954f1986e6a6a272c23bda9207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16305ee570c789c9b80ebffbea166bdb4fc0aa0ccddb48f8733c196c5bbc83da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0328bcb49b08da8ee30c55848f414ab75bd375c851b0e6e9faecfb47fe9d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:56Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:56 crc kubenswrapper[4897]: I0228 13:18:56.631696 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c76969-d16d-46f5-b96a-922ebfb0a5da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa16e396e95ee6006937e8299d6b0f6d518b29e4c2137a868f90b616e8ba6126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:17:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 13:17:35.239122 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 13:17:35.239232 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 13:17:35.239893 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2443145259/tls.crt::/tmp/serving-cert-2443145259/tls.key\\\\\\\"\\\\nI0228 13:17:35.541500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 13:17:35.544872 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 13:17:35.544912 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 13:17:35.544959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 13:17:35.544976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 13:17:35.551581 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 13:17:35.551618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 13:17:35.551627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 13:17:35.551657 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 13:17:35.551664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 13:17:35.551670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 13:17:35.554143 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:17:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:56Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:56 crc kubenswrapper[4897]: I0228 13:18:56.652688 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dbe604993d7b055554d178d44acee35a2d78e25a60990d0c06e08a983faab0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:56Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:56 crc kubenswrapper[4897]: I0228 13:18:56.672561 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:56Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:56 crc kubenswrapper[4897]: I0228 13:18:56.687722 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1350035568b4df04bb70c40651627190ef9b62281558e327bf1d2090b2fee78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T13:18:56Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:56 crc kubenswrapper[4897]: I0228 13:18:56.709283 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc626447-1d60-43c4-8044-186de1ded22f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11284409cbb8172e945c8b65d2338be735ee07187addb55fec10469afe5cefae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c348af3f14fc9e11ac7c0e7a96fcc3bd7f01afec95bd274df460cff777498422\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:16:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ 
'[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0228 13:16:28.426168 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0228 13:16:28.430441 1 observer_polling.go:159] Starting file observer\\\\nI0228 13:16:28.468584 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0228 13:16:28.473941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0228 13:16:52.994573 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0228 13:16:52.994678 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://102c48fdb031f8a25054df3551368150b35145bb18117e8aa50758efa2490b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bdd363306ea660e09290a1c17ac8386e08a240c8255a4ec6de0b7fa1d6ee78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d05965131a6a61a7831069772cea99a7a0d6555aa5c42b3d5ca15d675676f5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:56Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:56 crc kubenswrapper[4897]: I0228 13:18:56.731846 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k4m7f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd164967-b99b-47d0-a691-7d8118fa81ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d6b2023edab21ebd2873bc7584d2346d725a97a9e21d54cb9cc86e63bec717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k4m7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:56Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:56 crc kubenswrapper[4897]: I0228 13:18:56.748668 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8n99q" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844e4a2-e296-46c1-b047-ace0be3d95bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374faf704c18c26835f7cdc27476492ad368e5b02f083544912cade52403d7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6plnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8n99q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:56Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:56 crc kubenswrapper[4897]: I0228 13:18:56.765211 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb42x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ce402ca-1bea-4568-85cd-fb4a726f3c92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc184d2423d3fcb2e76df425e0dc9f0993a521f07df2dba1a89a20cb473b0fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb42x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:56Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:56 crc kubenswrapper[4897]: I0228 13:18:56.781391 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a273d93c-239a-444c-83cf-2c4ce34fa47b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8228c810c9a5e735daa520594b848b640ba628ceff9f9c7e1c1e83e8f0298b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80dbc6ed9af9a82df0ca46be021ef902caa69
663cbb2028484d3af6e06ecab15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bts94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:56Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:56 crc kubenswrapper[4897]: I0228 13:18:56.800016 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:56Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:56 crc kubenswrapper[4897]: I0228 13:18:56.821225 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b8a404d-b143-4bf3-b590-c1b482f38f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dda349d1563eeabb8c512ca1ccf28baaa33949460cb1688a3083a8c111713f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15e5
800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15e5800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zj7fc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:56Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:56 crc kubenswrapper[4897]: I0228 13:18:56.844363 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4091e4-3a55-4913-81f3-026a1a97c57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42ab4a011ee15a60fbda8c13d9c23fb10eaf91119c8d9eb6b4c8dc871fcfffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8311a9c42869f106fa19f0c2268e4146c9732ea75e1282d537114b54d8c20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brq22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:56Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:56 crc kubenswrapper[4897]: 
E0228 13:18:56.884609 4897 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 28 13:18:57 crc kubenswrapper[4897]: I0228 13:18:57.456934 4897 scope.go:117] "RemoveContainer" containerID="d3f3358de452c192d04f520d20cd75c8060ee9a983fe5123f9c975e15d814969" Feb 28 13:18:57 crc kubenswrapper[4897]: E0228 13:18:57.457427 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-rjlcm_openshift-ovn-kubernetes(0e63af1c-1b83-44b6-9872-2dfefa37d433)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" Feb 28 13:18:58 crc kubenswrapper[4897]: I0228 13:18:58.455733 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:18:58 crc kubenswrapper[4897]: I0228 13:18:58.455795 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:18:58 crc kubenswrapper[4897]: E0228 13:18:58.455920 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:18:58 crc kubenswrapper[4897]: I0228 13:18:58.456036 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:18:58 crc kubenswrapper[4897]: I0228 13:18:58.456084 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:18:58 crc kubenswrapper[4897]: E0228 13:18:58.456252 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:18:58 crc kubenswrapper[4897]: E0228 13:18:58.456373 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:18:58 crc kubenswrapper[4897]: E0228 13:18:58.456462 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:18:58 crc kubenswrapper[4897]: I0228 13:18:58.931864 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:58 crc kubenswrapper[4897]: I0228 13:18:58.931925 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:58 crc kubenswrapper[4897]: I0228 13:18:58.931947 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:58 crc kubenswrapper[4897]: I0228 13:18:58.931978 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:58 crc kubenswrapper[4897]: I0228 13:18:58.932003 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:58Z","lastTransitionTime":"2026-02-28T13:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:58 crc kubenswrapper[4897]: E0228 13:18:58.957702 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:58Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:58 crc kubenswrapper[4897]: I0228 13:18:58.962617 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:58 crc kubenswrapper[4897]: I0228 13:18:58.962667 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:58 crc kubenswrapper[4897]: I0228 13:18:58.962680 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:58 crc kubenswrapper[4897]: I0228 13:18:58.962698 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:58 crc kubenswrapper[4897]: I0228 13:18:58.962711 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:58Z","lastTransitionTime":"2026-02-28T13:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:58 crc kubenswrapper[4897]: E0228 13:18:58.981537 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:58Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:58 crc kubenswrapper[4897]: I0228 13:18:58.986764 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:58 crc kubenswrapper[4897]: I0228 13:18:58.986834 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:58 crc kubenswrapper[4897]: I0228 13:18:58.986855 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:58 crc kubenswrapper[4897]: I0228 13:18:58.986949 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:58 crc kubenswrapper[4897]: I0228 13:18:58.986990 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:58Z","lastTransitionTime":"2026-02-28T13:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:59 crc kubenswrapper[4897]: E0228 13:18:59.008227 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:59Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:59 crc kubenswrapper[4897]: I0228 13:18:59.013246 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:59 crc kubenswrapper[4897]: I0228 13:18:59.013289 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:59 crc kubenswrapper[4897]: I0228 13:18:59.013299 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:59 crc kubenswrapper[4897]: I0228 13:18:59.013329 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:59 crc kubenswrapper[4897]: I0228 13:18:59.013341 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:59Z","lastTransitionTime":"2026-02-28T13:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:59 crc kubenswrapper[4897]: E0228 13:18:59.031542 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:59Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:59 crc kubenswrapper[4897]: I0228 13:18:59.037077 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:18:59 crc kubenswrapper[4897]: I0228 13:18:59.037137 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:18:59 crc kubenswrapper[4897]: I0228 13:18:59.037160 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:18:59 crc kubenswrapper[4897]: I0228 13:18:59.037188 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:18:59 crc kubenswrapper[4897]: I0228 13:18:59.037208 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:18:59Z","lastTransitionTime":"2026-02-28T13:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:18:59 crc kubenswrapper[4897]: E0228 13:18:59.057145 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:18:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:18:59Z is after 2025-08-24T17:21:41Z" Feb 28 13:18:59 crc kubenswrapper[4897]: E0228 13:18:59.057421 4897 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 28 13:19:00 crc kubenswrapper[4897]: I0228 13:19:00.456138 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:19:00 crc kubenswrapper[4897]: I0228 13:19:00.456210 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:19:00 crc kubenswrapper[4897]: I0228 13:19:00.456282 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:19:00 crc kubenswrapper[4897]: E0228 13:19:00.456376 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:19:00 crc kubenswrapper[4897]: I0228 13:19:00.456427 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:19:00 crc kubenswrapper[4897]: E0228 13:19:00.456622 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:19:00 crc kubenswrapper[4897]: E0228 13:19:00.456801 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:19:00 crc kubenswrapper[4897]: E0228 13:19:00.457032 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:19:01 crc kubenswrapper[4897]: E0228 13:19:01.886222 4897 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 28 13:19:02 crc kubenswrapper[4897]: I0228 13:19:02.455855 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:19:02 crc kubenswrapper[4897]: E0228 13:19:02.456417 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:19:02 crc kubenswrapper[4897]: I0228 13:19:02.455934 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:19:02 crc kubenswrapper[4897]: E0228 13:19:02.456708 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:19:02 crc kubenswrapper[4897]: I0228 13:19:02.455878 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:19:02 crc kubenswrapper[4897]: E0228 13:19:02.456976 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:19:02 crc kubenswrapper[4897]: I0228 13:19:02.456027 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:19:02 crc kubenswrapper[4897]: E0228 13:19:02.457237 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:19:04 crc kubenswrapper[4897]: I0228 13:19:04.314823 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-k4m7f_cd164967-b99b-47d0-a691-7d8118fa81ce/kube-multus/0.log" Feb 28 13:19:04 crc kubenswrapper[4897]: I0228 13:19:04.314870 4897 generic.go:334] "Generic (PLEG): container finished" podID="cd164967-b99b-47d0-a691-7d8118fa81ce" containerID="02d6b2023edab21ebd2873bc7584d2346d725a97a9e21d54cb9cc86e63bec717" exitCode=1 Feb 28 13:19:04 crc kubenswrapper[4897]: I0228 13:19:04.314919 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-k4m7f" event={"ID":"cd164967-b99b-47d0-a691-7d8118fa81ce","Type":"ContainerDied","Data":"02d6b2023edab21ebd2873bc7584d2346d725a97a9e21d54cb9cc86e63bec717"} Feb 28 13:19:04 crc kubenswrapper[4897]: I0228 13:19:04.315562 4897 scope.go:117] "RemoveContainer" containerID="02d6b2023edab21ebd2873bc7584d2346d725a97a9e21d54cb9cc86e63bec717" Feb 28 13:19:04 crc kubenswrapper[4897]: I0228 13:19:04.339780 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc626447-1d60-43c4-8044-186de1ded22f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11284409cbb8172e945c8b65d2338be735ee07187addb55fec10469afe5cefae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c348af3f14fc9e11ac7c0e7a96fcc3bd7f01afec95bd274df460cff777498422\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:16:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0228 13:16:28.426168 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0228 13:16:28.430441 1 observer_polling.go:159] Starting file observer\\\\nI0228 13:16:28.468584 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0228 13:16:28.473941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0228 13:16:52.994573 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0228 13:16:52.994678 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://102c48fdb031f8a25054df3551368150b35145bb18117e8aa50758efa2490b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bdd363306ea660e09290a1c17ac8386e08a240c8255a4ec6de0b7fa1d6ee78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d05965131a6a61a7831069772cea99a7a0d6555aa5c42b3d5ca15d675676f5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:04Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:04 crc kubenswrapper[4897]: I0228 13:19:04.361169 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k4m7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd164967-b99b-47d0-a691-7d8118fa81ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d6b2023edab21ebd2873bc7584d2346d725a97a9e21d54cb9cc86e63bec717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02d6b2023edab21ebd2873bc7584d2346d725a97a9e21d54cb9cc86e63bec717\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T13:19:04Z\\\",\\\"message\\\":\\\"2026-02-28T13:18:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d5a1cb8e-73e2-4756-8609-143ac434e5b8\\\\n2026-02-28T13:18:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d5a1cb8e-73e2-4756-8609-143ac434e5b8 to /host/opt/cni/bin/\\\\n2026-02-28T13:18:19Z [verbose] multus-daemon started\\\\n2026-02-28T13:18:19Z [verbose] Readiness Indicator file check\\\\n2026-02-28T13:19:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k4m7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:04Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:04 crc kubenswrapper[4897]: I0228 13:19:04.378502 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8n99q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844e4a2-e296-46c1-b047-ace0be3d95bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374faf704c18c26835f7cdc27476492ad368e5b02f083544912cade52403d7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6plnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8n99q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:04Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:04 crc kubenswrapper[4897]: I0228 13:19:04.393521 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb42x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ce402ca-1bea-4568-85cd-fb4a726f3c92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc184d2423d3fcb2e76df425e0dc9f0993a521f07df2dba1a89a20cb473b0fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb42x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:04Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:04 crc kubenswrapper[4897]: I0228 13:19:04.410426 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a273d93c-239a-444c-83cf-2c4ce34fa47b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8228c810c9a5e735daa520594b848b640ba628ceff9f9c7e1c1e83e8f0298b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80dbc6ed9af9a82df0ca46be021ef902caa69663cbb2028484d3af6e06ecab15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bts94\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:04Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:04 crc kubenswrapper[4897]: I0228 13:19:04.423947 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:04Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:04 crc kubenswrapper[4897]: I0228 13:19:04.443390 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b8a404d-b143-4bf3-b590-c1b482f38f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dda349d1563eeabb8c512ca1ccf28baaa33949460cb1688a3083a8c111713f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15e5
800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15e5800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zj7fc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:04Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:04 crc kubenswrapper[4897]: I0228 13:19:04.456256 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:19:04 crc kubenswrapper[4897]: I0228 13:19:04.456256 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:19:04 crc kubenswrapper[4897]: I0228 13:19:04.456442 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:19:04 crc kubenswrapper[4897]: E0228 13:19:04.456640 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:19:04 crc kubenswrapper[4897]: I0228 13:19:04.456789 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:19:04 crc kubenswrapper[4897]: E0228 13:19:04.456908 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:19:04 crc kubenswrapper[4897]: E0228 13:19:04.457123 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:19:04 crc kubenswrapper[4897]: E0228 13:19:04.457494 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:19:04 crc kubenswrapper[4897]: I0228 13:19:04.464091 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4091e4-3a55-4913-81f3-026a1a97c57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42ab4a011ee15a60fbda8c13d9c23fb10eaf91119c8d9eb6b4c8dc871fcfffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8311a9c42869f106fa19f0c2268e4146c9732e
a75e1282d537114b54d8c20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brq22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:04Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:04 crc kubenswrapper[4897]: I0228 13:19:04.482258 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3cf6453-ea2d-4bd2-b086-fa396dd82b70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8af0215561a1b01ae318b40800de6afba504e149f87ace68c8a481abd66712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61775466ffc1823f865281b6bb4aff1e9769d9ed663eb2cffc6499e5d7a80549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93eb1d27bca062fcd34fec5ed888121040d1591d47e8e096596aa941980e73b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b9f2058411062a5d44c7b7cc80ca716519919d350632dd6d9522404991279b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://49b9f2058411062a5d44c7b7cc80ca716519919d350632dd6d9522404991279b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:04Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:04 crc kubenswrapper[4897]: I0228 13:19:04.497897 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:04Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:04 crc kubenswrapper[4897]: I0228 13:19:04.518245 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a44acae997c6d9cb73f6b82b8ab2643b21f8cd8ccded9e40004e180915d4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d692ba16a2b7b12b74015fd6e678ac54385aa5905da3d4de5bef79f2648b7097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:04Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:04 crc kubenswrapper[4897]: I0228 13:19:04.534632 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5tms6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5tms6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:04Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:04 crc 
kubenswrapper[4897]: I0228 13:19:04.568696 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23b07e53-7e49-4e08-b346-8cd575b7f2ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e9a3dd1f8ee7245b4185ab8828852983776a3725c0e0202031f019e4d1cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://dd666d737612906cd221c611576137054485e66782603973709d756be628e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c564d2a6ac76545e63bc8c1cf09ebbad28f954f1986e6a6a272c23bda9207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16305ee570c789c9b80ebffbea166bdb4fc0aa0ccddb48f8733c196c5bbc83da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0328bcb49b08da8ee30c55848f414ab75bd375c851b0e6e9faecfb47fe9d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:04Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:04 crc kubenswrapper[4897]: I0228 13:19:04.590383 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c76969-d16d-46f5-b96a-922ebfb0a5da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cer
t-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa16e396e95ee6006937e8299d6b0f6d518b29e4c2137a868f90b616e8ba6126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:17:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 13:17:35.239122 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 13:17:35.239232 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 13:17:35.239893 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2443145259/tls.crt::/tmp/serving-cert-2443145259/tls.key\\\\\\\"\\\\nI0228 13:17:35.541500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 13:17:35.544872 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 13:17:35.544912 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 13:17:35.544959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI0228 13:17:35.544976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 13:17:35.551581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 13:17:35.551618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 13:17:35.551627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 13:17:35.551657 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 13:17:35.551664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 13:17:35.551670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 13:17:35.554143 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:17:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:04Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:04 crc kubenswrapper[4897]: I0228 13:19:04.610501 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dbe604993d7b055554d178d44acee35a2d78e25a60990d0c06e08a983faab0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:04Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:04 crc kubenswrapper[4897]: I0228 13:19:04.630143 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:04Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:04 crc kubenswrapper[4897]: I0228 13:19:04.649008 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1350035568b4df04bb70c40651627190ef9b62281558e327bf1d2090b2fee78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T13:19:04Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:04 crc kubenswrapper[4897]: I0228 13:19:04.680192 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e63af1c-1b83-44b6-9872-2dfefa37d433\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f3358de452c192d04f520d20cd75c8060ee9a983fe5123f9c975e15d814969\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3f3358de452c192d04f520d20cd75c8060ee9a983fe5123f9c975e15d814969\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T13:18:43Z\\\",\\\"message\\\":\\\"ble:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0228 13:18:43.541272 7163 obj_retry.go:551] Creating *factory.egressNode crc took: 2.440002ms\\\\nI0228 13:18:43.541303 7163 factory.go:1336] Added *v1.Node event handler 7\\\\nI0228 13:18:43.541385 7163 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0228 13:18:43.541400 7163 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0228 13:18:43.541431 7163 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0228 13:18:43.541478 7163 handler.go:208] Removed *v1.Node event handler 2\\\\nI0228 13:18:43.541494 7163 factory.go:656] Stopping watch factory\\\\nI0228 13:18:43.541698 7163 handler.go:208] Removed *v1.Node event handler 7\\\\nI0228 13:18:43.541708 7163 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0228 13:18:43.541803 7163 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0228 13:18:43.541841 7163 ovnkube.go:599] Stopped ovnkube\\\\nI0228 13:18:43.541867 7163 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0228 13:18:43.541931 7163 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rjlcm_openshift-ovn-kubernetes(0e63af1c-1b83-44b6-9872-2dfefa37d433)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fb
bc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rjlcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:04Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:05 crc kubenswrapper[4897]: I0228 13:19:05.319942 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-k4m7f_cd164967-b99b-47d0-a691-7d8118fa81ce/kube-multus/0.log" Feb 28 13:19:05 crc kubenswrapper[4897]: I0228 13:19:05.320006 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-k4m7f" event={"ID":"cd164967-b99b-47d0-a691-7d8118fa81ce","Type":"ContainerStarted","Data":"56214f971268dd96636e53f8cc401bfde201673331066f07d1235c5bd4fef3e5"} Feb 28 13:19:05 crc kubenswrapper[4897]: I0228 13:19:05.352746 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23b07e53-7e49-4e08-b346-8cd575b7f2ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e9a3dd1f8ee7245b4185ab8828852983776a3725c0e0202031f019e4d1cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd666d737612906cd221c611576137054485e66782603973709d756be628e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c564d2a6ac76545e63bc8c1cf09ebbad28f954f1986e6a6a272c23bda9207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16305ee570c789c9b80ebffbea166bdb4fc0aa0ccddb48f8733c196c5bbc83da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0328bcb49b08da8ee30c55848f414ab75bd375c851b0e6e9faecfb47fe9d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:05Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:05 crc kubenswrapper[4897]: I0228 13:19:05.369625 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c76969-d16d-46f5-b96a-922ebfb0a5da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa16e396e95ee6006937e8299d6b0f6d518b29e4c2137a868f90b616e8ba6126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:17:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 13:17:35.239122 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 13:17:35.239232 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 13:17:35.239893 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2443145259/tls.crt::/tmp/serving-cert-2443145259/tls.key\\\\\\\"\\\\nI0228 13:17:35.541500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 13:17:35.544872 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 13:17:35.544912 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 13:17:35.544959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 13:17:35.544976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 13:17:35.551581 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 13:17:35.551618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 13:17:35.551627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 13:17:35.551657 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 13:17:35.551664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 13:17:35.551670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 13:17:35.554143 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:17:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:05Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:05 crc kubenswrapper[4897]: I0228 13:19:05.388463 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dbe604993d7b055554d178d44acee35a2d78e25a60990d0c06e08a983faab0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:05Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:05 crc kubenswrapper[4897]: I0228 13:19:05.402381 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:05Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:05 crc kubenswrapper[4897]: I0228 13:19:05.417372 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1350035568b4df04bb70c40651627190ef9b62281558e327bf1d2090b2fee78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T13:19:05Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:05 crc kubenswrapper[4897]: I0228 13:19:05.445398 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e63af1c-1b83-44b6-9872-2dfefa37d433\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f3358de452c192d04f520d20cd75c8060ee9a983fe5123f9c975e15d814969\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3f3358de452c192d04f520d20cd75c8060ee9a983fe5123f9c975e15d814969\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T13:18:43Z\\\",\\\"message\\\":\\\"ble:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0228 13:18:43.541272 7163 obj_retry.go:551] Creating *factory.egressNode crc took: 2.440002ms\\\\nI0228 13:18:43.541303 7163 factory.go:1336] Added *v1.Node event handler 7\\\\nI0228 13:18:43.541385 7163 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0228 13:18:43.541400 7163 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0228 13:18:43.541431 7163 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0228 13:18:43.541478 7163 handler.go:208] Removed *v1.Node event handler 2\\\\nI0228 13:18:43.541494 7163 factory.go:656] Stopping watch factory\\\\nI0228 13:18:43.541698 7163 handler.go:208] Removed *v1.Node event handler 7\\\\nI0228 13:18:43.541708 7163 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0228 13:18:43.541803 7163 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0228 13:18:43.541841 7163 ovnkube.go:599] Stopped ovnkube\\\\nI0228 13:18:43.541867 7163 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0228 13:18:43.541931 7163 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rjlcm_openshift-ovn-kubernetes(0e63af1c-1b83-44b6-9872-2dfefa37d433)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fb
bc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rjlcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:05Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:05 crc kubenswrapper[4897]: I0228 13:19:05.459146 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc626447-1d60-43c4-8044-186de1ded22f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11284409cbb8172e945c8b65d2338be735ee07187addb55fec10469afe5cefae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c348af3f14fc9e11ac7c0e7a96fcc3bd7f01afec95bd274df460cff777498422\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:16:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0228 13:16:28.426168 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0228 13:16:28.430441 1 observer_polling.go:159] Starting file observer\\\\nI0228 13:16:28.468584 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0228 13:16:28.473941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0228 13:16:52.994573 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0228 13:16:52.994678 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://102c48fdb031f8a25054df3551368150b35145bb18117e8aa50758efa2490b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bdd363306ea660e09290a1c17ac8386e08a240c8255a4ec6de0b7fa1d6ee78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d05965131a6a61a7831069772cea99a7a0d6555aa5c42b3d5ca15d675676f5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:05Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:05 crc kubenswrapper[4897]: I0228 13:19:05.473435 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k4m7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd164967-b99b-47d0-a691-7d8118fa81ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56214f971268dd96636e53f8cc401bfde201673331066f07d1235c5bd4fef3e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02d6b2023edab21ebd2873bc7584d2346d725a97a9e21d54cb9cc86e63bec717\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T13:19:04Z\\\",\\\"message\\\":\\\"2026-02-28T13:18:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d5a1cb8e-73e2-4756-8609-143ac434e5b8\\\\n2026-02-28T13:18:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d5a1cb8e-73e2-4756-8609-143ac434e5b8 to /host/opt/cni/bin/\\\\n2026-02-28T13:18:19Z [verbose] multus-daemon started\\\\n2026-02-28T13:18:19Z [verbose] Readiness Indicator file check\\\\n2026-02-28T13:19:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:19:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k4m7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:05Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:05 crc kubenswrapper[4897]: I0228 13:19:05.485923 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8n99q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844e4a2-e296-46c1-b047-ace0be3d95bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374faf704c18c26835f7cdc27476492ad368e5b02f083544912cade52403d7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6plnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8n99q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:05Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:05 crc kubenswrapper[4897]: I0228 13:19:05.498848 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb42x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ce402ca-1bea-4568-85cd-fb4a726f3c92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc184d2423d3fcb2e76df425e0dc9f0993a521f07df2dba1a89a20cb473b0fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb42x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:05Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:05 crc kubenswrapper[4897]: I0228 13:19:05.513616 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a273d93c-239a-444c-83cf-2c4ce34fa47b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8228c810c9a5e735daa520594b848b640ba628ceff9f9c7e1c1e83e8f0298b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80dbc6ed9af9a82df0ca46be021ef902caa69663cbb2028484d3af6e06ecab15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bts94\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:05Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:05 crc kubenswrapper[4897]: I0228 13:19:05.535554 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:05Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:05 crc kubenswrapper[4897]: I0228 13:19:05.556284 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b8a404d-b143-4bf3-b590-c1b482f38f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dda349d1563eeabb8c512ca1ccf28baaa33949460cb1688a3083a8c111713f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15e5
800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15e5800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zj7fc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:05Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:05 crc kubenswrapper[4897]: I0228 13:19:05.572370 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4091e4-3a55-4913-81f3-026a1a97c57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42ab4a011ee15a60fbda8c13d9c23fb10eaf91119c8d9eb6b4c8dc871fcfffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8311a9c42869f106fa19f0c2268e4146c9732ea75e1282d537114b54d8c20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brq22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:05Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:05 crc kubenswrapper[4897]: 
I0228 13:19:05.589428 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3cf6453-ea2d-4bd2-b086-fa396dd82b70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8af0215561a1b01ae318b40800de6afba504e149f87ace68c8a481abd66712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61775466ffc1823f865281b6bb4aff1e9769d9ed663eb2cffc6499e5d7a80549\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93eb1d27bca062fcd34fec5ed888121040d1591d47e8e096596aa941980e73b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b9f2058411062a5d44c7b7cc80ca716519919d350632dd6d9522404991279b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49b9f2058411062a5d44c7b7cc80ca716519919d350632dd6d9522404991279b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:05Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:05 crc kubenswrapper[4897]: I0228 13:19:05.603644 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:05Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:05 crc kubenswrapper[4897]: I0228 13:19:05.656418 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a44acae997c6d9cb73f6b82b8ab2643b21f8cd8ccded9e40004e180915d4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d692ba16a2b7b12b74015fd6e678ac54385aa5905da3d4de5bef79f2648b7097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:05Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:05 crc kubenswrapper[4897]: I0228 13:19:05.677089 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5tms6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5tms6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:05Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:06 crc 
kubenswrapper[4897]: I0228 13:19:06.455666 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:19:06 crc kubenswrapper[4897]: I0228 13:19:06.455827 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:19:06 crc kubenswrapper[4897]: I0228 13:19:06.456010 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:19:06 crc kubenswrapper[4897]: E0228 13:19:06.455987 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:19:06 crc kubenswrapper[4897]: E0228 13:19:06.456136 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:19:06 crc kubenswrapper[4897]: E0228 13:19:06.456258 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:19:06 crc kubenswrapper[4897]: I0228 13:19:06.456396 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:19:06 crc kubenswrapper[4897]: E0228 13:19:06.456642 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:19:06 crc kubenswrapper[4897]: I0228 13:19:06.476701 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:06Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:06 crc kubenswrapper[4897]: I0228 13:19:06.500019 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b8a404d-b143-4bf3-b590-c1b482f38f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dda349d1563eeabb8c512ca1ccf28baaa33949460cb1688a3083a8c111713f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15e5
800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15e5800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zj7fc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:06Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:06 crc kubenswrapper[4897]: I0228 13:19:06.516173 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4091e4-3a55-4913-81f3-026a1a97c57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42ab4a011ee15a60fbda8c13d9c23fb10eaf91119c8d9eb6b4c8dc871fcfffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8311a9c42869f106fa19f0c2268e4146c9732ea75e1282d537114b54d8c20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brq22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:06Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:06 crc kubenswrapper[4897]: 
I0228 13:19:06.535097 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:06Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:06 crc kubenswrapper[4897]: I0228 13:19:06.556619 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a44acae997c6d9cb73f6b82b8ab2643b21f8cd8ccded9e40004e180915d4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d692ba16a2b7b12b74015fd6e678ac54385aa5905da3d4de5bef79f2648b7097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:06Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:06 crc kubenswrapper[4897]: I0228 13:19:06.576734 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5tms6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5tms6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:06Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:06 crc 
kubenswrapper[4897]: I0228 13:19:06.595007 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3cf6453-ea2d-4bd2-b086-fa396dd82b70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8af0215561a1b01ae318b40800de6afba504e149f87ace68c8a481abd66712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61775466ffc1823f865281b6bb4aff1e9769d9ed663eb2cffc6499e5d7a80549\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93eb1d27bca062fcd34fec5ed888121040d1591d47e8e096596aa941980e73b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b9f2058411062a5d44c7b7cc80ca716519919d350632dd6d9522404991279b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49b9f2058411062a5d44c7b7cc80ca716519919d350632dd6d9522404991279b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:06Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:06 crc kubenswrapper[4897]: I0228 13:19:06.618356 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c76969-d16d-46f5-b96a-922ebfb0a5da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa16e396e95ee6006937e8299d6b0f6d518b29e4c2137a868f90b616e8ba6126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:17:35Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0228 13:17:35.239122 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 13:17:35.239232 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 13:17:35.239893 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2443145259/tls.crt::/tmp/serving-cert-2443145259/tls.key\\\\\\\"\\\\nI0228 13:17:35.541500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 13:17:35.544872 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 13:17:35.544912 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 13:17:35.544959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 13:17:35.544976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 13:17:35.551581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 13:17:35.551618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 13:17:35.551627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 13:17:35.551657 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 13:17:35.551664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 13:17:35.551670 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 13:17:35.554143 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:17:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f
89f7bc3d24141dad4ceeaca0544cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:06Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:06 crc kubenswrapper[4897]: I0228 13:19:06.637255 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dbe604993d7b055554d178d44acee35a2d78e25a60990d0c06e08a983faab0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:06Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:06 crc kubenswrapper[4897]: I0228 13:19:06.654191 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:06Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:06 crc kubenswrapper[4897]: I0228 13:19:06.669298 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1350035568b4df04bb70c40651627190ef9b62281558e327bf1d2090b2fee78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T13:19:06Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:06 crc kubenswrapper[4897]: I0228 13:19:06.693231 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e63af1c-1b83-44b6-9872-2dfefa37d433\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f3358de452c192d04f520d20cd75c8060ee9a983fe5123f9c975e15d814969\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3f3358de452c192d04f520d20cd75c8060ee9a983fe5123f9c975e15d814969\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T13:18:43Z\\\",\\\"message\\\":\\\"ble:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0228 13:18:43.541272 7163 obj_retry.go:551] Creating *factory.egressNode crc took: 2.440002ms\\\\nI0228 13:18:43.541303 7163 factory.go:1336] Added *v1.Node event handler 7\\\\nI0228 13:18:43.541385 7163 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0228 13:18:43.541400 7163 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0228 13:18:43.541431 7163 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0228 13:18:43.541478 7163 handler.go:208] Removed *v1.Node event handler 2\\\\nI0228 13:18:43.541494 7163 factory.go:656] Stopping watch factory\\\\nI0228 13:18:43.541698 7163 handler.go:208] Removed *v1.Node event handler 7\\\\nI0228 13:18:43.541708 7163 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0228 13:18:43.541803 7163 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0228 13:18:43.541841 7163 ovnkube.go:599] Stopped ovnkube\\\\nI0228 13:18:43.541867 7163 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0228 13:18:43.541931 7163 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rjlcm_openshift-ovn-kubernetes(0e63af1c-1b83-44b6-9872-2dfefa37d433)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fb
bc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rjlcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:06Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:06 crc kubenswrapper[4897]: I0228 13:19:06.719374 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23b07e53-7e49-4e08-b346-8cd575b7f2ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e9a3dd1f8ee7245b4185ab8828852983776a3725c0e0202031f019e4d1cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd666d737612906cd221c611576137054485e66782603973709d756be628e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c564d2a6ac76545e63bc8c1cf09ebbad28f954f1986e6a6a272c23bda9207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16305ee570c789c9b80ebffbea166bdb4fc0aa0ccddb48f8733c196c5bbc83da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0328bcb49b08da8ee30c55848f414ab75bd375c851b0e6e9faecfb47fe9d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:06Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:06 crc kubenswrapper[4897]: I0228 13:19:06.739470 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k4m7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd164967-b99b-47d0-a691-7d8118fa81ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56214f971268dd96636e53f8cc401bfde201673331066f07d1235c5bd4fef3e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f1
3fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02d6b2023edab21ebd2873bc7584d2346d725a97a9e21d54cb9cc86e63bec717\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T13:19:04Z\\\",\\\"message\\\":\\\"2026-02-28T13:18:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d5a1cb8e-73e2-4756-8609-143ac434e5b8\\\\n2026-02-28T13:18:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d5a1cb8e-73e2-4756-8609-143ac434e5b8 to /host/opt/cni/bin/\\\\n2026-02-28T13:18:19Z [verbose] multus-daemon started\\\\n2026-02-28T13:18:19Z [verbose] Readiness Indicator file check\\\\n2026-02-28T13:19:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:19:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k4m7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:06Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:06 crc kubenswrapper[4897]: I0228 13:19:06.753450 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8n99q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844e4a2-e296-46c1-b047-ace0be3d95bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374faf704c18c26835f7cdc27476492ad368e5b02f083544912cade52403d7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6plnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8n99q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:06Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:06 crc kubenswrapper[4897]: I0228 13:19:06.766804 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb42x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ce402ca-1bea-4568-85cd-fb4a726f3c92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc184d2423d3fcb2e76df425e0dc9f0993a521f07df2dba1a89a20cb473b0fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb42x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:06Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:06 crc kubenswrapper[4897]: I0228 13:19:06.784553 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a273d93c-239a-444c-83cf-2c4ce34fa47b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8228c810c9a5e735daa520594b848b640ba628ceff9f9c7e1c1e83e8f0298b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80dbc6ed9af9a82df0ca46be021ef902caa69663cbb2028484d3af6e06ecab15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bts94\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:06Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:06 crc kubenswrapper[4897]: I0228 13:19:06.805180 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc626447-1d60-43c4-8044-186de1ded22f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11284409cbb8172e945c8b65d2338be735ee07187addb55fec10469afe5cefae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c348af3f14fc9e11ac7c0e7a96fcc3bd7f01afec95bd274df460cff777498422\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:16:53Z\\\",\\\"message\\\":\\\"+ timeout 
3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0228 13:16:28.426168 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0228 13:16:28.430441 1 observer_polling.go:159] Starting file observer\\\\nI0228 13:16:28.468584 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0228 13:16:28.473941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0228 13:16:52.994573 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0228 13:16:52.994678 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://102c48fdb031f8a25054df3551368150b35145bb18117e8aa50758efa2490b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bdd363306ea660e09290a1c17ac8386e08a240c8255a4ec6de0b7fa1d6ee78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d05965131a6a61a7831069772cea99a7a0d6555aa5c42b3d5ca15d675676f5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:06Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:06 crc kubenswrapper[4897]: I0228 13:19:06.849721 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:19:06 crc kubenswrapper[4897]: I0228 13:19:06.849882 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:19:06 crc kubenswrapper[4897]: E0228 13:19:06.849921 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:10.849889934 +0000 UTC m=+225.092210661 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:19:06 crc kubenswrapper[4897]: I0228 13:19:06.849970 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:19:06 crc kubenswrapper[4897]: E0228 13:19:06.850032 4897 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 13:19:06 crc kubenswrapper[4897]: I0228 13:19:06.850067 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:19:06 crc kubenswrapper[4897]: E0228 13:19:06.850107 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 13:20:10.85008485 +0000 UTC m=+225.092405547 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 13:19:06 crc kubenswrapper[4897]: E0228 13:19:06.850213 4897 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 13:19:06 crc kubenswrapper[4897]: E0228 13:19:06.850237 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 13:19:06 crc kubenswrapper[4897]: E0228 13:19:06.850366 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 13:19:06 crc kubenswrapper[4897]: E0228 13:19:06.850381 4897 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 13:19:06 crc kubenswrapper[4897]: E0228 13:19:06.850408 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 13:20:10.850378968 +0000 UTC m=+225.092699665 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 13:19:06 crc kubenswrapper[4897]: E0228 13:19:06.850848 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-28 13:20:10.850830882 +0000 UTC m=+225.093151579 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 13:19:06 crc kubenswrapper[4897]: E0228 13:19:06.887769 4897 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 28 13:19:06 crc kubenswrapper[4897]: I0228 13:19:06.950683 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:19:06 crc kubenswrapper[4897]: I0228 13:19:06.950754 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8b95b3e0-28e1-4b26-86a3-bd61c5528b3e-metrics-certs\") pod \"network-metrics-daemon-5tms6\" (UID: \"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\") " pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:19:06 crc kubenswrapper[4897]: E0228 13:19:06.950848 4897 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 13:19:06 crc kubenswrapper[4897]: E0228 13:19:06.950867 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 13:19:06 crc kubenswrapper[4897]: E0228 13:19:06.950899 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 13:19:06 crc kubenswrapper[4897]: E0228 13:19:06.950915 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b95b3e0-28e1-4b26-86a3-bd61c5528b3e-metrics-certs podName:8b95b3e0-28e1-4b26-86a3-bd61c5528b3e nodeName:}" failed. No retries permitted until 2026-02-28 13:20:10.950894632 +0000 UTC m=+225.193215379 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8b95b3e0-28e1-4b26-86a3-bd61c5528b3e-metrics-certs") pod "network-metrics-daemon-5tms6" (UID: "8b95b3e0-28e1-4b26-86a3-bd61c5528b3e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 13:19:06 crc kubenswrapper[4897]: E0228 13:19:06.950918 4897 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 13:19:06 crc kubenswrapper[4897]: E0228 13:19:06.950976 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-28 13:20:10.950958484 +0000 UTC m=+225.193279171 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 13:19:08 crc kubenswrapper[4897]: I0228 13:19:08.456733 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:19:08 crc kubenswrapper[4897]: I0228 13:19:08.456831 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:19:08 crc kubenswrapper[4897]: I0228 13:19:08.456855 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:19:08 crc kubenswrapper[4897]: E0228 13:19:08.457038 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:19:08 crc kubenswrapper[4897]: E0228 13:19:08.457271 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:19:08 crc kubenswrapper[4897]: E0228 13:19:08.457444 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:19:08 crc kubenswrapper[4897]: I0228 13:19:08.457867 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:19:08 crc kubenswrapper[4897]: E0228 13:19:08.458027 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:19:09 crc kubenswrapper[4897]: I0228 13:19:09.363838 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:19:09 crc kubenswrapper[4897]: I0228 13:19:09.363935 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:19:09 crc kubenswrapper[4897]: I0228 13:19:09.363955 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:19:09 crc kubenswrapper[4897]: I0228 13:19:09.363981 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:19:09 crc kubenswrapper[4897]: I0228 13:19:09.363998 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:19:09Z","lastTransitionTime":"2026-02-28T13:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:19:09 crc kubenswrapper[4897]: E0228 13:19:09.384767 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:09Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:09 crc kubenswrapper[4897]: I0228 13:19:09.389445 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:19:09 crc kubenswrapper[4897]: I0228 13:19:09.389512 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:19:09 crc kubenswrapper[4897]: I0228 13:19:09.389536 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:19:09 crc kubenswrapper[4897]: I0228 13:19:09.389570 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:19:09 crc kubenswrapper[4897]: I0228 13:19:09.389592 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:19:09Z","lastTransitionTime":"2026-02-28T13:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:19:09 crc kubenswrapper[4897]: E0228 13:19:09.410959 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:09Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:09 crc kubenswrapper[4897]: I0228 13:19:09.415771 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:19:09 crc kubenswrapper[4897]: I0228 13:19:09.415832 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:19:09 crc kubenswrapper[4897]: I0228 13:19:09.415850 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:19:09 crc kubenswrapper[4897]: I0228 13:19:09.415873 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:19:09 crc kubenswrapper[4897]: I0228 13:19:09.415890 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:19:09Z","lastTransitionTime":"2026-02-28T13:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:19:09 crc kubenswrapper[4897]: E0228 13:19:09.436088 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:09Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:09 crc kubenswrapper[4897]: I0228 13:19:09.440920 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:19:09 crc kubenswrapper[4897]: I0228 13:19:09.440982 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:19:09 crc kubenswrapper[4897]: I0228 13:19:09.441000 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:19:09 crc kubenswrapper[4897]: I0228 13:19:09.441023 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:19:09 crc kubenswrapper[4897]: I0228 13:19:09.441041 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:19:09Z","lastTransitionTime":"2026-02-28T13:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:19:09 crc kubenswrapper[4897]: E0228 13:19:09.461828 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:09Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:09 crc kubenswrapper[4897]: I0228 13:19:09.466275 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:19:09 crc kubenswrapper[4897]: I0228 13:19:09.466349 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:19:09 crc kubenswrapper[4897]: I0228 13:19:09.466368 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:19:09 crc kubenswrapper[4897]: I0228 13:19:09.466391 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:19:09 crc kubenswrapper[4897]: I0228 13:19:09.466411 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:19:09Z","lastTransitionTime":"2026-02-28T13:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:19:09 crc kubenswrapper[4897]: E0228 13:19:09.481887 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:09Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:09 crc kubenswrapper[4897]: E0228 13:19:09.482209 4897 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 28 13:19:10 crc kubenswrapper[4897]: I0228 13:19:10.455917 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:19:10 crc kubenswrapper[4897]: I0228 13:19:10.455974 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:19:10 crc kubenswrapper[4897]: E0228 13:19:10.456147 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:19:10 crc kubenswrapper[4897]: I0228 13:19:10.456216 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:19:10 crc kubenswrapper[4897]: I0228 13:19:10.456281 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:19:10 crc kubenswrapper[4897]: E0228 13:19:10.456519 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:19:10 crc kubenswrapper[4897]: E0228 13:19:10.456618 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:19:10 crc kubenswrapper[4897]: E0228 13:19:10.457118 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:19:11 crc kubenswrapper[4897]: I0228 13:19:11.457580 4897 scope.go:117] "RemoveContainer" containerID="d3f3358de452c192d04f520d20cd75c8060ee9a983fe5123f9c975e15d814969" Feb 28 13:19:11 crc kubenswrapper[4897]: E0228 13:19:11.889392 4897 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 28 13:19:12 crc kubenswrapper[4897]: I0228 13:19:12.348837 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rjlcm_0e63af1c-1b83-44b6-9872-2dfefa37d433/ovnkube-controller/2.log" Feb 28 13:19:12 crc kubenswrapper[4897]: I0228 13:19:12.353417 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" event={"ID":"0e63af1c-1b83-44b6-9872-2dfefa37d433","Type":"ContainerStarted","Data":"bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa"} Feb 28 13:19:12 crc kubenswrapper[4897]: I0228 13:19:12.354041 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:19:12 crc kubenswrapper[4897]: I0228 13:19:12.368224 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8n99q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844e4a2-e296-46c1-b047-ace0be3d95bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374faf704c18c26835f7cdc27476492ad368e5b02f083544912cade52403d7f8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6plnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8n99q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:12Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:12 crc kubenswrapper[4897]: I0228 13:19:12.384875 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb42x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ce402ca-1bea-4568-85cd-fb4a726f3c92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc184d2423d3fcb2e76df425e0dc9f0993a521f07df2dba1a89a20cb473b0fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb42x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:12Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:12 crc kubenswrapper[4897]: I0228 13:19:12.401811 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a273d93c-239a-444c-83cf-2c4ce34fa47b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8228c810c9a5e735daa520594b848b640ba628ceff9f9c7e1c1e83e8f0298b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80dbc6ed9af9a82df0ca46be021ef902caa69663cbb2028484d3af6e06ecab15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bts94\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:12Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:12 crc kubenswrapper[4897]: I0228 13:19:12.420896 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc626447-1d60-43c4-8044-186de1ded22f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11284409cbb8172e945c8b65d2338be735ee07187addb55fec10469afe5cefae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c348af3f14fc9e11ac7c0e7a96fcc3bd7f01afec95bd274df460cff777498422\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:16:53Z\\\",\\\"message\\\":\\\"+ timeout 
3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0228 13:16:28.426168 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0228 13:16:28.430441 1 observer_polling.go:159] Starting file observer\\\\nI0228 13:16:28.468584 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0228 13:16:28.473941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0228 13:16:52.994573 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0228 13:16:52.994678 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://102c48fdb031f8a25054df3551368150b35145bb18117e8aa50758efa2490b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bdd363306ea660e09290a1c17ac8386e08a240c8255a4ec6de0b7fa1d6ee78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d05965131a6a61a7831069772cea99a7a0d6555aa5c42b3d5ca15d675676f5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:12Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:12 crc kubenswrapper[4897]: I0228 13:19:12.443590 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k4m7f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd164967-b99b-47d0-a691-7d8118fa81ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56214f971268dd96636e53f8cc401bfde201673331066f07d1235c5bd4fef3e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02d6b2023edab21ebd2873bc7584d2346d725a97a9e21d54cb9cc86e63bec717\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T13:19:04Z\\\",\\\"message\\\":\\\"2026-02-28T13:18:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d5a1cb8e-73e2-4756-8609-143ac434e5b8\\\\n2026-02-28T13:18:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d5a1cb8e-73e2-4756-8609-143ac434e5b8 to /host/opt/cni/bin/\\\\n2026-02-28T13:18:19Z [verbose] multus-daemon started\\\\n2026-02-28T13:18:19Z [verbose] 
Readiness Indicator file check\\\\n2026-02-28T13:19:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:19:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k4m7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:12Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:12 crc kubenswrapper[4897]: I0228 13:19:12.455537 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:19:12 crc kubenswrapper[4897]: I0228 13:19:12.455629 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:19:12 crc kubenswrapper[4897]: I0228 13:19:12.455670 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:19:12 crc kubenswrapper[4897]: I0228 13:19:12.455740 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:19:12 crc kubenswrapper[4897]: E0228 13:19:12.455749 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:19:12 crc kubenswrapper[4897]: E0228 13:19:12.455845 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:19:12 crc kubenswrapper[4897]: E0228 13:19:12.455954 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:19:12 crc kubenswrapper[4897]: E0228 13:19:12.456054 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:19:12 crc kubenswrapper[4897]: I0228 13:19:12.460657 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b8a404d-b143-4bf3-b590-c1b482f38f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dda349d1563eeabb8c512ca1ccf28baaa33949460cb1688a3083a8c111713f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\
\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15e5800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15e5800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wherea
bouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zj7fc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:12Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:12 crc kubenswrapper[4897]: I0228 13:19:12.476407 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4091e4-3a55-4913-81f3-026a1a97c57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42ab4a011ee15a60fbda8c13d9c23fb10eaf91119c8d9eb6b4c8dc871fcfffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e
18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8311a9c42869f106fa19f0c2268e4146c9732ea75e1282d537114b54d8c20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brq22\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:12Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:12 crc kubenswrapper[4897]: I0228 13:19:12.492844 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:12Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:12 crc kubenswrapper[4897]: I0228 13:19:12.506561 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a44acae997c6d9cb73f6b82b8ab2643b21f8cd8ccded9e40004e180915d4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d692ba16a2b7b12b74015fd6e678ac54385aa5905da3d4de5bef79f2648b7097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:12Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:12 crc kubenswrapper[4897]: I0228 13:19:12.520074 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5tms6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5tms6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:12Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:12 crc 
kubenswrapper[4897]: I0228 13:19:12.533982 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3cf6453-ea2d-4bd2-b086-fa396dd82b70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8af0215561a1b01ae318b40800de6afba504e149f87ace68c8a481abd66712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61775466ffc1823f865281b6bb4aff1e9769d9ed663eb2cffc6499e5d7a80549\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93eb1d27bca062fcd34fec5ed888121040d1591d47e8e096596aa941980e73b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b9f2058411062a5d44c7b7cc80ca716519919d350632dd6d9522404991279b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49b9f2058411062a5d44c7b7cc80ca716519919d350632dd6d9522404991279b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:12Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:12 crc kubenswrapper[4897]: I0228 13:19:12.547298 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:12Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:12 crc kubenswrapper[4897]: I0228 13:19:12.570372 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dbe604993d7b055554d178d44acee35a2d78e25a60990d0c06e08a983faab0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:12Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:12 crc kubenswrapper[4897]: I0228 13:19:12.591586 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:12Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:12 crc kubenswrapper[4897]: I0228 13:19:12.607764 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1350035568b4df04bb70c40651627190ef9b62281558e327bf1d2090b2fee78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T13:19:12Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:12 crc kubenswrapper[4897]: I0228 13:19:12.635947 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e63af1c-1b83-44b6-9872-2dfefa37d433\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3f3358de452c192d04f520d20cd75c8060ee9a983fe5123f9c975e15d814969\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T13:18:43Z\\\",\\\"message\\\":\\\"ble:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0228 13:18:43.541272 7163 obj_retry.go:551] Creating *factory.egressNode crc took: 2.440002ms\\\\nI0228 13:18:43.541303 7163 factory.go:1336] Added *v1.Node event handler 7\\\\nI0228 13:18:43.541385 7163 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0228 13:18:43.541400 7163 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0228 13:18:43.541431 7163 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0228 13:18:43.541478 7163 handler.go:208] Removed *v1.Node event handler 2\\\\nI0228 13:18:43.541494 7163 factory.go:656] Stopping watch factory\\\\nI0228 13:18:43.541698 7163 handler.go:208] Removed *v1.Node event handler 7\\\\nI0228 13:18:43.541708 7163 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0228 13:18:43.541803 7163 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0228 13:18:43.541841 7163 ovnkube.go:599] Stopped ovnkube\\\\nI0228 13:18:43.541867 7163 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0228 13:18:43.541931 7163 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rjlcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:12Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:12 crc kubenswrapper[4897]: I0228 13:19:12.655839 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23b07e53-7e49-4e08-b346-8cd575b7f2ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e9a3dd1f8ee7245b4185ab8828852983776a3725c0e0202031f019e4d1cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd666d737612906cd221c611576137054485e66782603973709d756be628e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c564d2a6ac76545e63bc8c1cf09ebbad28f954f1986e6a6a272c23bda9207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16305ee570c789c9b80ebffbea166bdb4fc0aa0ccddb48f8733c196c5bbc83da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0328bcb49b08da8ee30c55848f414ab75bd375c851b0e6e9faecfb47fe9d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:12Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:12 crc kubenswrapper[4897]: I0228 13:19:12.671975 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c76969-d16d-46f5-b96a-922ebfb0a5da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa16e396e95ee6006937e8299d6b0f6d518b29e4c2137a868f90b616e8ba6126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:17:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 13:17:35.239122 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 13:17:35.239232 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 13:17:35.239893 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2443145259/tls.crt::/tmp/serving-cert-2443145259/tls.key\\\\\\\"\\\\nI0228 13:17:35.541500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 13:17:35.544872 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 13:17:35.544912 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 13:17:35.544959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 13:17:35.544976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 13:17:35.551581 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 13:17:35.551618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 13:17:35.551627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 13:17:35.551657 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 13:17:35.551664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 13:17:35.551670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 13:17:35.554143 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:17:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:12Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:13 crc kubenswrapper[4897]: I0228 13:19:13.363271 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rjlcm_0e63af1c-1b83-44b6-9872-2dfefa37d433/ovnkube-controller/3.log" Feb 28 13:19:13 crc kubenswrapper[4897]: I0228 13:19:13.364556 4897 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rjlcm_0e63af1c-1b83-44b6-9872-2dfefa37d433/ovnkube-controller/2.log" Feb 28 13:19:13 crc kubenswrapper[4897]: I0228 13:19:13.368052 4897 generic.go:334] "Generic (PLEG): container finished" podID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerID="bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa" exitCode=1 Feb 28 13:19:13 crc kubenswrapper[4897]: I0228 13:19:13.368183 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" event={"ID":"0e63af1c-1b83-44b6-9872-2dfefa37d433","Type":"ContainerDied","Data":"bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa"} Feb 28 13:19:13 crc kubenswrapper[4897]: I0228 13:19:13.368291 4897 scope.go:117] "RemoveContainer" containerID="d3f3358de452c192d04f520d20cd75c8060ee9a983fe5123f9c975e15d814969" Feb 28 13:19:13 crc kubenswrapper[4897]: I0228 13:19:13.369191 4897 scope.go:117] "RemoveContainer" containerID="bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa" Feb 28 13:19:13 crc kubenswrapper[4897]: E0228 13:19:13.369467 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rjlcm_openshift-ovn-kubernetes(0e63af1c-1b83-44b6-9872-2dfefa37d433)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" Feb 28 13:19:13 crc kubenswrapper[4897]: I0228 13:19:13.422259 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc626447-1d60-43c4-8044-186de1ded22f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11284409cbb8172e945c8b65d2338be735ee07187addb55fec10469afe5cefae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c348af3f14fc9e11ac7c0e7a96fcc3bd7f01afec95bd274df460cff777498422\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:16:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0228 13:16:28.426168 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0228 13:16:28.430441 1 observer_polling.go:159] Starting file observer\\\\nI0228 13:16:28.468584 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0228 13:16:28.473941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0228 13:16:52.994573 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0228 13:16:52.994678 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://102c48fdb031f8a25054df3551368150b35145bb18117e8aa50758efa2490b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bdd363306ea660e09290a1c17ac8386e08a240c8255a4ec6de0b7fa1d6ee78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d05965131a6a61a7831069772cea99a7a0d6555aa5c42b3d5ca15d675676f5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:13Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:13 crc kubenswrapper[4897]: I0228 13:19:13.442411 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k4m7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd164967-b99b-47d0-a691-7d8118fa81ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56214f971268dd96636e53f8cc401bfde201673331066f07d1235c5bd4fef3e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02d6b2023edab21ebd2873bc7584d2346d725a97a9e21d54cb9cc86e63bec717\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T13:19:04Z\\\",\\\"message\\\":\\\"2026-02-28T13:18:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d5a1cb8e-73e2-4756-8609-143ac434e5b8\\\\n2026-02-28T13:18:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d5a1cb8e-73e2-4756-8609-143ac434e5b8 to /host/opt/cni/bin/\\\\n2026-02-28T13:18:19Z [verbose] multus-daemon started\\\\n2026-02-28T13:18:19Z [verbose] Readiness Indicator file check\\\\n2026-02-28T13:19:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:19:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k4m7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:13Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:13 crc kubenswrapper[4897]: I0228 13:19:13.456342 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8n99q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844e4a2-e296-46c1-b047-ace0be3d95bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374faf704c18c26835f7cdc27476492ad368e5b02f083544912cade52403d7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6plnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8n99q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:13Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:13 crc kubenswrapper[4897]: I0228 13:19:13.469877 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb42x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ce402ca-1bea-4568-85cd-fb4a726f3c92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc184d2423d3fcb2e76df425e0dc9f0993a521f07df2dba1a89a20cb473b0fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb42x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:13Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:13 crc kubenswrapper[4897]: I0228 13:19:13.484141 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a273d93c-239a-444c-83cf-2c4ce34fa47b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8228c810c9a5e735daa520594b848b640ba628ceff9f9c7e1c1e83e8f0298b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80dbc6ed9af9a82df0ca46be021ef902caa69663cbb2028484d3af6e06ecab15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bts94\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:13Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:13 crc kubenswrapper[4897]: I0228 13:19:13.501734 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:13Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:13 crc kubenswrapper[4897]: I0228 13:19:13.517967 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b8a404d-b143-4bf3-b590-c1b482f38f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dda349d1563eeabb8c512ca1ccf28baaa33949460cb1688a3083a8c111713f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15e5
800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15e5800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zj7fc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:13Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:13 crc kubenswrapper[4897]: I0228 13:19:13.533465 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4091e4-3a55-4913-81f3-026a1a97c57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42ab4a011ee15a60fbda8c13d9c23fb10eaf91119c8d9eb6b4c8dc871fcfffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8311a9c42869f106fa19f0c2268e4146c9732ea75e1282d537114b54d8c20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brq22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:13Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:13 crc kubenswrapper[4897]: 
I0228 13:19:13.552659 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3cf6453-ea2d-4bd2-b086-fa396dd82b70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8af0215561a1b01ae318b40800de6afba504e149f87ace68c8a481abd66712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61775466ffc1823f865281b6bb4aff1e9769d9ed663eb2cffc6499e5d7a80549\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93eb1d27bca062fcd34fec5ed888121040d1591d47e8e096596aa941980e73b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b9f2058411062a5d44c7b7cc80ca716519919d350632dd6d9522404991279b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49b9f2058411062a5d44c7b7cc80ca716519919d350632dd6d9522404991279b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:13Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:13 crc kubenswrapper[4897]: I0228 13:19:13.570579 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:13Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:13 crc kubenswrapper[4897]: I0228 13:19:13.588672 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a44acae997c6d9cb73f6b82b8ab2643b21f8cd8ccded9e40004e180915d4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d692ba16a2b7b12b74015fd6e678ac54385aa5905da3d4de5bef79f2648b7097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:13Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:13 crc kubenswrapper[4897]: I0228 13:19:13.605484 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5tms6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5tms6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:13Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:13 crc 
kubenswrapper[4897]: I0228 13:19:13.632860 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e63af1c-1b83-44b6-9872-2dfefa37d433\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3f3358de452c192d04f520d20cd75c8060ee9a983fe5123f9c975e15d814969\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T13:18:43Z\\\",\\\"message\\\":\\\"ble:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0228 13:18:43.541272 7163 obj_retry.go:551] Creating *factory.egressNode crc took: 2.440002ms\\\\nI0228 13:18:43.541303 7163 factory.go:1336] Added *v1.Node event handler 7\\\\nI0228 13:18:43.541385 7163 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0228 13:18:43.541400 7163 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0228 13:18:43.541431 7163 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0228 13:18:43.541478 7163 handler.go:208] Removed *v1.Node event handler 2\\\\nI0228 13:18:43.541494 7163 factory.go:656] Stopping watch factory\\\\nI0228 13:18:43.541698 7163 handler.go:208] Removed *v1.Node event handler 7\\\\nI0228 13:18:43.541708 7163 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0228 13:18:43.541803 7163 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0228 13:18:43.541841 7163 ovnkube.go:599] Stopped ovnkube\\\\nI0228 13:18:43.541867 7163 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0228 13:18:43.541931 7163 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T13:19:12Z\\\",\\\"message\\\":\\\"j_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-8n99q after 0 failed attempt(s)\\\\nI0228 13:19:12.499177 7468 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-brq22\\\\nI0228 13:19:12.499185 7468 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-8n99q\\\\nI0228 13:19:12.499179 7468 obj_retry.go:285] Attempting retry of 
*v1.Pod openshift-multus/network-metrics-daemon-5tms6 before timer (time: 2026-02-28 13:19:13.983546227 +0000 UTC m=+2.195967516): skip\\\\nI0228 13:19:12.499156 7468 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0228 13:19:12.499165 7468 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0228 13:19:12.499204 7468 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0228 13:19:12.499207 7468 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0228 13:19:12.499213 7468 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0228 13:19:12.498997 7468 obj_retry.go:386] Retry successful for *v1.Pod openshi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\
\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/
run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rjlcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:13Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:13 crc kubenswrapper[4897]: I0228 13:19:13.662949 4897 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23b07e53-7e49-4e08-b346-8cd575b7f2ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e9a3dd1f8ee7245b4185ab8828852983776a3725c0e0202031f019e4d1cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd666d737612906cd221c611576137054485e667826
03973709d756be628e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c564d2a6ac76545e63bc8c1cf09ebbad28f954f1986e6a6a272c23bda9207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16305ee570c789c9b80ebffbea166bdb4fc0aa0ccddb48f8733c196c5bbc83da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0328bcb49b08da8ee30c55848f414ab75bd375c851b0e6e9faecfb47fe9d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d3156
61bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resourc
e-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:13Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:13 crc kubenswrapper[4897]: I0228 13:19:13.682559 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c76969-d16d-46f5-b96a-922ebfb0a5da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347202
43b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa16e396e95ee6006937e8299d6b0f6d518b29e4c2137a868f90b616e8ba6126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:17:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 13:17:35.239122 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 13:17:35.239232 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 13:17:35.239893 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2443145259/tls.crt::/tmp/serving-cert-2443145259/tls.key\\\\\\\"\\\\nI0228 13:17:35.541500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 13:17:35.544872 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 13:17:35.544912 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 13:17:35.544959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 13:17:35.544976 1 maxinflight.go:120] \\\\\\\"Set denominator 
for mutating requests\\\\\\\" limit=200\\\\nI0228 13:17:35.551581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 13:17:35.551618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 13:17:35.551627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 13:17:35.551657 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 13:17:35.551664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 13:17:35.551670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 13:17:35.554143 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:17:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:13Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:13 crc kubenswrapper[4897]: I0228 13:19:13.698893 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dbe604993d7b055554d178d44acee35a2d78e25a60990d0c06e08a983faab0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:13Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:13 crc kubenswrapper[4897]: I0228 13:19:13.713749 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:13Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:13 crc kubenswrapper[4897]: I0228 13:19:13.729198 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1350035568b4df04bb70c40651627190ef9b62281558e327bf1d2090b2fee78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T13:19:13Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:14 crc kubenswrapper[4897]: I0228 13:19:14.374940 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rjlcm_0e63af1c-1b83-44b6-9872-2dfefa37d433/ovnkube-controller/3.log" Feb 28 13:19:14 crc kubenswrapper[4897]: I0228 13:19:14.379666 4897 scope.go:117] "RemoveContainer" containerID="bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa" Feb 28 13:19:14 crc kubenswrapper[4897]: E0228 13:19:14.379877 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rjlcm_openshift-ovn-kubernetes(0e63af1c-1b83-44b6-9872-2dfefa37d433)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" Feb 28 13:19:14 crc kubenswrapper[4897]: I0228 13:19:14.401534 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b8a404d-b143-4bf3-b590-c1b482f38f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dda349d1563eeabb8c512ca1ccf28baaa33949460cb1688a3083a8c111713f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15e5
800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15e5800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zj7fc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:14Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:14 crc kubenswrapper[4897]: I0228 13:19:14.417680 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4091e4-3a55-4913-81f3-026a1a97c57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42ab4a011ee15a60fbda8c13d9c23fb10eaf91119c8d9eb6b4c8dc871fcfffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8311a9c42869f106fa19f0c2268e4146c9732ea75e1282d537114b54d8c20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brq22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:14Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:14 crc kubenswrapper[4897]: 
I0228 13:19:14.431154 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:14Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:14 crc kubenswrapper[4897]: I0228 13:19:14.448370 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a44acae997c6d9cb73f6b82b8ab2643b21f8cd8ccded9e40004e180915d4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d692ba16a2b7b12b74015fd6e678ac54385aa5905da3d4de5bef79f2648b7097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:14Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:14 crc kubenswrapper[4897]: I0228 13:19:14.456029 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:19:14 crc kubenswrapper[4897]: I0228 13:19:14.456081 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:19:14 crc kubenswrapper[4897]: I0228 13:19:14.456049 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:19:14 crc kubenswrapper[4897]: E0228 13:19:14.456150 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:19:14 crc kubenswrapper[4897]: E0228 13:19:14.456286 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:19:14 crc kubenswrapper[4897]: E0228 13:19:14.456344 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:19:14 crc kubenswrapper[4897]: I0228 13:19:14.456404 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:19:14 crc kubenswrapper[4897]: E0228 13:19:14.456620 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:19:14 crc kubenswrapper[4897]: I0228 13:19:14.464942 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5tms6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5tms6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:14Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:14 crc 
kubenswrapper[4897]: I0228 13:19:14.476166 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3cf6453-ea2d-4bd2-b086-fa396dd82b70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8af0215561a1b01ae318b40800de6afba504e149f87ace68c8a481abd66712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61775466ffc1823f865281b6bb4aff1e9769d9ed663eb2cffc6499e5d7a80549\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93eb1d27bca062fcd34fec5ed888121040d1591d47e8e096596aa941980e73b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b9f2058411062a5d44c7b7cc80ca716519919d350632dd6d9522404991279b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49b9f2058411062a5d44c7b7cc80ca716519919d350632dd6d9522404991279b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:14Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:14 crc kubenswrapper[4897]: I0228 13:19:14.487934 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:14Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:14 crc kubenswrapper[4897]: I0228 13:19:14.500680 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dbe604993d7b055554d178d44acee35a2d78e25a60990d0c06e08a983faab0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:14Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:14 crc kubenswrapper[4897]: I0228 13:19:14.510450 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:14Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:14 crc kubenswrapper[4897]: I0228 13:19:14.525087 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1350035568b4df04bb70c40651627190ef9b62281558e327bf1d2090b2fee78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T13:19:14Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:14 crc kubenswrapper[4897]: I0228 13:19:14.554026 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e63af1c-1b83-44b6-9872-2dfefa37d433\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T13:19:12Z\\\",\\\"message\\\":\\\"j_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-8n99q after 0 failed attempt(s)\\\\nI0228 13:19:12.499177 7468 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-brq22\\\\nI0228 13:19:12.499185 7468 default_network_controller.go:776] Recording success event on pod 
openshift-image-registry/node-ca-8n99q\\\\nI0228 13:19:12.499179 7468 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-5tms6 before timer (time: 2026-02-28 13:19:13.983546227 +0000 UTC m=+2.195967516): skip\\\\nI0228 13:19:12.499156 7468 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0228 13:19:12.499165 7468 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0228 13:19:12.499204 7468 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0228 13:19:12.499207 7468 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0228 13:19:12.499213 7468 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0228 13:19:12.498997 7468 obj_retry.go:386] Retry successful for *v1.Pod openshi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:19:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rjlcm_openshift-ovn-kubernetes(0e63af1c-1b83-44b6-9872-2dfefa37d433)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fb
bc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rjlcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:14Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:14 crc kubenswrapper[4897]: I0228 13:19:14.581490 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23b07e53-7e49-4e08-b346-8cd575b7f2ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e9a3dd1f8ee7245b4185ab8828852983776a3725c0e0202031f019e4d1cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd666d737612906cd221c611576137054485e66782603973709d756be628e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c564d2a6ac76545e63bc8c1cf09ebbad28f954f1986e6a6a272c23bda9207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16305ee570c789c9b80ebffbea166bdb4fc0aa0ccddb48f8733c196c5bbc83da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0328bcb49b08da8ee30c55848f414ab75bd375c851b0e6e9faecfb47fe9d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:14Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:14 crc kubenswrapper[4897]: I0228 13:19:14.604264 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c76969-d16d-46f5-b96a-922ebfb0a5da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa16e396e95ee6006937e8299d6b0f6d518b29e4c2137a868f90b616e8ba6126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:17:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 13:17:35.239122 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 13:17:35.239232 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 13:17:35.239893 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2443145259/tls.crt::/tmp/serving-cert-2443145259/tls.key\\\\\\\"\\\\nI0228 13:17:35.541500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 13:17:35.544872 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 13:17:35.544912 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 13:17:35.544959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 13:17:35.544976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 13:17:35.551581 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 13:17:35.551618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 13:17:35.551627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 13:17:35.551657 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 13:17:35.551664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 13:17:35.551670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 13:17:35.554143 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:17:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:14Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:14 crc kubenswrapper[4897]: I0228 13:19:14.620621 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8n99q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844e4a2-e296-46c1-b047-ace0be3d95bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374faf704c18c26835f7cdc27476492ad368e5b02f083544912cade52403d7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6plnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8n99q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:14Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:14 crc kubenswrapper[4897]: I0228 13:19:14.635108 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb42x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ce402ca-1bea-4568-85cd-fb4a726f3c92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc184d2423d3fcb2e76df425e0dc9f0993a521f07df2dba1a89a20cb473b0fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb42x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:14Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:14 crc kubenswrapper[4897]: I0228 13:19:14.649595 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a273d93c-239a-444c-83cf-2c4ce34fa47b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8228c810c9a5e735daa520594b848b640ba628ceff9f9c7e1c1e83e8f0298b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80dbc6ed9af9a82df0ca46be021ef902caa69
663cbb2028484d3af6e06ecab15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bts94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:14Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:14 crc kubenswrapper[4897]: I0228 13:19:14.669811 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc626447-1d60-43c4-8044-186de1ded22f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11284409cbb8172e945c8b65d2338be735ee07187addb55fec10469afe5cefae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c348af3f14fc9e11ac7c0e7a96fcc3bd7f01afec95bd274df460cff777498422\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:16:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0228 13:16:28.426168 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0228 13:16:28.430441 1 observer_polling.go:159] Starting file observer\\\\nI0228 13:16:28.468584 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0228 13:16:28.473941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0228 13:16:52.994573 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0228 13:16:52.994678 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://102c48fdb031f8a25054df3551368150b35145bb18117e8aa50758efa2490b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bdd363306ea660e09290a1c17ac8386e08a240c8255a4ec6de0b7fa1d6ee78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d05965131a6a61a7831069772cea99a7a0d6555aa5c42b3d5ca15d675676f5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:14Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:14 crc kubenswrapper[4897]: I0228 13:19:14.693179 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k4m7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd164967-b99b-47d0-a691-7d8118fa81ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56214f971268dd96636e53f8cc401bfde201673331066f07d1235c5bd4fef3e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02d6b2023edab21ebd2873bc7584d2346d725a97a9e21d54cb9cc86e63bec717\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T13:19:04Z\\\",\\\"message\\\":\\\"2026-02-28T13:18:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d5a1cb8e-73e2-4756-8609-143ac434e5b8\\\\n2026-02-28T13:18:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d5a1cb8e-73e2-4756-8609-143ac434e5b8 to /host/opt/cni/bin/\\\\n2026-02-28T13:18:19Z [verbose] multus-daemon started\\\\n2026-02-28T13:18:19Z [verbose] Readiness Indicator file check\\\\n2026-02-28T13:19:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:19:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k4m7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:14Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:16 crc kubenswrapper[4897]: I0228 13:19:16.456029 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:19:16 crc kubenswrapper[4897]: I0228 13:19:16.456069 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:19:16 crc kubenswrapper[4897]: I0228 13:19:16.456079 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:19:16 crc kubenswrapper[4897]: E0228 13:19:16.456399 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:19:16 crc kubenswrapper[4897]: I0228 13:19:16.456436 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:19:16 crc kubenswrapper[4897]: E0228 13:19:16.456632 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:19:16 crc kubenswrapper[4897]: E0228 13:19:16.456884 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:19:16 crc kubenswrapper[4897]: E0228 13:19:16.456952 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:19:16 crc kubenswrapper[4897]: I0228 13:19:16.476232 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:16Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:16 crc kubenswrapper[4897]: I0228 13:19:16.499860 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b8a404d-b143-4bf3-b590-c1b482f38f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dda349d1563eeabb8c512ca1ccf28baaa33949460cb1688a3083a8c111713f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15e5
800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15e5800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zj7fc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:16Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:16 crc kubenswrapper[4897]: I0228 13:19:16.519646 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4091e4-3a55-4913-81f3-026a1a97c57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42ab4a011ee15a60fbda8c13d9c23fb10eaf91119c8d9eb6b4c8dc871fcfffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8311a9c42869f106fa19f0c2268e4146c9732ea75e1282d537114b54d8c20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brq22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:16Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:16 crc kubenswrapper[4897]: 
I0228 13:19:16.539703 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3cf6453-ea2d-4bd2-b086-fa396dd82b70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8af0215561a1b01ae318b40800de6afba504e149f87ace68c8a481abd66712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61775466ffc1823f865281b6bb4aff1e9769d9ed663eb2cffc6499e5d7a80549\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93eb1d27bca062fcd34fec5ed888121040d1591d47e8e096596aa941980e73b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b9f2058411062a5d44c7b7cc80ca716519919d350632dd6d9522404991279b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49b9f2058411062a5d44c7b7cc80ca716519919d350632dd6d9522404991279b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:16Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:16 crc kubenswrapper[4897]: I0228 13:19:16.559452 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:16Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:16 crc kubenswrapper[4897]: I0228 13:19:16.578049 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a44acae997c6d9cb73f6b82b8ab2643b21f8cd8ccded9e40004e180915d4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d692ba16a2b7b12b74015fd6e678ac54385aa5905da3d4de5bef79f2648b7097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:16Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:16 crc kubenswrapper[4897]: I0228 13:19:16.593559 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5tms6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5tms6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:16Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:16 crc 
kubenswrapper[4897]: I0228 13:19:16.624791 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23b07e53-7e49-4e08-b346-8cd575b7f2ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e9a3dd1f8ee7245b4185ab8828852983776a3725c0e0202031f019e4d1cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://dd666d737612906cd221c611576137054485e66782603973709d756be628e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c564d2a6ac76545e63bc8c1cf09ebbad28f954f1986e6a6a272c23bda9207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16305ee570c789c9b80ebffbea166bdb4fc0aa0ccddb48f8733c196c5bbc83da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0328bcb49b08da8ee30c55848f414ab75bd375c851b0e6e9faecfb47fe9d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:16Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:16 crc kubenswrapper[4897]: I0228 13:19:16.646303 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c76969-d16d-46f5-b96a-922ebfb0a5da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cer
t-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa16e396e95ee6006937e8299d6b0f6d518b29e4c2137a868f90b616e8ba6126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:17:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 13:17:35.239122 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 13:17:35.239232 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 13:17:35.239893 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2443145259/tls.crt::/tmp/serving-cert-2443145259/tls.key\\\\\\\"\\\\nI0228 13:17:35.541500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 13:17:35.544872 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 13:17:35.544912 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 13:17:35.544959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI0228 13:17:35.544976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 13:17:35.551581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 13:17:35.551618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 13:17:35.551627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 13:17:35.551657 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 13:17:35.551664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 13:17:35.551670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 13:17:35.554143 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:17:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:16Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:16 crc kubenswrapper[4897]: I0228 13:19:16.666292 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dbe604993d7b055554d178d44acee35a2d78e25a60990d0c06e08a983faab0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:16Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:16 crc kubenswrapper[4897]: I0228 13:19:16.682196 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:16Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:16 crc kubenswrapper[4897]: I0228 13:19:16.699635 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1350035568b4df04bb70c40651627190ef9b62281558e327bf1d2090b2fee78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T13:19:16Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:16 crc kubenswrapper[4897]: I0228 13:19:16.732493 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e63af1c-1b83-44b6-9872-2dfefa37d433\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T13:19:12Z\\\",\\\"message\\\":\\\"j_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-8n99q after 0 failed attempt(s)\\\\nI0228 13:19:12.499177 7468 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-brq22\\\\nI0228 13:19:12.499185 7468 default_network_controller.go:776] Recording success event on pod 
openshift-image-registry/node-ca-8n99q\\\\nI0228 13:19:12.499179 7468 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-5tms6 before timer (time: 2026-02-28 13:19:13.983546227 +0000 UTC m=+2.195967516): skip\\\\nI0228 13:19:12.499156 7468 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0228 13:19:12.499165 7468 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0228 13:19:12.499204 7468 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0228 13:19:12.499207 7468 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0228 13:19:12.499213 7468 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0228 13:19:12.498997 7468 obj_retry.go:386] Retry successful for *v1.Pod openshi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:19:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rjlcm_openshift-ovn-kubernetes(0e63af1c-1b83-44b6-9872-2dfefa37d433)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fb
bc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rjlcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:16Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:16 crc kubenswrapper[4897]: I0228 13:19:16.750036 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc626447-1d60-43c4-8044-186de1ded22f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11284409cbb8172e945c8b65d2338be735ee07187addb55fec10469afe5cefae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c348af3f14fc9e11ac7c0e7a96fcc3bd7f01afec95bd274df460cff777498422\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:16:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0228 13:16:28.426168 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0228 13:16:28.430441 1 observer_polling.go:159] Starting file observer\\\\nI0228 13:16:28.468584 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0228 13:16:28.473941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0228 13:16:52.994573 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0228 13:16:52.994678 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://102c48fdb031f8a25054df3551368150b35145bb18117e8aa50758efa2490b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bdd363306ea660e09290a1c17ac8386e08a240c8255a4ec6de0b7fa1d6ee78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d05965131a6a61a7831069772cea99a7a0d6555aa5c42b3d5ca15d675676f5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:16Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:16 crc kubenswrapper[4897]: I0228 13:19:16.773466 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k4m7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd164967-b99b-47d0-a691-7d8118fa81ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56214f971268dd96636e53f8cc401bfde201673331066f07d1235c5bd4fef3e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02d6b2023edab21ebd2873bc7584d2346d725a97a9e21d54cb9cc86e63bec717\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T13:19:04Z\\\",\\\"message\\\":\\\"2026-02-28T13:18:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d5a1cb8e-73e2-4756-8609-143ac434e5b8\\\\n2026-02-28T13:18:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d5a1cb8e-73e2-4756-8609-143ac434e5b8 to /host/opt/cni/bin/\\\\n2026-02-28T13:18:19Z [verbose] multus-daemon started\\\\n2026-02-28T13:18:19Z [verbose] Readiness Indicator file check\\\\n2026-02-28T13:19:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:19:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k4m7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:16Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:16 crc kubenswrapper[4897]: I0228 13:19:16.785563 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8n99q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844e4a2-e296-46c1-b047-ace0be3d95bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374faf704c18c26835f7cdc27476492ad368e5b02f083544912cade52403d7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6plnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8n99q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:16Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:16 crc kubenswrapper[4897]: I0228 13:19:16.798839 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb42x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ce402ca-1bea-4568-85cd-fb4a726f3c92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc184d2423d3fcb2e76df425e0dc9f0993a521f07df2dba1a89a20cb473b0fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb42x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:16Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:16 crc kubenswrapper[4897]: I0228 13:19:16.811798 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a273d93c-239a-444c-83cf-2c4ce34fa47b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8228c810c9a5e735daa520594b848b640ba628ceff9f9c7e1c1e83e8f0298b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80dbc6ed9af9a82df0ca46be021ef902caa69663cbb2028484d3af6e06ecab15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bts94\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:16Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:16 crc kubenswrapper[4897]: E0228 13:19:16.890825 4897 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 28 13:19:18 crc kubenswrapper[4897]: I0228 13:19:18.455713 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:19:18 crc kubenswrapper[4897]: I0228 13:19:18.455757 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:19:18 crc kubenswrapper[4897]: I0228 13:19:18.455771 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:19:18 crc kubenswrapper[4897]: E0228 13:19:18.455859 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:19:18 crc kubenswrapper[4897]: I0228 13:19:18.455918 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:19:18 crc kubenswrapper[4897]: E0228 13:19:18.456021 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:19:18 crc kubenswrapper[4897]: E0228 13:19:18.456065 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:19:18 crc kubenswrapper[4897]: E0228 13:19:18.456153 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:19:19 crc kubenswrapper[4897]: I0228 13:19:19.760126 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:19:19 crc kubenswrapper[4897]: I0228 13:19:19.760172 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:19:19 crc kubenswrapper[4897]: I0228 13:19:19.760181 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:19:19 crc kubenswrapper[4897]: I0228 13:19:19.760199 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:19:19 crc kubenswrapper[4897]: I0228 13:19:19.760211 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:19:19Z","lastTransitionTime":"2026-02-28T13:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:19:19 crc kubenswrapper[4897]: E0228 13:19:19.774489 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:19Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:19 crc kubenswrapper[4897]: I0228 13:19:19.779704 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:19:19 crc kubenswrapper[4897]: I0228 13:19:19.779751 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:19:19 crc kubenswrapper[4897]: I0228 13:19:19.779764 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:19:19 crc kubenswrapper[4897]: I0228 13:19:19.779789 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:19:19 crc kubenswrapper[4897]: I0228 13:19:19.779808 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:19:19Z","lastTransitionTime":"2026-02-28T13:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:19:19 crc kubenswrapper[4897]: E0228 13:19:19.795536 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:19Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:19 crc kubenswrapper[4897]: I0228 13:19:19.799064 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:19:19 crc kubenswrapper[4897]: I0228 13:19:19.799101 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:19:19 crc kubenswrapper[4897]: I0228 13:19:19.799111 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:19:19 crc kubenswrapper[4897]: I0228 13:19:19.799129 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:19:19 crc kubenswrapper[4897]: I0228 13:19:19.799140 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:19:19Z","lastTransitionTime":"2026-02-28T13:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:19:19 crc kubenswrapper[4897]: E0228 13:19:19.813486 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:19Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:19 crc kubenswrapper[4897]: I0228 13:19:19.818434 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:19:19 crc kubenswrapper[4897]: I0228 13:19:19.818489 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:19:19 crc kubenswrapper[4897]: I0228 13:19:19.818503 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:19:19 crc kubenswrapper[4897]: I0228 13:19:19.818527 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:19:19 crc kubenswrapper[4897]: I0228 13:19:19.818543 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:19:19Z","lastTransitionTime":"2026-02-28T13:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:19:19 crc kubenswrapper[4897]: E0228 13:19:19.836362 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:19Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:19 crc kubenswrapper[4897]: I0228 13:19:19.842261 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:19:19 crc kubenswrapper[4897]: I0228 13:19:19.842353 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:19:19 crc kubenswrapper[4897]: I0228 13:19:19.842382 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:19:19 crc kubenswrapper[4897]: I0228 13:19:19.842414 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:19:19 crc kubenswrapper[4897]: I0228 13:19:19.842435 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:19:19Z","lastTransitionTime":"2026-02-28T13:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:19:19 crc kubenswrapper[4897]: E0228 13:19:19.863053 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:19Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:19 crc kubenswrapper[4897]: E0228 13:19:19.863198 4897 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 28 13:19:20 crc kubenswrapper[4897]: I0228 13:19:20.455847 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:19:20 crc kubenswrapper[4897]: I0228 13:19:20.455903 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:19:20 crc kubenswrapper[4897]: I0228 13:19:20.455847 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:19:20 crc kubenswrapper[4897]: I0228 13:19:20.456036 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:19:20 crc kubenswrapper[4897]: E0228 13:19:20.456093 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:19:20 crc kubenswrapper[4897]: E0228 13:19:20.456145 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:19:20 crc kubenswrapper[4897]: E0228 13:19:20.456751 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:19:20 crc kubenswrapper[4897]: E0228 13:19:20.456854 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:19:21 crc kubenswrapper[4897]: E0228 13:19:21.892703 4897 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 28 13:19:22 crc kubenswrapper[4897]: I0228 13:19:22.455980 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:19:22 crc kubenswrapper[4897]: E0228 13:19:22.456135 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:19:22 crc kubenswrapper[4897]: I0228 13:19:22.456378 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:19:22 crc kubenswrapper[4897]: E0228 13:19:22.456439 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:19:22 crc kubenswrapper[4897]: I0228 13:19:22.456611 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:19:22 crc kubenswrapper[4897]: I0228 13:19:22.456666 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:19:22 crc kubenswrapper[4897]: E0228 13:19:22.456823 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:19:22 crc kubenswrapper[4897]: E0228 13:19:22.456960 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:19:23 crc kubenswrapper[4897]: I0228 13:19:23.477997 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 28 13:19:24 crc kubenswrapper[4897]: I0228 13:19:24.455676 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:19:24 crc kubenswrapper[4897]: E0228 13:19:24.455838 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:19:24 crc kubenswrapper[4897]: I0228 13:19:24.455944 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:19:24 crc kubenswrapper[4897]: I0228 13:19:24.455978 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:19:24 crc kubenswrapper[4897]: I0228 13:19:24.456016 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:19:24 crc kubenswrapper[4897]: E0228 13:19:24.456131 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:19:24 crc kubenswrapper[4897]: E0228 13:19:24.456261 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:19:24 crc kubenswrapper[4897]: E0228 13:19:24.456361 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:19:26 crc kubenswrapper[4897]: I0228 13:19:26.455336 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:19:26 crc kubenswrapper[4897]: I0228 13:19:26.455387 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:19:26 crc kubenswrapper[4897]: I0228 13:19:26.455459 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:19:26 crc kubenswrapper[4897]: E0228 13:19:26.455462 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:19:26 crc kubenswrapper[4897]: E0228 13:19:26.455558 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:19:26 crc kubenswrapper[4897]: I0228 13:19:26.455606 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:19:26 crc kubenswrapper[4897]: E0228 13:19:26.455665 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:19:26 crc kubenswrapper[4897]: E0228 13:19:26.455721 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:19:26 crc kubenswrapper[4897]: I0228 13:19:26.481519 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23b07e53-7e49-4e08-b346-8cd575b7f2ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e9a3dd1f8ee7245b4185ab8828852983776a3725c0e0202031f019e4d1cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49
117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd666d737612906cd221c611576137054485e66782603973709d756be628e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c564d2a6ac76545e63bc8c1cf09ebbad28f954f1986e6a6a272c23bda9207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16305ee570c789c9b80ebffbea166bdb4fc0aa0ccddb48f8733c196c5bbc83da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0328bcb49b08da8ee30c55848f414ab75bd375c851b0e6e9faecfb47fe9d97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\
\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0dbf86331be25d315661bdd0184077ca9230cb4aad20056a90f662b1c7a4fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f33515dccceec4bef9a4fc951739f22443ef85eea35023a8c5339a40197800d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47d485d99749080374cb06bdcd03d7fe30753c85a013be12ed13e5e1b46c335d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:26 crc kubenswrapper[4897]: I0228 13:19:26.496545 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38c76969-d16d-46f5-b96a-922ebfb0a5da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa16e396e95ee6006937e8299d6b0f6d518b29e4c2137a868f90b616e8ba6126\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:17:35Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0228 13:17:35.239122 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 13:17:35.239232 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 13:17:35.239893 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2443145259/tls.crt::/tmp/serving-cert-2443145259/tls.key\\\\\\\"\\\\nI0228 13:17:35.541500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 13:17:35.544872 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 13:17:35.544912 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 13:17:35.544959 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 13:17:35.544976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 13:17:35.551581 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 13:17:35.551618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 13:17:35.551627 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551638 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 13:17:35.551648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 13:17:35.551657 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 13:17:35.551664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 13:17:35.551670 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 13:17:35.554143 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:17:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e0788661419a0771061174ff4aa835800f
89f7bc3d24141dad4ceeaca0544cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:26 crc kubenswrapper[4897]: I0228 13:19:26.512804 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dbe604993d7b055554d178d44acee35a2d78e25a60990d0c06e08a983faab0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:26 crc kubenswrapper[4897]: I0228 13:19:26.524657 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:26 crc kubenswrapper[4897]: I0228 13:19:26.540027 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1350035568b4df04bb70c40651627190ef9b62281558e327bf1d2090b2fee78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T13:19:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:26 crc kubenswrapper[4897]: I0228 13:19:26.570578 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e63af1c-1b83-44b6-9872-2dfefa37d433\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T13:19:12Z\\\",\\\"message\\\":\\\"j_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-8n99q after 0 failed attempt(s)\\\\nI0228 13:19:12.499177 7468 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-brq22\\\\nI0228 13:19:12.499185 7468 default_network_controller.go:776] Recording success event on pod 
openshift-image-registry/node-ca-8n99q\\\\nI0228 13:19:12.499179 7468 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-5tms6 before timer (time: 2026-02-28 13:19:13.983546227 +0000 UTC m=+2.195967516): skip\\\\nI0228 13:19:12.499156 7468 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0228 13:19:12.499165 7468 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0228 13:19:12.499204 7468 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0228 13:19:12.499207 7468 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0228 13:19:12.499213 7468 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0228 13:19:12.498997 7468 obj_retry.go:386] Retry successful for *v1.Pod openshi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:19:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rjlcm_openshift-ovn-kubernetes(0e63af1c-1b83-44b6-9872-2dfefa37d433)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e68e85e2bcb3664fb
bc29448631f64ec873ad39c36ca770750c3123e7f15500\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rjlcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:26 crc kubenswrapper[4897]: I0228 13:19:26.590087 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc626447-1d60-43c4-8044-186de1ded22f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11284409cbb8172e945c8b65d2338be735ee07187addb55fec10469afe5cefae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c348af3f14fc9e11ac7c0e7a96fcc3bd7f01afec95bd274df460cff777498422\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T13:16:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0228 13:16:28.426168 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0228 13:16:28.430441 1 observer_polling.go:159] Starting file observer\\\\nI0228 13:16:28.468584 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0228 13:16:28.473941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0228 13:16:52.994573 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0228 13:16:52.994678 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://102c48fdb031f8a25054df3551368150b35145bb18117e8aa50758efa2490b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3bdd363306ea660e09290a1c17ac8386e08a240c8255a4ec6de0b7fa1d6ee78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d05965131a6a61a7831069772cea99a7a0d6555aa5c42b3d5ca15d675676f5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:26 crc kubenswrapper[4897]: I0228 13:19:26.608804 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k4m7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd164967-b99b-47d0-a691-7d8118fa81ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56214f971268dd96636e53f8cc401bfde201673331066f07d1235c5bd4fef3e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02d6b2023edab21ebd2873bc7584d2346d725a97a9e21d54cb9cc86e63bec717\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T13:19:04Z\\\",\\\"message\\\":\\\"2026-02-28T13:18:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d5a1cb8e-73e2-4756-8609-143ac434e5b8\\\\n2026-02-28T13:18:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d5a1cb8e-73e2-4756-8609-143ac434e5b8 to /host/opt/cni/bin/\\\\n2026-02-28T13:18:19Z [verbose] multus-daemon started\\\\n2026-02-28T13:18:19Z [verbose] Readiness Indicator file check\\\\n2026-02-28T13:19:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:19:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k4m7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:26 crc kubenswrapper[4897]: I0228 13:19:26.618964 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8n99q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844e4a2-e296-46c1-b047-ace0be3d95bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://374faf704c18c26835f7cdc27476492ad368e5b02f083544912cade52403d7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6plnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8n99q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:26 crc kubenswrapper[4897]: I0228 13:19:26.630427 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kb42x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ce402ca-1bea-4568-85cd-fb4a726f3c92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc184d2423d3fcb2e76df425e0dc9f0993a521f07df2dba1a89a20cb473b0fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs4zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kb42x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:26 crc kubenswrapper[4897]: I0228 13:19:26.644267 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a273d93c-239a-444c-83cf-2c4ce34fa47b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8228c810c9a5e735daa520594b848b640ba628ceff9f9c7e1c1e83e8f0298b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80dbc6ed9af9a82df0ca46be021ef902caa69663cbb2028484d3af6e06ecab15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljj4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bts94\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:26 crc kubenswrapper[4897]: I0228 13:19:26.658362 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17ccb507-8b4e-4294-a79e-465d3e17ea1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed409410f11bb36ef18ebe7d8ac2d239b6821eaa1dfed94692ed27e06b4ece50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d295150d16c402f14c3c67c120c28a2af6c908f52f6bbd462c41105d2a85d9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d295150d16c402f14c3c67c120c28a2af6c908f52f6bbd462c41105d2a85d9a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:26 crc kubenswrapper[4897]: I0228 13:19:26.673339 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:26 crc kubenswrapper[4897]: I0228 13:19:26.688518 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b8a404d-b143-4bf3-b590-c1b482f38f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dda349d1563eeabb8c512ca1ccf28baaa33949460cb1688a3083a8c111713f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512f63d50a798a7191df148d6fc7669f4f786f8ae7412ea6278ebeb2ffdb3b32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498ef4176145fa6a1e24d20f002bb47ea216445ba23a4efe85736d7156cccef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://223b16972d147317b72ae36608fabb55e7cf05003e24debe89da1bc7df285263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15e5
800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15e5800380d8bbba8afb0269055095c53ccc0807a46252d259f70d8daea406f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56fcd41910feed63b2e64714754be4f28deffeefac206f835f1f0bf83d7dc96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://179c925ac889402a9bf58b44dd73be9e6091872f4c2496e860f4f98c8599c52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:18:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:18:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zj7fc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:26 crc kubenswrapper[4897]: I0228 13:19:26.699674 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4091e4-3a55-4913-81f3-026a1a97c57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42ab4a011ee15a60fbda8c13d9c23fb10eaf91119c8d9eb6b4c8dc871fcfffd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8311a9c42869f106fa19f0c2268e4146c9732ea75e1282d537114b54d8c20d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh6dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brq22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:26 crc kubenswrapper[4897]: 
I0228 13:19:26.714684 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3cf6453-ea2d-4bd2-b086-fa396dd82b70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:17:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:16:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8af0215561a1b01ae318b40800de6afba504e149f87ace68c8a481abd66712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61775466ffc1823f865281b6bb4aff1e9769d9ed663eb2cffc6499e5d7a80549\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93eb1d27bca062fcd34fec5ed888121040d1591d47e8e096596aa941980e73b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b9f2058411062a5d44c7b7cc80ca716519919d350632dd6d9522404991279b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49b9f2058411062a5d44c7b7cc80ca716519919d350632dd6d9522404991279b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T13:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T13:16:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:16:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:26 crc kubenswrapper[4897]: I0228 13:19:26.730732 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:26 crc kubenswrapper[4897]: I0228 13:19:26.747995 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a44acae997c6d9cb73f6b82b8ab2643b21f8cd8ccded9e40004e180915d4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d692ba16a2b7b12b74015fd6e678ac54385aa5905da3d4de5bef79f2648b7097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T13:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:26 crc kubenswrapper[4897]: I0228 13:19:26.764004 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5tms6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T13:18:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gz5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T13:18:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5tms6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:26Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:26 crc 
kubenswrapper[4897]: E0228 13:19:26.894551 4897 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 28 13:19:27 crc kubenswrapper[4897]: I0228 13:19:27.456995 4897 scope.go:117] "RemoveContainer" containerID="bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa" Feb 28 13:19:27 crc kubenswrapper[4897]: E0228 13:19:27.457272 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rjlcm_openshift-ovn-kubernetes(0e63af1c-1b83-44b6-9872-2dfefa37d433)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" Feb 28 13:19:28 crc kubenswrapper[4897]: I0228 13:19:28.456149 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:19:28 crc kubenswrapper[4897]: I0228 13:19:28.456149 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:19:28 crc kubenswrapper[4897]: I0228 13:19:28.456254 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:19:28 crc kubenswrapper[4897]: I0228 13:19:28.456286 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:19:28 crc kubenswrapper[4897]: E0228 13:19:28.456402 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:19:28 crc kubenswrapper[4897]: E0228 13:19:28.456521 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:19:28 crc kubenswrapper[4897]: E0228 13:19:28.456718 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:19:28 crc kubenswrapper[4897]: E0228 13:19:28.456791 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:19:30 crc kubenswrapper[4897]: I0228 13:19:30.251230 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:19:30 crc kubenswrapper[4897]: I0228 13:19:30.251345 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:19:30 crc kubenswrapper[4897]: I0228 13:19:30.251374 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:19:30 crc kubenswrapper[4897]: I0228 13:19:30.251406 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:19:30 crc kubenswrapper[4897]: I0228 13:19:30.251431 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:19:30Z","lastTransitionTime":"2026-02-28T13:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:19:30 crc kubenswrapper[4897]: E0228 13:19:30.271652 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:30Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:30 crc kubenswrapper[4897]: I0228 13:19:30.276571 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:19:30 crc kubenswrapper[4897]: I0228 13:19:30.276638 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:19:30 crc kubenswrapper[4897]: I0228 13:19:30.276657 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:19:30 crc kubenswrapper[4897]: I0228 13:19:30.276682 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:19:30 crc kubenswrapper[4897]: I0228 13:19:30.276699 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:19:30Z","lastTransitionTime":"2026-02-28T13:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:19:30 crc kubenswrapper[4897]: E0228 13:19:30.296792 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:30Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:30 crc kubenswrapper[4897]: I0228 13:19:30.308738 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:19:30 crc kubenswrapper[4897]: I0228 13:19:30.308808 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:19:30 crc kubenswrapper[4897]: I0228 13:19:30.308827 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:19:30 crc kubenswrapper[4897]: I0228 13:19:30.308854 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:19:30 crc kubenswrapper[4897]: I0228 13:19:30.308871 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:19:30Z","lastTransitionTime":"2026-02-28T13:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:19:30 crc kubenswrapper[4897]: E0228 13:19:30.331016 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:30Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:30 crc kubenswrapper[4897]: I0228 13:19:30.335784 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:19:30 crc kubenswrapper[4897]: I0228 13:19:30.335839 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:19:30 crc kubenswrapper[4897]: I0228 13:19:30.335857 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:19:30 crc kubenswrapper[4897]: I0228 13:19:30.335880 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:19:30 crc kubenswrapper[4897]: I0228 13:19:30.335896 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:19:30Z","lastTransitionTime":"2026-02-28T13:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:19:30 crc kubenswrapper[4897]: E0228 13:19:30.356182 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:30Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:30 crc kubenswrapper[4897]: I0228 13:19:30.360481 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:19:30 crc kubenswrapper[4897]: I0228 13:19:30.360530 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:19:30 crc kubenswrapper[4897]: I0228 13:19:30.360549 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:19:30 crc kubenswrapper[4897]: I0228 13:19:30.360570 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:19:30 crc kubenswrapper[4897]: I0228 13:19:30.360587 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:19:30Z","lastTransitionTime":"2026-02-28T13:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 13:19:30 crc kubenswrapper[4897]: E0228 13:19:30.380939 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T13:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T13:19:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d2fd8fce-c625-452e-ac59-c8b16ad2bd1e\\\",\\\"systemUUID\\\":\\\"9a2b8aa6-89dd-4912-990f-d37ff5df66a2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T13:19:30Z is after 2025-08-24T17:21:41Z" Feb 28 13:19:30 crc kubenswrapper[4897]: E0228 13:19:30.381148 4897 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 28 13:19:30 crc kubenswrapper[4897]: I0228 13:19:30.455813 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:19:30 crc kubenswrapper[4897]: I0228 13:19:30.455932 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:19:30 crc kubenswrapper[4897]: I0228 13:19:30.456228 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:19:30 crc kubenswrapper[4897]: E0228 13:19:30.456219 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:19:30 crc kubenswrapper[4897]: I0228 13:19:30.456258 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:19:30 crc kubenswrapper[4897]: E0228 13:19:30.456553 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:19:30 crc kubenswrapper[4897]: E0228 13:19:30.456697 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:19:30 crc kubenswrapper[4897]: E0228 13:19:30.456969 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:19:31 crc kubenswrapper[4897]: E0228 13:19:31.896339 4897 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 28 13:19:32 crc kubenswrapper[4897]: I0228 13:19:32.455466 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:19:32 crc kubenswrapper[4897]: I0228 13:19:32.455519 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:19:32 crc kubenswrapper[4897]: I0228 13:19:32.455585 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:19:32 crc kubenswrapper[4897]: E0228 13:19:32.455590 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:19:32 crc kubenswrapper[4897]: I0228 13:19:32.455482 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:19:32 crc kubenswrapper[4897]: E0228 13:19:32.455650 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:19:32 crc kubenswrapper[4897]: E0228 13:19:32.455707 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:19:32 crc kubenswrapper[4897]: E0228 13:19:32.455840 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:19:34 crc kubenswrapper[4897]: I0228 13:19:34.455991 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:19:34 crc kubenswrapper[4897]: I0228 13:19:34.456026 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:19:34 crc kubenswrapper[4897]: I0228 13:19:34.456052 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:19:34 crc kubenswrapper[4897]: E0228 13:19:34.456179 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:19:34 crc kubenswrapper[4897]: I0228 13:19:34.456221 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:19:34 crc kubenswrapper[4897]: E0228 13:19:34.456300 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:19:34 crc kubenswrapper[4897]: E0228 13:19:34.456519 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:19:34 crc kubenswrapper[4897]: E0228 13:19:34.456707 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:19:36 crc kubenswrapper[4897]: I0228 13:19:36.455473 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:19:36 crc kubenswrapper[4897]: I0228 13:19:36.455539 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:19:36 crc kubenswrapper[4897]: I0228 13:19:36.455681 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:19:36 crc kubenswrapper[4897]: E0228 13:19:36.455879 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:19:36 crc kubenswrapper[4897]: E0228 13:19:36.456201 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:19:36 crc kubenswrapper[4897]: E0228 13:19:36.456350 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:19:36 crc kubenswrapper[4897]: I0228 13:19:36.456514 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:19:36 crc kubenswrapper[4897]: E0228 13:19:36.456675 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:19:36 crc kubenswrapper[4897]: I0228 13:19:36.508734 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=13.508697717 podStartE2EDuration="13.508697717s" podCreationTimestamp="2026-02-28 13:19:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:19:36.483807478 +0000 UTC m=+190.726128165" watchObservedRunningTime="2026-02-28 13:19:36.508697717 +0000 UTC m=+190.751018424" Feb 28 13:19:36 crc kubenswrapper[4897]: I0228 13:19:36.558464 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podStartSLOduration=134.558435893 podStartE2EDuration="2m14.558435893s" podCreationTimestamp="2026-02-28 13:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:19:36.557729462 +0000 UTC m=+190.800050159" watchObservedRunningTime="2026-02-28 13:19:36.558435893 +0000 UTC m=+190.800756580" Feb 28 13:19:36 crc kubenswrapper[4897]: I0228 13:19:36.558727 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-zj7fc" podStartSLOduration=134.558716242 podStartE2EDuration="2m14.558716242s" 
podCreationTimestamp="2026-02-28 13:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:19:36.538464321 +0000 UTC m=+190.780785018" watchObservedRunningTime="2026-02-28 13:19:36.558716242 +0000 UTC m=+190.801036939" Feb 28 13:19:36 crc kubenswrapper[4897]: I0228 13:19:36.582158 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=65.582130617 podStartE2EDuration="1m5.582130617s" podCreationTimestamp="2026-02-28 13:18:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:19:36.58054778 +0000 UTC m=+190.822868497" watchObservedRunningTime="2026-02-28 13:19:36.582130617 +0000 UTC m=+190.824451314" Feb 28 13:19:36 crc kubenswrapper[4897]: I0228 13:19:36.761887 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=90.761866662 podStartE2EDuration="1m30.761866662s" podCreationTimestamp="2026-02-28 13:18:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:19:36.760408818 +0000 UTC m=+191.002729485" watchObservedRunningTime="2026-02-28 13:19:36.761866662 +0000 UTC m=+191.004187319" Feb 28 13:19:36 crc kubenswrapper[4897]: I0228 13:19:36.793510 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=88.79349184 podStartE2EDuration="1m28.79349184s" podCreationTimestamp="2026-02-28 13:18:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:19:36.780495364 +0000 UTC m=+191.022816041" watchObservedRunningTime="2026-02-28 
13:19:36.79349184 +0000 UTC m=+191.035812497" Feb 28 13:19:36 crc kubenswrapper[4897]: I0228 13:19:36.829646 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bts94" podStartSLOduration=133.829632373 podStartE2EDuration="2m13.829632373s" podCreationTimestamp="2026-02-28 13:17:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:19:36.829012755 +0000 UTC m=+191.071333412" watchObservedRunningTime="2026-02-28 13:19:36.829632373 +0000 UTC m=+191.071953030" Feb 28 13:19:36 crc kubenswrapper[4897]: I0228 13:19:36.843742 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=63.843732201 podStartE2EDuration="1m3.843732201s" podCreationTimestamp="2026-02-28 13:18:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:19:36.842918567 +0000 UTC m=+191.085239224" watchObservedRunningTime="2026-02-28 13:19:36.843732201 +0000 UTC m=+191.086052858" Feb 28 13:19:36 crc kubenswrapper[4897]: I0228 13:19:36.871958 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-k4m7f" podStartSLOduration=134.871944159 podStartE2EDuration="2m14.871944159s" podCreationTimestamp="2026-02-28 13:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:19:36.858584532 +0000 UTC m=+191.100905199" watchObservedRunningTime="2026-02-28 13:19:36.871944159 +0000 UTC m=+191.114264826" Feb 28 13:19:36 crc kubenswrapper[4897]: I0228 13:19:36.885539 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-8n99q" 
podStartSLOduration=134.885518942 podStartE2EDuration="2m14.885518942s" podCreationTimestamp="2026-02-28 13:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:19:36.871922978 +0000 UTC m=+191.114243645" watchObservedRunningTime="2026-02-28 13:19:36.885518942 +0000 UTC m=+191.127839609" Feb 28 13:19:36 crc kubenswrapper[4897]: I0228 13:19:36.885929 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-kb42x" podStartSLOduration=134.885923284 podStartE2EDuration="2m14.885923284s" podCreationTimestamp="2026-02-28 13:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:19:36.884726418 +0000 UTC m=+191.127047095" watchObservedRunningTime="2026-02-28 13:19:36.885923284 +0000 UTC m=+191.128243951" Feb 28 13:19:36 crc kubenswrapper[4897]: E0228 13:19:36.897278 4897 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 28 13:19:38 crc kubenswrapper[4897]: I0228 13:19:38.456171 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:19:38 crc kubenswrapper[4897]: I0228 13:19:38.456270 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:19:38 crc kubenswrapper[4897]: I0228 13:19:38.456291 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:19:38 crc kubenswrapper[4897]: I0228 13:19:38.456194 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:19:38 crc kubenswrapper[4897]: E0228 13:19:38.456431 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:19:38 crc kubenswrapper[4897]: E0228 13:19:38.456600 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:19:38 crc kubenswrapper[4897]: E0228 13:19:38.456768 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:19:38 crc kubenswrapper[4897]: E0228 13:19:38.456981 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:19:39 crc kubenswrapper[4897]: I0228 13:19:39.457192 4897 scope.go:117] "RemoveContainer" containerID="bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa" Feb 28 13:19:39 crc kubenswrapper[4897]: E0228 13:19:39.457855 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rjlcm_openshift-ovn-kubernetes(0e63af1c-1b83-44b6-9872-2dfefa37d433)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" Feb 28 13:19:40 crc kubenswrapper[4897]: I0228 13:19:40.456161 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:19:40 crc kubenswrapper[4897]: I0228 13:19:40.456185 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:19:40 crc kubenswrapper[4897]: I0228 13:19:40.456258 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:19:40 crc kubenswrapper[4897]: E0228 13:19:40.456511 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:19:40 crc kubenswrapper[4897]: I0228 13:19:40.456562 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:19:40 crc kubenswrapper[4897]: E0228 13:19:40.456767 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:19:40 crc kubenswrapper[4897]: E0228 13:19:40.456927 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:19:40 crc kubenswrapper[4897]: E0228 13:19:40.457081 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:19:40 crc kubenswrapper[4897]: I0228 13:19:40.660375 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 13:19:40 crc kubenswrapper[4897]: I0228 13:19:40.660442 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 13:19:40 crc kubenswrapper[4897]: I0228 13:19:40.660460 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 13:19:40 crc kubenswrapper[4897]: I0228 13:19:40.660484 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 13:19:40 crc kubenswrapper[4897]: I0228 13:19:40.660501 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T13:19:40Z","lastTransitionTime":"2026-02-28T13:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 13:19:40 crc kubenswrapper[4897]: I0228 13:19:40.734571 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-42lwn"] Feb 28 13:19:40 crc kubenswrapper[4897]: I0228 13:19:40.735007 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-42lwn" Feb 28 13:19:40 crc kubenswrapper[4897]: I0228 13:19:40.737577 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 28 13:19:40 crc kubenswrapper[4897]: I0228 13:19:40.737651 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 28 13:19:40 crc kubenswrapper[4897]: I0228 13:19:40.739170 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 28 13:19:40 crc kubenswrapper[4897]: I0228 13:19:40.739353 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 28 13:19:40 crc kubenswrapper[4897]: I0228 13:19:40.860771 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7131bad-4f1f-42f3-b10c-c49c2aa495d3-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-42lwn\" (UID: \"f7131bad-4f1f-42f3-b10c-c49c2aa495d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-42lwn" Feb 28 13:19:40 crc kubenswrapper[4897]: I0228 13:19:40.860857 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f7131bad-4f1f-42f3-b10c-c49c2aa495d3-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-42lwn\" (UID: \"f7131bad-4f1f-42f3-b10c-c49c2aa495d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-42lwn" Feb 28 13:19:40 crc kubenswrapper[4897]: I0228 13:19:40.860905 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/f7131bad-4f1f-42f3-b10c-c49c2aa495d3-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-42lwn\" (UID: \"f7131bad-4f1f-42f3-b10c-c49c2aa495d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-42lwn" Feb 28 13:19:40 crc kubenswrapper[4897]: I0228 13:19:40.860980 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7131bad-4f1f-42f3-b10c-c49c2aa495d3-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-42lwn\" (UID: \"f7131bad-4f1f-42f3-b10c-c49c2aa495d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-42lwn" Feb 28 13:19:40 crc kubenswrapper[4897]: I0228 13:19:40.861159 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f7131bad-4f1f-42f3-b10c-c49c2aa495d3-service-ca\") pod \"cluster-version-operator-5c965bbfc6-42lwn\" (UID: \"f7131bad-4f1f-42f3-b10c-c49c2aa495d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-42lwn" Feb 28 13:19:40 crc kubenswrapper[4897]: I0228 13:19:40.926185 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 28 13:19:40 crc kubenswrapper[4897]: I0228 13:19:40.936285 4897 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 28 13:19:40 crc kubenswrapper[4897]: I0228 13:19:40.962631 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f7131bad-4f1f-42f3-b10c-c49c2aa495d3-service-ca\") pod \"cluster-version-operator-5c965bbfc6-42lwn\" (UID: \"f7131bad-4f1f-42f3-b10c-c49c2aa495d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-42lwn" Feb 28 13:19:40 crc kubenswrapper[4897]: I0228 13:19:40.962729 4897 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7131bad-4f1f-42f3-b10c-c49c2aa495d3-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-42lwn\" (UID: \"f7131bad-4f1f-42f3-b10c-c49c2aa495d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-42lwn" Feb 28 13:19:40 crc kubenswrapper[4897]: I0228 13:19:40.962764 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f7131bad-4f1f-42f3-b10c-c49c2aa495d3-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-42lwn\" (UID: \"f7131bad-4f1f-42f3-b10c-c49c2aa495d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-42lwn" Feb 28 13:19:40 crc kubenswrapper[4897]: I0228 13:19:40.962794 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f7131bad-4f1f-42f3-b10c-c49c2aa495d3-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-42lwn\" (UID: \"f7131bad-4f1f-42f3-b10c-c49c2aa495d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-42lwn" Feb 28 13:19:40 crc kubenswrapper[4897]: I0228 13:19:40.962841 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7131bad-4f1f-42f3-b10c-c49c2aa495d3-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-42lwn\" (UID: \"f7131bad-4f1f-42f3-b10c-c49c2aa495d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-42lwn" Feb 28 13:19:40 crc kubenswrapper[4897]: I0228 13:19:40.963580 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f7131bad-4f1f-42f3-b10c-c49c2aa495d3-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-42lwn\" (UID: 
\"f7131bad-4f1f-42f3-b10c-c49c2aa495d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-42lwn" Feb 28 13:19:40 crc kubenswrapper[4897]: I0228 13:19:40.963610 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f7131bad-4f1f-42f3-b10c-c49c2aa495d3-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-42lwn\" (UID: \"f7131bad-4f1f-42f3-b10c-c49c2aa495d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-42lwn" Feb 28 13:19:40 crc kubenswrapper[4897]: I0228 13:19:40.964960 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f7131bad-4f1f-42f3-b10c-c49c2aa495d3-service-ca\") pod \"cluster-version-operator-5c965bbfc6-42lwn\" (UID: \"f7131bad-4f1f-42f3-b10c-c49c2aa495d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-42lwn" Feb 28 13:19:40 crc kubenswrapper[4897]: I0228 13:19:40.971486 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7131bad-4f1f-42f3-b10c-c49c2aa495d3-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-42lwn\" (UID: \"f7131bad-4f1f-42f3-b10c-c49c2aa495d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-42lwn" Feb 28 13:19:40 crc kubenswrapper[4897]: I0228 13:19:40.993927 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7131bad-4f1f-42f3-b10c-c49c2aa495d3-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-42lwn\" (UID: \"f7131bad-4f1f-42f3-b10c-c49c2aa495d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-42lwn" Feb 28 13:19:41 crc kubenswrapper[4897]: I0228 13:19:41.058622 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-42lwn" Feb 28 13:19:41 crc kubenswrapper[4897]: W0228 13:19:41.080833 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7131bad_4f1f_42f3_b10c_c49c2aa495d3.slice/crio-5c1d04b26dc3013ca7435c62900e4b026c7a80ea5833e9c150f2c2a352977360 WatchSource:0}: Error finding container 5c1d04b26dc3013ca7435c62900e4b026c7a80ea5833e9c150f2c2a352977360: Status 404 returned error can't find the container with id 5c1d04b26dc3013ca7435c62900e4b026c7a80ea5833e9c150f2c2a352977360 Feb 28 13:19:41 crc kubenswrapper[4897]: I0228 13:19:41.474591 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-42lwn" event={"ID":"f7131bad-4f1f-42f3-b10c-c49c2aa495d3","Type":"ContainerStarted","Data":"9efa659d0d61c53f09d24bdfc397601da40829fb08e504f459f125e7eaed2fe0"} Feb 28 13:19:41 crc kubenswrapper[4897]: I0228 13:19:41.474653 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-42lwn" event={"ID":"f7131bad-4f1f-42f3-b10c-c49c2aa495d3","Type":"ContainerStarted","Data":"5c1d04b26dc3013ca7435c62900e4b026c7a80ea5833e9c150f2c2a352977360"} Feb 28 13:19:41 crc kubenswrapper[4897]: I0228 13:19:41.499533 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-42lwn" podStartSLOduration=139.4995066 podStartE2EDuration="2m19.4995066s" podCreationTimestamp="2026-02-28 13:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:19:41.497400198 +0000 UTC m=+195.739720925" watchObservedRunningTime="2026-02-28 13:19:41.4995066 +0000 UTC m=+195.741827297" Feb 28 13:19:41 crc kubenswrapper[4897]: E0228 13:19:41.898947 4897 
kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 28 13:19:42 crc kubenswrapper[4897]: I0228 13:19:42.456263 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:19:42 crc kubenswrapper[4897]: I0228 13:19:42.456391 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:19:42 crc kubenswrapper[4897]: E0228 13:19:42.456432 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:19:42 crc kubenswrapper[4897]: E0228 13:19:42.456628 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:19:42 crc kubenswrapper[4897]: I0228 13:19:42.457183 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:19:42 crc kubenswrapper[4897]: I0228 13:19:42.457413 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:19:42 crc kubenswrapper[4897]: E0228 13:19:42.457676 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:19:42 crc kubenswrapper[4897]: E0228 13:19:42.457860 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:19:44 crc kubenswrapper[4897]: I0228 13:19:44.455913 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:19:44 crc kubenswrapper[4897]: I0228 13:19:44.455993 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:19:44 crc kubenswrapper[4897]: E0228 13:19:44.456099 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:19:44 crc kubenswrapper[4897]: I0228 13:19:44.456146 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:19:44 crc kubenswrapper[4897]: E0228 13:19:44.456280 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:19:44 crc kubenswrapper[4897]: E0228 13:19:44.456423 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:19:44 crc kubenswrapper[4897]: I0228 13:19:44.456609 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:19:44 crc kubenswrapper[4897]: E0228 13:19:44.456892 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:19:46 crc kubenswrapper[4897]: I0228 13:19:46.456232 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:19:46 crc kubenswrapper[4897]: I0228 13:19:46.456462 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:19:46 crc kubenswrapper[4897]: I0228 13:19:46.456530 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:19:46 crc kubenswrapper[4897]: I0228 13:19:46.456566 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:19:46 crc kubenswrapper[4897]: E0228 13:19:46.457780 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:19:46 crc kubenswrapper[4897]: E0228 13:19:46.457878 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:19:46 crc kubenswrapper[4897]: E0228 13:19:46.457977 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:19:46 crc kubenswrapper[4897]: E0228 13:19:46.458050 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:19:46 crc kubenswrapper[4897]: E0228 13:19:46.900734 4897 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 28 13:19:48 crc kubenswrapper[4897]: I0228 13:19:48.456118 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:19:48 crc kubenswrapper[4897]: I0228 13:19:48.456248 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:19:48 crc kubenswrapper[4897]: E0228 13:19:48.456358 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:19:48 crc kubenswrapper[4897]: I0228 13:19:48.456410 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:19:48 crc kubenswrapper[4897]: I0228 13:19:48.456433 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:19:48 crc kubenswrapper[4897]: E0228 13:19:48.456557 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:19:48 crc kubenswrapper[4897]: E0228 13:19:48.456726 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:19:48 crc kubenswrapper[4897]: E0228 13:19:48.456902 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:19:50 crc kubenswrapper[4897]: I0228 13:19:50.456012 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:19:50 crc kubenswrapper[4897]: I0228 13:19:50.456098 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:19:50 crc kubenswrapper[4897]: E0228 13:19:50.456178 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:19:50 crc kubenswrapper[4897]: I0228 13:19:50.456193 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:19:50 crc kubenswrapper[4897]: I0228 13:19:50.456235 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:19:50 crc kubenswrapper[4897]: E0228 13:19:50.456414 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:19:50 crc kubenswrapper[4897]: E0228 13:19:50.456624 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:19:50 crc kubenswrapper[4897]: E0228 13:19:50.456791 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:19:50 crc kubenswrapper[4897]: I0228 13:19:50.508418 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-k4m7f_cd164967-b99b-47d0-a691-7d8118fa81ce/kube-multus/1.log" Feb 28 13:19:50 crc kubenswrapper[4897]: I0228 13:19:50.508924 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-k4m7f_cd164967-b99b-47d0-a691-7d8118fa81ce/kube-multus/0.log" Feb 28 13:19:50 crc kubenswrapper[4897]: I0228 13:19:50.508983 4897 generic.go:334] "Generic (PLEG): container finished" podID="cd164967-b99b-47d0-a691-7d8118fa81ce" containerID="56214f971268dd96636e53f8cc401bfde201673331066f07d1235c5bd4fef3e5" exitCode=1 Feb 28 13:19:50 crc kubenswrapper[4897]: I0228 13:19:50.509023 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-k4m7f" event={"ID":"cd164967-b99b-47d0-a691-7d8118fa81ce","Type":"ContainerDied","Data":"56214f971268dd96636e53f8cc401bfde201673331066f07d1235c5bd4fef3e5"} Feb 28 13:19:50 crc kubenswrapper[4897]: I0228 13:19:50.509066 4897 scope.go:117] "RemoveContainer" containerID="02d6b2023edab21ebd2873bc7584d2346d725a97a9e21d54cb9cc86e63bec717" Feb 28 13:19:50 crc kubenswrapper[4897]: I0228 13:19:50.509621 4897 scope.go:117] "RemoveContainer" containerID="56214f971268dd96636e53f8cc401bfde201673331066f07d1235c5bd4fef3e5" Feb 28 13:19:50 crc kubenswrapper[4897]: E0228 13:19:50.509837 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-k4m7f_openshift-multus(cd164967-b99b-47d0-a691-7d8118fa81ce)\"" pod="openshift-multus/multus-k4m7f" podUID="cd164967-b99b-47d0-a691-7d8118fa81ce" Feb 28 13:19:51 crc kubenswrapper[4897]: I0228 13:19:51.515389 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-k4m7f_cd164967-b99b-47d0-a691-7d8118fa81ce/kube-multus/1.log" Feb 28 13:19:51 crc kubenswrapper[4897]: E0228 13:19:51.902569 4897 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 28 13:19:52 crc kubenswrapper[4897]: I0228 13:19:52.455529 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:19:52 crc kubenswrapper[4897]: I0228 13:19:52.455683 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:19:52 crc kubenswrapper[4897]: E0228 13:19:52.455809 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:19:52 crc kubenswrapper[4897]: I0228 13:19:52.456030 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:19:52 crc kubenswrapper[4897]: I0228 13:19:52.456058 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:19:52 crc kubenswrapper[4897]: E0228 13:19:52.456114 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:19:52 crc kubenswrapper[4897]: E0228 13:19:52.456257 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:19:52 crc kubenswrapper[4897]: E0228 13:19:52.456440 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:19:54 crc kubenswrapper[4897]: I0228 13:19:54.455717 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:19:54 crc kubenswrapper[4897]: E0228 13:19:54.455874 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:19:54 crc kubenswrapper[4897]: I0228 13:19:54.455965 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:19:54 crc kubenswrapper[4897]: I0228 13:19:54.455978 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:19:54 crc kubenswrapper[4897]: I0228 13:19:54.456010 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:19:54 crc kubenswrapper[4897]: E0228 13:19:54.456499 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:19:54 crc kubenswrapper[4897]: E0228 13:19:54.456565 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:19:54 crc kubenswrapper[4897]: I0228 13:19:54.456655 4897 scope.go:117] "RemoveContainer" containerID="bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa" Feb 28 13:19:54 crc kubenswrapper[4897]: E0228 13:19:54.456713 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:19:55 crc kubenswrapper[4897]: I0228 13:19:55.534222 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rjlcm_0e63af1c-1b83-44b6-9872-2dfefa37d433/ovnkube-controller/3.log" Feb 28 13:19:55 crc kubenswrapper[4897]: I0228 13:19:55.539326 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" event={"ID":"0e63af1c-1b83-44b6-9872-2dfefa37d433","Type":"ContainerStarted","Data":"05719f06786b6a9df5858de554fd63fbabbe5dd22bc9d5dafc4bacd47f0eb0de"} Feb 28 13:19:55 crc kubenswrapper[4897]: I0228 13:19:55.539728 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:19:55 crc kubenswrapper[4897]: I0228 13:19:55.556011 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5tms6"] Feb 28 13:19:55 crc kubenswrapper[4897]: I0228 13:19:55.556093 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:19:55 crc kubenswrapper[4897]: E0228 13:19:55.556180 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:19:55 crc kubenswrapper[4897]: I0228 13:19:55.589230 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" podStartSLOduration=153.589202893 podStartE2EDuration="2m33.589202893s" podCreationTimestamp="2026-02-28 13:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:19:55.586868606 +0000 UTC m=+209.829189333" watchObservedRunningTime="2026-02-28 13:19:55.589202893 +0000 UTC m=+209.831523560" Feb 28 13:19:56 crc kubenswrapper[4897]: I0228 13:19:56.456235 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:19:56 crc kubenswrapper[4897]: I0228 13:19:56.456337 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:19:56 crc kubenswrapper[4897]: I0228 13:19:56.456364 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:19:56 crc kubenswrapper[4897]: E0228 13:19:56.458110 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:19:56 crc kubenswrapper[4897]: E0228 13:19:56.458249 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 13:19:56 crc kubenswrapper[4897]: E0228 13:19:56.458449 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 13:19:56 crc kubenswrapper[4897]: E0228 13:19:56.904269 4897 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 28 13:19:57 crc kubenswrapper[4897]: I0228 13:19:57.456002 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6" Feb 28 13:19:57 crc kubenswrapper[4897]: E0228 13:19:57.456202 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e" Feb 28 13:19:58 crc kubenswrapper[4897]: I0228 13:19:58.455703 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:19:58 crc kubenswrapper[4897]: I0228 13:19:58.455726 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 13:19:58 crc kubenswrapper[4897]: I0228 13:19:58.456026 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 13:19:58 crc kubenswrapper[4897]: E0228 13:19:58.456119 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 13:19:58 crc kubenswrapper[4897]: E0228 13:19:58.455920 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 28 13:19:58 crc kubenswrapper[4897]: E0228 13:19:58.456298 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 28 13:19:59 crc kubenswrapper[4897]: I0228 13:19:59.455431 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6"
Feb 28 13:19:59 crc kubenswrapper[4897]: E0228 13:19:59.455584 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e"
Feb 28 13:20:00 crc kubenswrapper[4897]: I0228 13:20:00.456011 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 28 13:20:00 crc kubenswrapper[4897]: I0228 13:20:00.456027 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 28 13:20:00 crc kubenswrapper[4897]: E0228 13:20:00.456189 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 28 13:20:00 crc kubenswrapper[4897]: E0228 13:20:00.456458 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 28 13:20:00 crc kubenswrapper[4897]: I0228 13:20:00.456688 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 28 13:20:00 crc kubenswrapper[4897]: E0228 13:20:00.456776 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 28 13:20:01 crc kubenswrapper[4897]: I0228 13:20:01.455804 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6"
Feb 28 13:20:01 crc kubenswrapper[4897]: E0228 13:20:01.455936 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e"
Feb 28 13:20:01 crc kubenswrapper[4897]: E0228 13:20:01.905827 4897 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 28 13:20:02 crc kubenswrapper[4897]: I0228 13:20:02.455565 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 28 13:20:02 crc kubenswrapper[4897]: I0228 13:20:02.455565 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 28 13:20:02 crc kubenswrapper[4897]: I0228 13:20:02.455594 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 28 13:20:02 crc kubenswrapper[4897]: E0228 13:20:02.455768 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 28 13:20:02 crc kubenswrapper[4897]: E0228 13:20:02.455860 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 28 13:20:02 crc kubenswrapper[4897]: E0228 13:20:02.455929 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 28 13:20:03 crc kubenswrapper[4897]: I0228 13:20:03.356931 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm"
Feb 28 13:20:03 crc kubenswrapper[4897]: I0228 13:20:03.455269 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6"
Feb 28 13:20:03 crc kubenswrapper[4897]: E0228 13:20:03.455496 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e"
Feb 28 13:20:04 crc kubenswrapper[4897]: I0228 13:20:04.455557 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 28 13:20:04 crc kubenswrapper[4897]: I0228 13:20:04.455615 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 28 13:20:04 crc kubenswrapper[4897]: I0228 13:20:04.455974 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 28 13:20:04 crc kubenswrapper[4897]: E0228 13:20:04.456127 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 28 13:20:04 crc kubenswrapper[4897]: E0228 13:20:04.456375 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 28 13:20:04 crc kubenswrapper[4897]: E0228 13:20:04.456610 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 28 13:20:04 crc kubenswrapper[4897]: I0228 13:20:04.457120 4897 scope.go:117] "RemoveContainer" containerID="56214f971268dd96636e53f8cc401bfde201673331066f07d1235c5bd4fef3e5"
Feb 28 13:20:05 crc kubenswrapper[4897]: I0228 13:20:05.455935 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6"
Feb 28 13:20:05 crc kubenswrapper[4897]: E0228 13:20:05.456560 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5tms6" podUID="8b95b3e0-28e1-4b26-86a3-bd61c5528b3e"
Feb 28 13:20:05 crc kubenswrapper[4897]: I0228 13:20:05.583095 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-k4m7f_cd164967-b99b-47d0-a691-7d8118fa81ce/kube-multus/1.log"
Feb 28 13:20:05 crc kubenswrapper[4897]: I0228 13:20:05.583206 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-k4m7f" event={"ID":"cd164967-b99b-47d0-a691-7d8118fa81ce","Type":"ContainerStarted","Data":"3f09bce6157f789ce56ef9ba541b09f9f3f4564b8294d903a1065eaee6b33c56"}
Feb 28 13:20:06 crc kubenswrapper[4897]: I0228 13:20:06.455980 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 28 13:20:06 crc kubenswrapper[4897]: I0228 13:20:06.456045 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 28 13:20:06 crc kubenswrapper[4897]: I0228 13:20:06.457474 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 28 13:20:06 crc kubenswrapper[4897]: E0228 13:20:06.457980 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 28 13:20:06 crc kubenswrapper[4897]: E0228 13:20:06.458618 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 28 13:20:06 crc kubenswrapper[4897]: E0228 13:20:06.458480 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 28 13:20:07 crc kubenswrapper[4897]: I0228 13:20:07.455231 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6"
Feb 28 13:20:07 crc kubenswrapper[4897]: I0228 13:20:07.458650 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 28 13:20:07 crc kubenswrapper[4897]: I0228 13:20:07.460024 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 28 13:20:08 crc kubenswrapper[4897]: I0228 13:20:08.456358 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 28 13:20:08 crc kubenswrapper[4897]: I0228 13:20:08.456460 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 28 13:20:08 crc kubenswrapper[4897]: I0228 13:20:08.457883 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 28 13:20:08 crc kubenswrapper[4897]: I0228 13:20:08.459746 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 28 13:20:08 crc kubenswrapper[4897]: I0228 13:20:08.460748 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 28 13:20:08 crc kubenswrapper[4897]: I0228 13:20:08.460976 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 28 13:20:08 crc kubenswrapper[4897]: I0228 13:20:08.461102 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 28 13:20:10 crc kubenswrapper[4897]: I0228 13:20:10.894924 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 28 13:20:10 crc kubenswrapper[4897]: I0228 13:20:10.895102 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 28 13:20:10 crc kubenswrapper[4897]: E0228 13:20:10.895169 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:22:12.895131442 +0000 UTC m=+347.137452129 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 28 13:20:10 crc kubenswrapper[4897]: I0228 13:20:10.895236 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 28 13:20:10 crc kubenswrapper[4897]: I0228 13:20:10.895355 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 28 13:20:10 crc kubenswrapper[4897]: I0228 13:20:10.899435 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 28 13:20:10 crc kubenswrapper[4897]: I0228 13:20:10.908296 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 28 13:20:10 crc kubenswrapper[4897]: I0228 13:20:10.909048 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 28 13:20:10 crc kubenswrapper[4897]: I0228 13:20:10.997074 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 28 13:20:10 crc kubenswrapper[4897]: I0228 13:20:10.997150 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8b95b3e0-28e1-4b26-86a3-bd61c5528b3e-metrics-certs\") pod \"network-metrics-daemon-5tms6\" (UID: \"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\") " pod="openshift-multus/network-metrics-daemon-5tms6"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.002542 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.002966 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8b95b3e0-28e1-4b26-86a3-bd61c5528b3e-metrics-certs\") pod \"network-metrics-daemon-5tms6\" (UID: \"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e\") " pod="openshift-multus/network-metrics-daemon-5tms6"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.078119 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5tms6"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.178763 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.196031 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.205758 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.351520 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.377875 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5tms6"]
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.396204 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-95h9j"]
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.405999 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-7pzqp"]
Feb 28 13:20:11 crc kubenswrapper[4897]: W0228 13:20:11.397589 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b95b3e0_28e1_4b26_86a3_bd61c5528b3e.slice/crio-d14ed2ecbfe0c77af08fd33c87cd7cbaf56de7ac901814f541c3d5093755fab4 WatchSource:0}: Error finding container d14ed2ecbfe0c77af08fd33c87cd7cbaf56de7ac901814f541c3d5093755fab4: Status 404 returned error can't find the container with id d14ed2ecbfe0c77af08fd33c87cd7cbaf56de7ac901814f541c3d5093755fab4
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.406294 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-95h9j"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.407465 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gzgt9"]
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.407640 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fq58q"]
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.407996 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-k2ztc"]
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.408269 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2ztc"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.408561 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7pzqp"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.408741 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-gzgt9"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.409252 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-fq58q"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.410059 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.410082 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.410204 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.410437 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.411715 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g6wd9"]
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.412129 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-fv2rz"]
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.412465 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-rd9tl"]
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.412799 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-rd9tl"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.413157 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g6wd9"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.413420 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-fv2rz"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.414194 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fb8mb"]
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.414529 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.414584 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fb8mb"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.414689 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.414830 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.414897 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.414928 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.416213 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.416414 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.422298 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.427764 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ccsdd"]
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.428705 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ccsdd"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.429391 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.440415 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.440937 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.442211 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.443822 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.460536 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.460773 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.460977 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.461186 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.461364 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.461516 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.462264 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.463456 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.476617 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.476873 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.477111 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.478798 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.479264 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.479483 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.479569 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.479586 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jbqv4"]
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.480047 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jbqv4"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.480426 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.480460 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.480630 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.480796 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.480926 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-l7m8v"]
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.481443 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-l7m8v"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.481382 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.486484 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-nx8lb"]
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.487076 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nx8lb"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.488506 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.488625 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.488666 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.488790 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.488997 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.489372 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.489490 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.489603 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.489719 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.489818 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.489895 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.489974 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.490272 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.490771 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.494526 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-xfrkv"]
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.495191 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-nsbjk"]
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.495722 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-nsbjk"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.495908 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xfrkv"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.501695 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.502640 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.508236 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.509236 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.510183 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.510755 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.510868 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.511045 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vk9dl"]
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.511166 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.511565 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-mfx26"]
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.511558 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c66153dc-f2e3-4798-876c-da6826dea18c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-g6wd9\" (UID: \"c66153dc-f2e3-4798-876c-da6826dea18c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g6wd9"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.511708 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6a6fd805-dce5-4ee6-82e0-9fce53deed7f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ccsdd\" (UID: \"6a6fd805-dce5-4ee6-82e0-9fce53deed7f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ccsdd"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.511729 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c66153dc-f2e3-4798-876c-da6826dea18c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-g6wd9\" (UID: \"c66153dc-f2e3-4798-876c-da6826dea18c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g6wd9"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.511747 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e1c97b9-64a9-4e15-947f-16a7d1dd4271-serving-cert\") pod \"authentication-operator-69f744f599-gzgt9\" (UID: \"6e1c97b9-64a9-4e15-947f-16a7d1dd4271\") "
pod="openshift-authentication-operator/authentication-operator-69f744f599-gzgt9" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.511764 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6a6fd805-dce5-4ee6-82e0-9fce53deed7f-proxy-tls\") pod \"machine-config-operator-74547568cd-ccsdd\" (UID: \"6a6fd805-dce5-4ee6-82e0-9fce53deed7f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ccsdd" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.511781 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3423cf07-c57b-41f3-82da-f497649699db-trusted-ca-bundle\") pod \"console-f9d7485db-rd9tl\" (UID: \"3423cf07-c57b-41f3-82da-f497649699db\") " pod="openshift-console/console-f9d7485db-rd9tl" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.511795 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c66153dc-f2e3-4798-876c-da6826dea18c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-g6wd9\" (UID: \"c66153dc-f2e3-4798-876c-da6826dea18c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g6wd9" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.511846 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e1c97b9-64a9-4e15-947f-16a7d1dd4271-config\") pod \"authentication-operator-69f744f599-gzgt9\" (UID: \"6e1c97b9-64a9-4e15-947f-16a7d1dd4271\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gzgt9" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.511867 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/34853f18-08de-4bb6-8fc1-9ae1d51b314a-config\") pod \"console-operator-58897d9998-fq58q\" (UID: \"34853f18-08de-4bb6-8fc1-9ae1d51b314a\") " pod="openshift-console-operator/console-operator-58897d9998-fq58q" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.511883 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/26da04fb-0109-4f7f-a283-f489e9b4596f-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-k2ztc\" (UID: \"26da04fb-0109-4f7f-a283-f489e9b4596f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2ztc" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.511900 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt6l6\" (UniqueName: \"kubernetes.io/projected/52ee48cc-65ac-4228-821c-f9c70d249ebf-kube-api-access-nt6l6\") pod \"openshift-controller-manager-operator-756b6f6bc6-fb8mb\" (UID: \"52ee48cc-65ac-4228-821c-f9c70d249ebf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fb8mb" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.511916 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3423cf07-c57b-41f3-82da-f497649699db-console-config\") pod \"console-f9d7485db-rd9tl\" (UID: \"3423cf07-c57b-41f3-82da-f497649699db\") " pod="openshift-console/console-f9d7485db-rd9tl" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.511931 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntszw\" (UniqueName: \"kubernetes.io/projected/34853f18-08de-4bb6-8fc1-9ae1d51b314a-kube-api-access-ntszw\") pod \"console-operator-58897d9998-fq58q\" (UID: \"34853f18-08de-4bb6-8fc1-9ae1d51b314a\") " 
pod="openshift-console-operator/console-operator-58897d9998-fq58q" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.511947 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26da04fb-0109-4f7f-a283-f489e9b4596f-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-k2ztc\" (UID: \"26da04fb-0109-4f7f-a283-f489e9b4596f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2ztc" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.511964 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/26da04fb-0109-4f7f-a283-f489e9b4596f-audit-policies\") pod \"apiserver-7bbb656c7d-k2ztc\" (UID: \"26da04fb-0109-4f7f-a283-f489e9b4596f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2ztc" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.511980 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/26da04fb-0109-4f7f-a283-f489e9b4596f-audit-dir\") pod \"apiserver-7bbb656c7d-k2ztc\" (UID: \"26da04fb-0109-4f7f-a283-f489e9b4596f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2ztc" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.511995 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3423cf07-c57b-41f3-82da-f497649699db-service-ca\") pod \"console-f9d7485db-rd9tl\" (UID: \"3423cf07-c57b-41f3-82da-f497649699db\") " pod="openshift-console/console-f9d7485db-rd9tl" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.512004 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-25vlq"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.512012 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jlg6\" (UniqueName: \"kubernetes.io/projected/536efe5c-a55e-48a2-920e-cdb34a2bce57-kube-api-access-7jlg6\") pod \"machine-approver-56656f9798-7pzqp\" (UID: \"536efe5c-a55e-48a2-920e-cdb34a2bce57\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7pzqp" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.512013 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mfx26" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.512052 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vk9dl" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.512059 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4km6\" (UniqueName: \"kubernetes.io/projected/3423cf07-c57b-41f3-82da-f497649699db-kube-api-access-t4km6\") pod \"console-f9d7485db-rd9tl\" (UID: \"3423cf07-c57b-41f3-82da-f497649699db\") " pod="openshift-console/console-f9d7485db-rd9tl" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.512129 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/536efe5c-a55e-48a2-920e-cdb34a2bce57-machine-approver-tls\") pod \"machine-approver-56656f9798-7pzqp\" (UID: \"536efe5c-a55e-48a2-920e-cdb34a2bce57\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7pzqp" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.511576 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.512180 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e1c97b9-64a9-4e15-947f-16a7d1dd4271-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gzgt9\" (UID: \"6e1c97b9-64a9-4e15-947f-16a7d1dd4271\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gzgt9" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.511623 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.512220 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/26da04fb-0109-4f7f-a283-f489e9b4596f-etcd-client\") pod \"apiserver-7bbb656c7d-k2ztc\" (UID: \"26da04fb-0109-4f7f-a283-f489e9b4596f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2ztc" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.512247 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjk5p\" (UniqueName: \"kubernetes.io/projected/6a6fd805-dce5-4ee6-82e0-9fce53deed7f-kube-api-access-rjk5p\") pod \"machine-config-operator-74547568cd-ccsdd\" (UID: \"6a6fd805-dce5-4ee6-82e0-9fce53deed7f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ccsdd" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.512288 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52ee48cc-65ac-4228-821c-f9c70d249ebf-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-fb8mb\" (UID: \"52ee48cc-65ac-4228-821c-f9c70d249ebf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fb8mb" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.512410 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61f10600-21dd-4043-af69-aa0fdfd246f7-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-95h9j\" (UID: \"61f10600-21dd-4043-af69-aa0fdfd246f7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-95h9j" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.512457 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/536efe5c-a55e-48a2-920e-cdb34a2bce57-auth-proxy-config\") pod \"machine-approver-56656f9798-7pzqp\" (UID: \"536efe5c-a55e-48a2-920e-cdb34a2bce57\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7pzqp" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.512482 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61f10600-21dd-4043-af69-aa0fdfd246f7-client-ca\") pod \"controller-manager-879f6c89f-95h9j\" (UID: \"61f10600-21dd-4043-af69-aa0fdfd246f7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-95h9j" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.512497 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61f10600-21dd-4043-af69-aa0fdfd246f7-config\") pod \"controller-manager-879f6c89f-95h9j\" (UID: \"61f10600-21dd-4043-af69-aa0fdfd246f7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-95h9j" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.512513 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sldxp\" (UniqueName: \"kubernetes.io/projected/61f10600-21dd-4043-af69-aa0fdfd246f7-kube-api-access-sldxp\") pod \"controller-manager-879f6c89f-95h9j\" (UID: 
\"61f10600-21dd-4043-af69-aa0fdfd246f7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-95h9j" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.512527 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-25vlq" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.512529 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26da04fb-0109-4f7f-a283-f489e9b4596f-serving-cert\") pod \"apiserver-7bbb656c7d-k2ztc\" (UID: \"26da04fb-0109-4f7f-a283-f489e9b4596f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2ztc" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.512858 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61f10600-21dd-4043-af69-aa0fdfd246f7-serving-cert\") pod \"controller-manager-879f6c89f-95h9j\" (UID: \"61f10600-21dd-4043-af69-aa0fdfd246f7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-95h9j" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.512884 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34853f18-08de-4bb6-8fc1-9ae1d51b314a-serving-cert\") pod \"console-operator-58897d9998-fq58q\" (UID: \"34853f18-08de-4bb6-8fc1-9ae1d51b314a\") " pod="openshift-console-operator/console-operator-58897d9998-fq58q" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.512908 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3423cf07-c57b-41f3-82da-f497649699db-console-serving-cert\") pod \"console-f9d7485db-rd9tl\" (UID: \"3423cf07-c57b-41f3-82da-f497649699db\") " 
pod="openshift-console/console-f9d7485db-rd9tl" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.512927 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3423cf07-c57b-41f3-82da-f497649699db-oauth-serving-cert\") pod \"console-f9d7485db-rd9tl\" (UID: \"3423cf07-c57b-41f3-82da-f497649699db\") " pod="openshift-console/console-f9d7485db-rd9tl" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.512953 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e1c97b9-64a9-4e15-947f-16a7d1dd4271-service-ca-bundle\") pod \"authentication-operator-69f744f599-gzgt9\" (UID: \"6e1c97b9-64a9-4e15-947f-16a7d1dd4271\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gzgt9" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.512969 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6a6fd805-dce5-4ee6-82e0-9fce53deed7f-images\") pod \"machine-config-operator-74547568cd-ccsdd\" (UID: \"6a6fd805-dce5-4ee6-82e0-9fce53deed7f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ccsdd" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.512987 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/26da04fb-0109-4f7f-a283-f489e9b4596f-encryption-config\") pod \"apiserver-7bbb656c7d-k2ztc\" (UID: \"26da04fb-0109-4f7f-a283-f489e9b4596f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2ztc" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.513005 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzdr2\" (UniqueName: 
\"kubernetes.io/projected/4f2510cb-e89f-49a0-b5cd-aca1a5c51178-kube-api-access-nzdr2\") pod \"downloads-7954f5f757-fv2rz\" (UID: \"4f2510cb-e89f-49a0-b5cd-aca1a5c51178\") " pod="openshift-console/downloads-7954f5f757-fv2rz" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.513023 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3423cf07-c57b-41f3-82da-f497649699db-console-oauth-config\") pod \"console-f9d7485db-rd9tl\" (UID: \"3423cf07-c57b-41f3-82da-f497649699db\") " pod="openshift-console/console-f9d7485db-rd9tl" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.513063 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9n2z\" (UniqueName: \"kubernetes.io/projected/c66153dc-f2e3-4798-876c-da6826dea18c-kube-api-access-r9n2z\") pod \"cluster-image-registry-operator-dc59b4c8b-g6wd9\" (UID: \"c66153dc-f2e3-4798-876c-da6826dea18c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g6wd9" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.513090 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5qlz\" (UniqueName: \"kubernetes.io/projected/26da04fb-0109-4f7f-a283-f489e9b4596f-kube-api-access-k5qlz\") pod \"apiserver-7bbb656c7d-k2ztc\" (UID: \"26da04fb-0109-4f7f-a283-f489e9b4596f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2ztc" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.513125 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52ee48cc-65ac-4228-821c-f9c70d249ebf-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-fb8mb\" (UID: \"52ee48cc-65ac-4228-821c-f9c70d249ebf\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fb8mb" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.513147 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7mkr\" (UniqueName: \"kubernetes.io/projected/6e1c97b9-64a9-4e15-947f-16a7d1dd4271-kube-api-access-g7mkr\") pod \"authentication-operator-69f744f599-gzgt9\" (UID: \"6e1c97b9-64a9-4e15-947f-16a7d1dd4271\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gzgt9" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.513171 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/536efe5c-a55e-48a2-920e-cdb34a2bce57-config\") pod \"machine-approver-56656f9798-7pzqp\" (UID: \"536efe5c-a55e-48a2-920e-cdb34a2bce57\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7pzqp" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.513188 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/34853f18-08de-4bb6-8fc1-9ae1d51b314a-trusted-ca\") pod \"console-operator-58897d9998-fq58q\" (UID: \"34853f18-08de-4bb6-8fc1-9ae1d51b314a\") " pod="openshift-console-operator/console-operator-58897d9998-fq58q" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.514138 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.514344 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.514433 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 28 13:20:11 
crc kubenswrapper[4897]: I0228 13:20:11.514507 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.515132 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.515293 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.515450 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.515557 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.515654 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.515796 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.516436 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.518272 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538078-hj8mj"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.518808 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.519037 4897 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-blf86"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.519537 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-blf86" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.521511 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-glzrp"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.522503 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-glzrp" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.522933 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538078-hj8mj" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.525556 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.527102 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.527403 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.527789 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.528053 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.528708 4897 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.529003 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.529568 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.529946 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.530054 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6st7l"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.531428 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.531988 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.533383 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6st7l"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.533500 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.534141 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.534274 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.534779 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.547156 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.547522 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.548738 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.548856 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.548971 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.549711 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.552133 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.552438 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.552543 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-84hkx"]
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.553078 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-84hkx"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.554221 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.555208 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.558255 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.558404 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.558459 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.559806 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7wzmt"]
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.560276 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-7wzmt"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.560694 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-5fwp4"]
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.561236 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-5fwp4"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.567006 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.567004 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wgmqx"]
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.567750 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wgmqx"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.567801 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bxsf4"]
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.568884 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.570911 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-t87x8"]
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.570968 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bxsf4"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.571551 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-87qp9"]
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.572133 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-t87x8"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.572265 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29538075-ts824"]
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.572630 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538080-qcrrw"]
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.572706 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-87qp9"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.572789 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29538075-ts824"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.573279 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2f58z"]
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.573726 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l5dbb"]
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.573757 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538080-qcrrw"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.573965 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2f58z"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.574144 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l5dbb"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.575064 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-zkvs9"]
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.578863 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jwnlh"]
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.579163 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rr6bc"]
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.579553 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-zkvs9"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.580479 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jwnlh"
Feb 28 13:20:11 crc kubenswrapper[4897]: W0228 13:20:11.590799 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-9c0f48b91bfe49d45c006dd19116accc5341bbde0e83cc8ce3b7f5d5c34465d6 WatchSource:0}: Error finding container 9c0f48b91bfe49d45c006dd19116accc5341bbde0e83cc8ce3b7f5d5c34465d6: Status 404 returned error can't find the container with id 9c0f48b91bfe49d45c006dd19116accc5341bbde0e83cc8ce3b7f5d5c34465d6
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.598190 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-95h9j"]
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.598231 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-z4hrw"]
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.600027 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lb9zj"]
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.601611 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fq58q"]
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.601633 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-td8r5"]
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.602125 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-k2ztc"]
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.602144 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gzgt9"]
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.602156 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fb8mb"]
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.602165 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-fv2rz"]
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.602176 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-k72ms"]
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.603569 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-rr6bc"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.603717 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-z4hrw"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.603825 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-td8r5"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.603899 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lb9zj"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.604529 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-k72ms"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.607706 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.608725 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"9c0f48b91bfe49d45c006dd19116accc5341bbde0e83cc8ce3b7f5d5c34465d6"}
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.610706 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5tms6" event={"ID":"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e","Type":"ContainerStarted","Data":"d14ed2ecbfe0c77af08fd33c87cd7cbaf56de7ac901814f541c3d5093755fab4"}
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.611384 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"4fff3e3185a69ca2e668390e92ddd6bbad5498d5732b077dfa61bfe17b70969b"}
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.613847 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.614059 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3423cf07-c57b-41f3-82da-f497649699db-service-ca\") pod \"console-f9d7485db-rd9tl\" (UID: \"3423cf07-c57b-41f3-82da-f497649699db\") " pod="openshift-console/console-f9d7485db-rd9tl"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.614095 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2255cdfc-6996-4567-ba4d-b1b609f1264c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l5dbb\" (UID: \"2255cdfc-6996-4567-ba4d-b1b609f1264c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l5dbb"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.614116 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2255cdfc-6996-4567-ba4d-b1b609f1264c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l5dbb\" (UID: \"2255cdfc-6996-4567-ba4d-b1b609f1264c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l5dbb"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.614137 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jlg6\" (UniqueName: \"kubernetes.io/projected/536efe5c-a55e-48a2-920e-cdb34a2bce57-kube-api-access-7jlg6\") pod \"machine-approver-56656f9798-7pzqp\" (UID: \"536efe5c-a55e-48a2-920e-cdb34a2bce57\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7pzqp"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.614160 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5a185ef3-d57d-4925-b2d6-6de53cf0d0f2-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-xfrkv\" (UID: \"5a185ef3-d57d-4925-b2d6-6de53cf0d0f2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xfrkv"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.614179 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4km6\" (UniqueName: \"kubernetes.io/projected/3423cf07-c57b-41f3-82da-f497649699db-kube-api-access-t4km6\") pod \"console-f9d7485db-rd9tl\" (UID: \"3423cf07-c57b-41f3-82da-f497649699db\") " pod="openshift-console/console-f9d7485db-rd9tl"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.614529 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/342631a0-9c4d-4e4f-9743-4d13ea740a55-metrics-certs\") pod \"router-default-5444994796-5fwp4\" (UID: \"342631a0-9c4d-4e4f-9743-4d13ea740a55\") " pod="openshift-ingress/router-default-5444994796-5fwp4"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.614596 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbg5d\" (UniqueName: \"kubernetes.io/projected/c97c55d8-5260-43bc-aaf7-e217a748b83f-kube-api-access-cbg5d\") pod \"service-ca-9c57cc56f-7wzmt\" (UID: \"c97c55d8-5260-43bc-aaf7-e217a748b83f\") " pod="openshift-service-ca/service-ca-9c57cc56f-7wzmt"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.614630 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ab51603-514f-4ade-8bf2-6281d27a579f-serving-cert\") pod \"openshift-config-operator-7777fb866f-mfx26\" (UID: \"4ab51603-514f-4ade-8bf2-6281d27a579f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mfx26"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.615101 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52ee48cc-65ac-4228-821c-f9c70d249ebf-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-fb8mb\" (UID: \"52ee48cc-65ac-4228-821c-f9c70d249ebf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fb8mb"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.615207 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9a147d2f-de25-4ba1-8858-392c56b60a20-audit-dir\") pod \"apiserver-76f77b778f-t87x8\" (UID: \"9a147d2f-de25-4ba1-8858-392c56b60a20\") " pod="openshift-apiserver/apiserver-76f77b778f-t87x8"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.615249 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xhgd\" (UniqueName: \"kubernetes.io/projected/32420c77-c3bf-489a-b622-a912ea4c983c-kube-api-access-5xhgd\") pod \"catalog-operator-68c6474976-blf86\" (UID: \"32420c77-c3bf-489a-b622-a912ea4c983c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-blf86"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.615271 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-84hkx\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " pod="openshift-authentication/oauth-openshift-558db77b4-84hkx"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.615292 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-84hkx\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " pod="openshift-authentication/oauth-openshift-558db77b4-84hkx"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.615358 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3423cf07-c57b-41f3-82da-f497649699db-service-ca\") pod \"console-f9d7485db-rd9tl\" (UID: \"3423cf07-c57b-41f3-82da-f497649699db\") " pod="openshift-console/console-f9d7485db-rd9tl"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.615400 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8p57\" (UniqueName: \"kubernetes.io/projected/342631a0-9c4d-4e4f-9743-4d13ea740a55-kube-api-access-z8p57\") pod \"router-default-5444994796-5fwp4\" (UID: \"342631a0-9c4d-4e4f-9743-4d13ea740a55\") " pod="openshift-ingress/router-default-5444994796-5fwp4"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.615428 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4lgd\" (UniqueName: \"kubernetes.io/projected/a81ddd0f-39bc-4645-94b0-38869e4afba3-kube-api-access-q4lgd\") pod \"kube-storage-version-migrator-operator-b67b599dd-87qp9\" (UID: \"a81ddd0f-39bc-4645-94b0-38869e4afba3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-87qp9"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.615455 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61f10600-21dd-4043-af69-aa0fdfd246f7-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-95h9j\" (UID: \"61f10600-21dd-4043-af69-aa0fdfd246f7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-95h9j"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.615574 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52ee48cc-65ac-4228-821c-f9c70d249ebf-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-fb8mb\" (UID: \"52ee48cc-65ac-4228-821c-f9c70d249ebf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fb8mb"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.615474 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9a147d2f-de25-4ba1-8858-392c56b60a20-audit\") pod \"apiserver-76f77b778f-t87x8\" (UID: \"9a147d2f-de25-4ba1-8858-392c56b60a20\") " pod="openshift-apiserver/apiserver-76f77b778f-t87x8"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.615657 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/49308413-0bd0-4aef-8d1b-451b077e6996-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-glzrp\" (UID: \"49308413-0bd0-4aef-8d1b-451b077e6996\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-glzrp"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.615679 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntptc\" (UniqueName: \"kubernetes.io/projected/79743a51-c0b2-45b2-99d3-385e0b2f2c6f-kube-api-access-ntptc\") pod \"auto-csr-approver-29538078-hj8mj\" (UID: \"79743a51-c0b2-45b2-99d3-385e0b2f2c6f\") " pod="openshift-infra/auto-csr-approver-29538078-hj8mj"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.615708 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/536efe5c-a55e-48a2-920e-cdb34a2bce57-auth-proxy-config\") pod \"machine-approver-56656f9798-7pzqp\" (UID: \"536efe5c-a55e-48a2-920e-cdb34a2bce57\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7pzqp"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.615727 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/32420c77-c3bf-489a-b622-a912ea4c983c-profile-collector-cert\") pod \"catalog-operator-68c6474976-blf86\" (UID: \"32420c77-c3bf-489a-b622-a912ea4c983c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-blf86"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.615750 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61f10600-21dd-4043-af69-aa0fdfd246f7-config\") pod \"controller-manager-879f6c89f-95h9j\" (UID: \"61f10600-21dd-4043-af69-aa0fdfd246f7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-95h9j"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.615769 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26da04fb-0109-4f7f-a283-f489e9b4596f-serving-cert\") pod \"apiserver-7bbb656c7d-k2ztc\" (UID: \"26da04fb-0109-4f7f-a283-f489e9b4596f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2ztc"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.615788 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvcsj\" (UniqueName: \"kubernetes.io/projected/4ba075a8-61d1-4147-80ea-03906930ff87-kube-api-access-tvcsj\") pod \"package-server-manager-789f6589d5-vk9dl\" (UID: \"4ba075a8-61d1-4147-80ea-03906930ff87\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vk9dl"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.615807 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9xph\" (UniqueName: \"kubernetes.io/projected/a52c7385-4178-4038-93b0-5cd758958e80-kube-api-access-n9xph\") pod \"auto-csr-approver-29538080-qcrrw\" (UID: \"a52c7385-4178-4038-93b0-5cd758958e80\") " pod="openshift-infra/auto-csr-approver-29538080-qcrrw"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.615829 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61f10600-21dd-4043-af69-aa0fdfd246f7-serving-cert\") pod \"controller-manager-879f6c89f-95h9j\" (UID: \"61f10600-21dd-4043-af69-aa0fdfd246f7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-95h9j"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.615852 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3423cf07-c57b-41f3-82da-f497649699db-console-serving-cert\") pod \"console-f9d7485db-rd9tl\" (UID: \"3423cf07-c57b-41f3-82da-f497649699db\") " pod="openshift-console/console-f9d7485db-rd9tl"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.615872 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e1c97b9-64a9-4e15-947f-16a7d1dd4271-service-ca-bundle\") pod \"authentication-operator-69f744f599-gzgt9\" (UID: \"6e1c97b9-64a9-4e15-947f-16a7d1dd4271\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gzgt9"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.615892 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5a185ef3-d57d-4925-b2d6-6de53cf0d0f2-proxy-tls\") pod \"machine-config-controller-84d6567774-xfrkv\" (UID: \"5a185ef3-d57d-4925-b2d6-6de53cf0d0f2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xfrkv"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.615909 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae9a4771-065d-4f75-8d15-0ea8525cbaf4-serving-cert\") pod \"route-controller-manager-6576b87f9c-jwnlh\" (UID: \"ae9a4771-065d-4f75-8d15-0ea8525cbaf4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jwnlh"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.615928 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3423cf07-c57b-41f3-82da-f497649699db-console-oauth-config\") pod \"console-f9d7485db-rd9tl\" (UID: \"3423cf07-c57b-41f3-82da-f497649699db\") " pod="openshift-console/console-f9d7485db-rd9tl"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.615947 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/342631a0-9c4d-4e4f-9743-4d13ea740a55-stats-auth\") pod \"router-default-5444994796-5fwp4\" (UID: \"342631a0-9c4d-4e4f-9743-4d13ea740a55\") " pod="openshift-ingress/router-default-5444994796-5fwp4"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.615965 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4ba075a8-61d1-4147-80ea-03906930ff87-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-vk9dl\" (UID: \"4ba075a8-61d1-4147-80ea-03906930ff87\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vk9dl"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.615984 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9a9c0df9-c002-43ec-bc67-dee3c0862056-etcd-service-ca\") pod \"etcd-operator-b45778765-25vlq\" (UID: \"9a9c0df9-c002-43ec-bc67-dee3c0862056\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25vlq"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.616007 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drq49\" (UniqueName: \"kubernetes.io/projected/260888c5-6c67-47a8-9903-a25a8a1c6b7d-kube-api-access-drq49\") pod \"dns-operator-744455d44c-nsbjk\" (UID: \"260888c5-6c67-47a8-9903-a25a8a1c6b7d\") " pod="openshift-dns-operator/dns-operator-744455d44c-nsbjk"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.616025 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mggrx\" (UniqueName: \"kubernetes.io/projected/49308413-0bd0-4aef-8d1b-451b077e6996-kube-api-access-mggrx\") pod \"control-plane-machine-set-operator-78cbb6b69f-glzrp\" (UID: \"49308413-0bd0-4aef-8d1b-451b077e6996\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-glzrp"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.616047 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/df2319dd-b85c-4542-bf25-8233ecda9d78-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-zkvs9\" (UID: \"df2319dd-b85c-4542-bf25-8233ecda9d78\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zkvs9"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.616067 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/536efe5c-a55e-48a2-920e-cdb34a2bce57-config\") pod \"machine-approver-56656f9798-7pzqp\" (UID: \"536efe5c-a55e-48a2-920e-cdb34a2bce57\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7pzqp"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.616085 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22vp2\" (UniqueName: \"kubernetes.io/projected/4ab51603-514f-4ade-8bf2-6281d27a579f-kube-api-access-22vp2\") pod \"openshift-config-operator-7777fb866f-mfx26\" (UID: \"4ab51603-514f-4ade-8bf2-6281d27a579f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mfx26"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.616105 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1-config-volume\") pod \"collect-profiles-29538075-ts824\" (UID: \"b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538075-ts824"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.616129 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/34853f18-08de-4bb6-8fc1-9ae1d51b314a-trusted-ca\") pod \"console-operator-58897d9998-fq58q\" (UID: \"34853f18-08de-4bb6-8fc1-9ae1d51b314a\") " pod="openshift-console-operator/console-operator-58897d9998-fq58q"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.616153 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49d0a669-bb05-4da5-9e58-789b58c0797b-audit-policies\") pod \"oauth-openshift-558db77b4-84hkx\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " pod="openshift-authentication/oauth-openshift-558db77b4-84hkx"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.616171 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a147d2f-de25-4ba1-8858-392c56b60a20-trusted-ca-bundle\") pod \"apiserver-76f77b778f-t87x8\" (UID: \"9a147d2f-de25-4ba1-8858-392c56b60a20\") " pod="openshift-apiserver/apiserver-76f77b778f-t87x8"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.616193 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c66153dc-f2e3-4798-876c-da6826dea18c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-g6wd9\" (UID: \"c66153dc-f2e3-4798-876c-da6826dea18c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g6wd9"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.616211 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/32420c77-c3bf-489a-b622-a912ea4c983c-srv-cert\") pod \"catalog-operator-68c6474976-blf86\" (UID: \"32420c77-c3bf-489a-b622-a912ea4c983c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-blf86"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.616230 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c66153dc-f2e3-4798-876c-da6826dea18c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-g6wd9\" (UID: \"c66153dc-f2e3-4798-876c-da6826dea18c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g6wd9"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.616249 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6a6fd805-dce5-4ee6-82e0-9fce53deed7f-proxy-tls\") pod \"machine-config-operator-74547568cd-ccsdd\" (UID: \"6a6fd805-dce5-4ee6-82e0-9fce53deed7f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ccsdd"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.616471 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61f10600-21dd-4043-af69-aa0fdfd246f7-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-95h9j\" (UID: \"61f10600-21dd-4043-af69-aa0fdfd246f7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-95h9j"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.616501 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f6a4bf66-c081-492e-aa28-f9245e7ffe3c-bound-sa-token\") pod \"ingress-operator-5b745b69d9-nx8lb\" (UID: \"f6a4bf66-c081-492e-aa28-f9245e7ffe3c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nx8lb"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.616530 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-84hkx\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " pod="openshift-authentication/oauth-openshift-558db77b4-84hkx"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.616549 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-84hkx\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " pod="openshift-authentication/oauth-openshift-558db77b4-84hkx"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.616570 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f6a4bf66-c081-492e-aa28-f9245e7ffe3c-metrics-tls\") pod \"ingress-operator-5b745b69d9-nx8lb\" (UID: \"f6a4bf66-c081-492e-aa28-f9245e7ffe3c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nx8lb"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.616622 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d1d3880-3411-45f5-8835-a4db59c38cfe-webhook-cert\") pod \"packageserver-d55dfcdfc-6st7l\" (UID: \"3d1d3880-3411-45f5-8835-a4db59c38cfe\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6st7l"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.616645 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-84hkx\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " pod="openshift-authentication/oauth-openshift-558db77b4-84hkx"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.616664 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/53e254f6-444a-4fd6-8bda-5af18b9d347c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-l7m8v\" (UID: \"53e254f6-444a-4fd6-8bda-5af18b9d347c\") " pod="openshift-marketplace/marketplace-operator-79b997595-l7m8v"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.616686 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34853f18-08de-4bb6-8fc1-9ae1d51b314a-config\") pod \"console-operator-58897d9998-fq58q\" (UID: \"34853f18-08de-4bb6-8fc1-9ae1d51b314a\") " pod="openshift-console-operator/console-operator-58897d9998-fq58q"
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.616709 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/26da04fb-0109-4f7f-a283-f489e9b4596f-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-k2ztc\" (UID: \"26da04fb-0109-4f7f-a283-f489e9b4596f\") "
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2ztc" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.616871 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/536efe5c-a55e-48a2-920e-cdb34a2bce57-auth-proxy-config\") pod \"machine-approver-56656f9798-7pzqp\" (UID: \"536efe5c-a55e-48a2-920e-cdb34a2bce57\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7pzqp" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.616931 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmhmt\" (UniqueName: \"kubernetes.io/projected/13ffe813-5e11-41e0-9426-8771f8b2ce0b-kube-api-access-bmhmt\") pod \"migrator-59844c95c7-bxsf4\" (UID: \"13ffe813-5e11-41e0-9426-8771f8b2ce0b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bxsf4" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.616960 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-84hkx\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.616980 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6l6f\" (UniqueName: \"kubernetes.io/projected/ae9a4771-065d-4f75-8d15-0ea8525cbaf4-kube-api-access-h6l6f\") pod \"route-controller-manager-6576b87f9c-jwnlh\" (UID: \"ae9a4771-065d-4f75-8d15-0ea8525cbaf4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jwnlh" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.617005 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w28d\" (UniqueName: \"kubernetes.io/projected/b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1-kube-api-access-5w28d\") pod \"collect-profiles-29538075-ts824\" (UID: \"b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538075-ts824" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.617028 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt6l6\" (UniqueName: \"kubernetes.io/projected/52ee48cc-65ac-4228-821c-f9c70d249ebf-kube-api-access-nt6l6\") pod \"openshift-controller-manager-operator-756b6f6bc6-fb8mb\" (UID: \"52ee48cc-65ac-4228-821c-f9c70d249ebf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fb8mb" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.617050 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26da04fb-0109-4f7f-a283-f489e9b4596f-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-k2ztc\" (UID: \"26da04fb-0109-4f7f-a283-f489e9b4596f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2ztc" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.617224 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/26da04fb-0109-4f7f-a283-f489e9b4596f-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-k2ztc\" (UID: \"26da04fb-0109-4f7f-a283-f489e9b4596f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2ztc" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.617529 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26da04fb-0109-4f7f-a283-f489e9b4596f-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-k2ztc\" (UID: \"26da04fb-0109-4f7f-a283-f489e9b4596f\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2ztc" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.617069 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ac747027-4a87-46fc-87e2-fca7e049f863-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jbqv4\" (UID: \"ac747027-4a87-46fc-87e2-fca7e049f863\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jbqv4" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.617584 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmgdw\" (UniqueName: \"kubernetes.io/projected/9a147d2f-de25-4ba1-8858-392c56b60a20-kube-api-access-zmgdw\") pod \"apiserver-76f77b778f-t87x8\" (UID: \"9a147d2f-de25-4ba1-8858-392c56b60a20\") " pod="openshift-apiserver/apiserver-76f77b778f-t87x8" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.617607 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjb46\" (UniqueName: \"kubernetes.io/projected/df2319dd-b85c-4542-bf25-8233ecda9d78-kube-api-access-zjb46\") pod \"machine-api-operator-5694c8668f-zkvs9\" (UID: \"df2319dd-b85c-4542-bf25-8233ecda9d78\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zkvs9" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.617631 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/26da04fb-0109-4f7f-a283-f489e9b4596f-audit-dir\") pod \"apiserver-7bbb656c7d-k2ztc\" (UID: \"26da04fb-0109-4f7f-a283-f489e9b4596f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2ztc" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.617650 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/10286867-4aba-45e5-a1f3-40494acb8cde-config\") pod \"kube-apiserver-operator-766d6c64bb-2f58z\" (UID: \"10286867-4aba-45e5-a1f3-40494acb8cde\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2f58z" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.617670 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2228\" (UniqueName: \"kubernetes.io/projected/f6a4bf66-c081-492e-aa28-f9245e7ffe3c-kube-api-access-r2228\") pod \"ingress-operator-5b745b69d9-nx8lb\" (UID: \"f6a4bf66-c081-492e-aa28-f9245e7ffe3c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nx8lb" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.617691 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/10286867-4aba-45e5-a1f3-40494acb8cde-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-2f58z\" (UID: \"10286867-4aba-45e5-a1f3-40494acb8cde\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2f58z" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.617744 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/26da04fb-0109-4f7f-a283-f489e9b4596f-audit-dir\") pod \"apiserver-7bbb656c7d-k2ztc\" (UID: \"26da04fb-0109-4f7f-a283-f489e9b4596f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2ztc" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.617782 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a9c0df9-c002-43ec-bc67-dee3c0862056-config\") pod \"etcd-operator-b45778765-25vlq\" (UID: \"9a9c0df9-c002-43ec-bc67-dee3c0862056\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25vlq" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 
13:20:11.617811 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrwqb\" (UniqueName: \"kubernetes.io/projected/3d1d3880-3411-45f5-8835-a4db59c38cfe-kube-api-access-wrwqb\") pod \"packageserver-d55dfcdfc-6st7l\" (UID: \"3d1d3880-3411-45f5-8835-a4db59c38cfe\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6st7l" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.617832 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-84hkx\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.617859 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/342631a0-9c4d-4e4f-9743-4d13ea740a55-default-certificate\") pod \"router-default-5444994796-5fwp4\" (UID: \"342631a0-9c4d-4e4f-9743-4d13ea740a55\") " pod="openshift-ingress/router-default-5444994796-5fwp4" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.617920 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce549223-07dd-40b8-b988-7a49ed1a94e5-config\") pod \"kube-controller-manager-operator-78b949d7b-wgmqx\" (UID: \"ce549223-07dd-40b8-b988-7a49ed1a94e5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wgmqx" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.617940 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1-secret-volume\") pod \"collect-profiles-29538075-ts824\" (UID: \"b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538075-ts824" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.617959 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3d1d3880-3411-45f5-8835-a4db59c38cfe-tmpfs\") pod \"packageserver-d55dfcdfc-6st7l\" (UID: \"3d1d3880-3411-45f5-8835-a4db59c38cfe\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6st7l" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.617980 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/536efe5c-a55e-48a2-920e-cdb34a2bce57-machine-approver-tls\") pod \"machine-approver-56656f9798-7pzqp\" (UID: \"536efe5c-a55e-48a2-920e-cdb34a2bce57\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7pzqp" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.617997 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e1c97b9-64a9-4e15-947f-16a7d1dd4271-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gzgt9\" (UID: \"6e1c97b9-64a9-4e15-947f-16a7d1dd4271\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gzgt9" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.618017 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/26da04fb-0109-4f7f-a283-f489e9b4596f-etcd-client\") pod \"apiserver-7bbb656c7d-k2ztc\" (UID: \"26da04fb-0109-4f7f-a283-f489e9b4596f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2ztc" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.619179 4897 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61f10600-21dd-4043-af69-aa0fdfd246f7-config\") pod \"controller-manager-879f6c89f-95h9j\" (UID: \"61f10600-21dd-4043-af69-aa0fdfd246f7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-95h9j" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.620478 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e1c97b9-64a9-4e15-947f-16a7d1dd4271-service-ca-bundle\") pod \"authentication-operator-69f744f599-gzgt9\" (UID: \"6e1c97b9-64a9-4e15-947f-16a7d1dd4271\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gzgt9" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.621096 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/536efe5c-a55e-48a2-920e-cdb34a2bce57-config\") pod \"machine-approver-56656f9798-7pzqp\" (UID: \"536efe5c-a55e-48a2-920e-cdb34a2bce57\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7pzqp" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.621951 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/34853f18-08de-4bb6-8fc1-9ae1d51b314a-trusted-ca\") pod \"console-operator-58897d9998-fq58q\" (UID: \"34853f18-08de-4bb6-8fc1-9ae1d51b314a\") " pod="openshift-console-operator/console-operator-58897d9998-fq58q" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.622541 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e1c97b9-64a9-4e15-947f-16a7d1dd4271-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gzgt9\" (UID: \"6e1c97b9-64a9-4e15-947f-16a7d1dd4271\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gzgt9" 
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.622596 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjk5p\" (UniqueName: \"kubernetes.io/projected/6a6fd805-dce5-4ee6-82e0-9fce53deed7f-kube-api-access-rjk5p\") pod \"machine-config-operator-74547568cd-ccsdd\" (UID: \"6a6fd805-dce5-4ee6-82e0-9fce53deed7f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ccsdd" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.622624 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/260888c5-6c67-47a8-9903-a25a8a1c6b7d-metrics-tls\") pod \"dns-operator-744455d44c-nsbjk\" (UID: \"260888c5-6c67-47a8-9903-a25a8a1c6b7d\") " pod="openshift-dns-operator/dns-operator-744455d44c-nsbjk" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.622643 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f58g\" (UniqueName: \"kubernetes.io/projected/53e254f6-444a-4fd6-8bda-5af18b9d347c-kube-api-access-6f58g\") pod \"marketplace-operator-79b997595-l7m8v\" (UID: \"53e254f6-444a-4fd6-8bda-5af18b9d347c\") " pod="openshift-marketplace/marketplace-operator-79b997595-l7m8v" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.622663 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9a147d2f-de25-4ba1-8858-392c56b60a20-etcd-serving-ca\") pod \"apiserver-76f77b778f-t87x8\" (UID: \"9a147d2f-de25-4ba1-8858-392c56b60a20\") " pod="openshift-apiserver/apiserver-76f77b778f-t87x8" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.622682 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/3d1d3880-3411-45f5-8835-a4db59c38cfe-apiservice-cert\") pod \"packageserver-d55dfcdfc-6st7l\" (UID: \"3d1d3880-3411-45f5-8835-a4db59c38cfe\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6st7l" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.622703 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4ab51603-514f-4ade-8bf2-6281d27a579f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-mfx26\" (UID: \"4ab51603-514f-4ade-8bf2-6281d27a579f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mfx26" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.622720 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df2319dd-b85c-4542-bf25-8233ecda9d78-config\") pod \"machine-api-operator-5694c8668f-zkvs9\" (UID: \"df2319dd-b85c-4542-bf25-8233ecda9d78\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zkvs9" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.622737 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce549223-07dd-40b8-b988-7a49ed1a94e5-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-wgmqx\" (UID: \"ce549223-07dd-40b8-b988-7a49ed1a94e5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wgmqx" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.622757 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce549223-07dd-40b8-b988-7a49ed1a94e5-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-wgmqx\" (UID: \"ce549223-07dd-40b8-b988-7a49ed1a94e5\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wgmqx" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.622776 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f6a4bf66-c081-492e-aa28-f9245e7ffe3c-trusted-ca\") pod \"ingress-operator-5b745b69d9-nx8lb\" (UID: \"f6a4bf66-c081-492e-aa28-f9245e7ffe3c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nx8lb" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.622796 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61f10600-21dd-4043-af69-aa0fdfd246f7-client-ca\") pod \"controller-manager-879f6c89f-95h9j\" (UID: \"61f10600-21dd-4043-af69-aa0fdfd246f7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-95h9j" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.622811 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sldxp\" (UniqueName: \"kubernetes.io/projected/61f10600-21dd-4043-af69-aa0fdfd246f7-kube-api-access-sldxp\") pod \"controller-manager-879f6c89f-95h9j\" (UID: \"61f10600-21dd-4043-af69-aa0fdfd246f7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-95h9j" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.622833 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c97c55d8-5260-43bc-aaf7-e217a748b83f-signing-key\") pod \"service-ca-9c57cc56f-7wzmt\" (UID: \"c97c55d8-5260-43bc-aaf7-e217a748b83f\") " pod="openshift-service-ca/service-ca-9c57cc56f-7wzmt" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.622853 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/c97c55d8-5260-43bc-aaf7-e217a748b83f-signing-cabundle\") pod \"service-ca-9c57cc56f-7wzmt\" (UID: \"c97c55d8-5260-43bc-aaf7-e217a748b83f\") " pod="openshift-service-ca/service-ca-9c57cc56f-7wzmt" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.622873 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34853f18-08de-4bb6-8fc1-9ae1d51b314a-serving-cert\") pod \"console-operator-58897d9998-fq58q\" (UID: \"34853f18-08de-4bb6-8fc1-9ae1d51b314a\") " pod="openshift-console-operator/console-operator-58897d9998-fq58q" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.622894 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-84hkx\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.622910 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9a9c0df9-c002-43ec-bc67-dee3c0862056-etcd-ca\") pod \"etcd-operator-b45778765-25vlq\" (UID: \"9a9c0df9-c002-43ec-bc67-dee3c0862056\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25vlq" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.622931 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3423cf07-c57b-41f3-82da-f497649699db-oauth-serving-cert\") pod \"console-f9d7485db-rd9tl\" (UID: \"3423cf07-c57b-41f3-82da-f497649699db\") " pod="openshift-console/console-f9d7485db-rd9tl" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.622950 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6a6fd805-dce5-4ee6-82e0-9fce53deed7f-images\") pod \"machine-config-operator-74547568cd-ccsdd\" (UID: \"6a6fd805-dce5-4ee6-82e0-9fce53deed7f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ccsdd" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.622970 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a81ddd0f-39bc-4645-94b0-38869e4afba3-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-87qp9\" (UID: \"a81ddd0f-39bc-4645-94b0-38869e4afba3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-87qp9" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.622987 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a81ddd0f-39bc-4645-94b0-38869e4afba3-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-87qp9\" (UID: \"a81ddd0f-39bc-4645-94b0-38869e4afba3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-87qp9" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.622997 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c66153dc-f2e3-4798-876c-da6826dea18c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-g6wd9\" (UID: \"c66153dc-f2e3-4798-876c-da6826dea18c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g6wd9" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.623009 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/26da04fb-0109-4f7f-a283-f489e9b4596f-encryption-config\") pod 
\"apiserver-7bbb656c7d-k2ztc\" (UID: \"26da04fb-0109-4f7f-a283-f489e9b4596f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2ztc" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.623045 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9a147d2f-de25-4ba1-8858-392c56b60a20-node-pullsecrets\") pod \"apiserver-76f77b778f-t87x8\" (UID: \"9a147d2f-de25-4ba1-8858-392c56b60a20\") " pod="openshift-apiserver/apiserver-76f77b778f-t87x8" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.623068 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ac747027-4a87-46fc-87e2-fca7e049f863-srv-cert\") pod \"olm-operator-6b444d44fb-jbqv4\" (UID: \"ac747027-4a87-46fc-87e2-fca7e049f863\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jbqv4" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.623095 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzdr2\" (UniqueName: \"kubernetes.io/projected/4f2510cb-e89f-49a0-b5cd-aca1a5c51178-kube-api-access-nzdr2\") pod \"downloads-7954f5f757-fv2rz\" (UID: \"4f2510cb-e89f-49a0-b5cd-aca1a5c51178\") " pod="openshift-console/downloads-7954f5f757-fv2rz" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.623118 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-84hkx\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.623135 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-ktz7d\" (UniqueName: \"kubernetes.io/projected/5a185ef3-d57d-4925-b2d6-6de53cf0d0f2-kube-api-access-ktz7d\") pod \"machine-config-controller-84d6567774-xfrkv\" (UID: \"5a185ef3-d57d-4925-b2d6-6de53cf0d0f2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xfrkv" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.623153 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49d0a669-bb05-4da5-9e58-789b58c0797b-audit-dir\") pod \"oauth-openshift-558db77b4-84hkx\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.623173 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrr8r\" (UniqueName: \"kubernetes.io/projected/9a9c0df9-c002-43ec-bc67-dee3c0862056-kube-api-access-lrr8r\") pod \"etcd-operator-b45778765-25vlq\" (UID: \"9a9c0df9-c002-43ec-bc67-dee3c0862056\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25vlq" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.623193 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9n2z\" (UniqueName: \"kubernetes.io/projected/c66153dc-f2e3-4798-876c-da6826dea18c-kube-api-access-r9n2z\") pod \"cluster-image-registry-operator-dc59b4c8b-g6wd9\" (UID: \"c66153dc-f2e3-4798-876c-da6826dea18c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g6wd9" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.623213 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5qlz\" (UniqueName: \"kubernetes.io/projected/26da04fb-0109-4f7f-a283-f489e9b4596f-kube-api-access-k5qlz\") pod \"apiserver-7bbb656c7d-k2ztc\" (UID: \"26da04fb-0109-4f7f-a283-f489e9b4596f\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2ztc" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.623229 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a147d2f-de25-4ba1-8858-392c56b60a20-config\") pod \"apiserver-76f77b778f-t87x8\" (UID: \"9a147d2f-de25-4ba1-8858-392c56b60a20\") " pod="openshift-apiserver/apiserver-76f77b778f-t87x8" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.623250 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52ee48cc-65ac-4228-821c-f9c70d249ebf-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-fb8mb\" (UID: \"52ee48cc-65ac-4228-821c-f9c70d249ebf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fb8mb" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.623270 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7mkr\" (UniqueName: \"kubernetes.io/projected/6e1c97b9-64a9-4e15-947f-16a7d1dd4271-kube-api-access-g7mkr\") pod \"authentication-operator-69f744f599-gzgt9\" (UID: \"6e1c97b9-64a9-4e15-947f-16a7d1dd4271\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gzgt9" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.623293 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/df2319dd-b85c-4542-bf25-8233ecda9d78-images\") pod \"machine-api-operator-5694c8668f-zkvs9\" (UID: \"df2319dd-b85c-4542-bf25-8233ecda9d78\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zkvs9" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.623322 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/9a9c0df9-c002-43ec-bc67-dee3c0862056-etcd-client\") pod \"etcd-operator-b45778765-25vlq\" (UID: \"9a9c0df9-c002-43ec-bc67-dee3c0862056\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25vlq" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.623344 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-84hkx\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.623366 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2255cdfc-6996-4567-ba4d-b1b609f1264c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l5dbb\" (UID: \"2255cdfc-6996-4567-ba4d-b1b609f1264c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l5dbb" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.623386 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a9c0df9-c002-43ec-bc67-dee3c0862056-serving-cert\") pod \"etcd-operator-b45778765-25vlq\" (UID: \"9a9c0df9-c002-43ec-bc67-dee3c0862056\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25vlq" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.624651 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3423cf07-c57b-41f3-82da-f497649699db-oauth-serving-cert\") pod \"console-f9d7485db-rd9tl\" (UID: \"3423cf07-c57b-41f3-82da-f497649699db\") " pod="openshift-console/console-f9d7485db-rd9tl" Feb 28 13:20:11 crc 
kubenswrapper[4897]: I0228 13:20:11.625200 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/26da04fb-0109-4f7f-a283-f489e9b4596f-etcd-client\") pod \"apiserver-7bbb656c7d-k2ztc\" (UID: \"26da04fb-0109-4f7f-a283-f489e9b4596f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2ztc" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.625269 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6a6fd805-dce5-4ee6-82e0-9fce53deed7f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ccsdd\" (UID: \"6a6fd805-dce5-4ee6-82e0-9fce53deed7f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ccsdd" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.625326 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-84hkx\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.625357 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e1c97b9-64a9-4e15-947f-16a7d1dd4271-serving-cert\") pod \"authentication-operator-69f744f599-gzgt9\" (UID: \"6e1c97b9-64a9-4e15-947f-16a7d1dd4271\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gzgt9" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.625384 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmqnk\" (UniqueName: \"kubernetes.io/projected/49d0a669-bb05-4da5-9e58-789b58c0797b-kube-api-access-gmqnk\") 
pod \"oauth-openshift-558db77b4-84hkx\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.625403 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10286867-4aba-45e5-a1f3-40494acb8cde-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-2f58z\" (UID: \"10286867-4aba-45e5-a1f3-40494acb8cde\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2f58z" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.625427 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3423cf07-c57b-41f3-82da-f497649699db-trusted-ca-bundle\") pod \"console-f9d7485db-rd9tl\" (UID: \"3423cf07-c57b-41f3-82da-f497649699db\") " pod="openshift-console/console-f9d7485db-rd9tl" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.625451 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8bqs\" (UniqueName: \"kubernetes.io/projected/ac747027-4a87-46fc-87e2-fca7e049f863-kube-api-access-r8bqs\") pod \"olm-operator-6b444d44fb-jbqv4\" (UID: \"ac747027-4a87-46fc-87e2-fca7e049f863\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jbqv4" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.625470 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c66153dc-f2e3-4798-876c-da6826dea18c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-g6wd9\" (UID: \"c66153dc-f2e3-4798-876c-da6826dea18c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g6wd9" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.625489 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e1c97b9-64a9-4e15-947f-16a7d1dd4271-config\") pod \"authentication-operator-69f744f599-gzgt9\" (UID: \"6e1c97b9-64a9-4e15-947f-16a7d1dd4271\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gzgt9" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.625510 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9a147d2f-de25-4ba1-8858-392c56b60a20-image-import-ca\") pod \"apiserver-76f77b778f-t87x8\" (UID: \"9a147d2f-de25-4ba1-8858-392c56b60a20\") " pod="openshift-apiserver/apiserver-76f77b778f-t87x8" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.625529 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/342631a0-9c4d-4e4f-9743-4d13ea740a55-service-ca-bundle\") pod \"router-default-5444994796-5fwp4\" (UID: \"342631a0-9c4d-4e4f-9743-4d13ea740a55\") " pod="openshift-ingress/router-default-5444994796-5fwp4" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.626679 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e1c97b9-64a9-4e15-947f-16a7d1dd4271-config\") pod \"authentication-operator-69f744f599-gzgt9\" (UID: \"6e1c97b9-64a9-4e15-947f-16a7d1dd4271\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gzgt9" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.627060 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6a6fd805-dce5-4ee6-82e0-9fce53deed7f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ccsdd\" (UID: \"6a6fd805-dce5-4ee6-82e0-9fce53deed7f\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ccsdd" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.627176 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34853f18-08de-4bb6-8fc1-9ae1d51b314a-serving-cert\") pod \"console-operator-58897d9998-fq58q\" (UID: \"34853f18-08de-4bb6-8fc1-9ae1d51b314a\") " pod="openshift-console-operator/console-operator-58897d9998-fq58q" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.627860 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52ee48cc-65ac-4228-821c-f9c70d249ebf-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-fb8mb\" (UID: \"52ee48cc-65ac-4228-821c-f9c70d249ebf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fb8mb" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.627916 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9a147d2f-de25-4ba1-8858-392c56b60a20-etcd-client\") pod \"apiserver-76f77b778f-t87x8\" (UID: \"9a147d2f-de25-4ba1-8858-392c56b60a20\") " pod="openshift-apiserver/apiserver-76f77b778f-t87x8" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.627955 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26da04fb-0109-4f7f-a283-f489e9b4596f-serving-cert\") pod \"apiserver-7bbb656c7d-k2ztc\" (UID: \"26da04fb-0109-4f7f-a283-f489e9b4596f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2ztc" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.627958 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/9a147d2f-de25-4ba1-8858-392c56b60a20-encryption-config\") pod \"apiserver-76f77b778f-t87x8\" (UID: \"9a147d2f-de25-4ba1-8858-392c56b60a20\") " pod="openshift-apiserver/apiserver-76f77b778f-t87x8" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.628343 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e1c97b9-64a9-4e15-947f-16a7d1dd4271-serving-cert\") pod \"authentication-operator-69f744f599-gzgt9\" (UID: \"6e1c97b9-64a9-4e15-947f-16a7d1dd4271\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gzgt9" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.628572 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61f10600-21dd-4043-af69-aa0fdfd246f7-client-ca\") pod \"controller-manager-879f6c89f-95h9j\" (UID: \"61f10600-21dd-4043-af69-aa0fdfd246f7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-95h9j" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.628817 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3423cf07-c57b-41f3-82da-f497649699db-console-serving-cert\") pod \"console-f9d7485db-rd9tl\" (UID: \"3423cf07-c57b-41f3-82da-f497649699db\") " pod="openshift-console/console-f9d7485db-rd9tl" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.628965 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3423cf07-c57b-41f3-82da-f497649699db-console-oauth-config\") pod \"console-f9d7485db-rd9tl\" (UID: \"3423cf07-c57b-41f3-82da-f497649699db\") " pod="openshift-console/console-f9d7485db-rd9tl" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.629425 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/34853f18-08de-4bb6-8fc1-9ae1d51b314a-config\") pod \"console-operator-58897d9998-fq58q\" (UID: \"34853f18-08de-4bb6-8fc1-9ae1d51b314a\") " pod="openshift-console-operator/console-operator-58897d9998-fq58q" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.629450 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae9a4771-065d-4f75-8d15-0ea8525cbaf4-config\") pod \"route-controller-manager-6576b87f9c-jwnlh\" (UID: \"ae9a4771-065d-4f75-8d15-0ea8525cbaf4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jwnlh" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.629513 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/53e254f6-444a-4fd6-8bda-5af18b9d347c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-l7m8v\" (UID: \"53e254f6-444a-4fd6-8bda-5af18b9d347c\") " pod="openshift-marketplace/marketplace-operator-79b997595-l7m8v" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.629618 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c66153dc-f2e3-4798-876c-da6826dea18c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-g6wd9\" (UID: \"c66153dc-f2e3-4798-876c-da6826dea18c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g6wd9" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.629731 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6a6fd805-dce5-4ee6-82e0-9fce53deed7f-proxy-tls\") pod \"machine-config-operator-74547568cd-ccsdd\" (UID: \"6a6fd805-dce5-4ee6-82e0-9fce53deed7f\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ccsdd" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.629872 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6a6fd805-dce5-4ee6-82e0-9fce53deed7f-images\") pod \"machine-config-operator-74547568cd-ccsdd\" (UID: \"6a6fd805-dce5-4ee6-82e0-9fce53deed7f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ccsdd" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.629908 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ae9a4771-065d-4f75-8d15-0ea8525cbaf4-client-ca\") pod \"route-controller-manager-6576b87f9c-jwnlh\" (UID: \"ae9a4771-065d-4f75-8d15-0ea8525cbaf4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jwnlh" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.629932 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3423cf07-c57b-41f3-82da-f497649699db-console-config\") pod \"console-f9d7485db-rd9tl\" (UID: \"3423cf07-c57b-41f3-82da-f497649699db\") " pod="openshift-console/console-f9d7485db-rd9tl" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.629952 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntszw\" (UniqueName: \"kubernetes.io/projected/34853f18-08de-4bb6-8fc1-9ae1d51b314a-kube-api-access-ntszw\") pod \"console-operator-58897d9998-fq58q\" (UID: \"34853f18-08de-4bb6-8fc1-9ae1d51b314a\") " pod="openshift-console-operator/console-operator-58897d9998-fq58q" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.629956 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/3423cf07-c57b-41f3-82da-f497649699db-trusted-ca-bundle\") pod \"console-f9d7485db-rd9tl\" (UID: \"3423cf07-c57b-41f3-82da-f497649699db\") " pod="openshift-console/console-f9d7485db-rd9tl" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.630007 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/26da04fb-0109-4f7f-a283-f489e9b4596f-audit-policies\") pod \"apiserver-7bbb656c7d-k2ztc\" (UID: \"26da04fb-0109-4f7f-a283-f489e9b4596f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2ztc" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.630033 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a147d2f-de25-4ba1-8858-392c56b60a20-serving-cert\") pod \"apiserver-76f77b778f-t87x8\" (UID: \"9a147d2f-de25-4ba1-8858-392c56b60a20\") " pod="openshift-apiserver/apiserver-76f77b778f-t87x8" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.630579 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3423cf07-c57b-41f3-82da-f497649699db-console-config\") pod \"console-f9d7485db-rd9tl\" (UID: \"3423cf07-c57b-41f3-82da-f497649699db\") " pod="openshift-console/console-f9d7485db-rd9tl" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.630814 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/26da04fb-0109-4f7f-a283-f489e9b4596f-encryption-config\") pod \"apiserver-7bbb656c7d-k2ztc\" (UID: \"26da04fb-0109-4f7f-a283-f489e9b4596f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2ztc" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.630982 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/26da04fb-0109-4f7f-a283-f489e9b4596f-audit-policies\") pod \"apiserver-7bbb656c7d-k2ztc\" (UID: \"26da04fb-0109-4f7f-a283-f489e9b4596f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2ztc" Feb 28 13:20:11 crc kubenswrapper[4897]: W0228 13:20:11.633628 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-944b78d589f57d81a10cdc51f694f2ed743041fce1396e3c1aaa05929813a401 WatchSource:0}: Error finding container 944b78d589f57d81a10cdc51f694f2ed743041fce1396e3c1aaa05929813a401: Status 404 returned error can't find the container with id 944b78d589f57d81a10cdc51f694f2ed743041fce1396e3c1aaa05929813a401 Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.634336 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.634348 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61f10600-21dd-4043-af69-aa0fdfd246f7-serving-cert\") pod \"controller-manager-879f6c89f-95h9j\" (UID: \"61f10600-21dd-4043-af69-aa0fdfd246f7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-95h9j" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.634873 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/536efe5c-a55e-48a2-920e-cdb34a2bce57-machine-approver-tls\") pod \"machine-approver-56656f9798-7pzqp\" (UID: \"536efe5c-a55e-48a2-920e-cdb34a2bce57\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7pzqp" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.636783 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g6wd9"] Feb 28 13:20:11 crc 
kubenswrapper[4897]: I0228 13:20:11.646556 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-6l67l"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.647436 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-6l67l" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.657415 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.660975 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-l7m8v"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.663063 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jbqv4"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.666883 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-xfrkv"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.668471 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538080-qcrrw"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.668637 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.672459 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29538075-ts824"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.672576 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-glzrp"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.673704 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ccsdd"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.675416 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538078-hj8mj"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.677272 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-84hkx"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.680078 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-mfx26"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.680853 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-blf86"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.683402 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wgmqx"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.683435 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6st7l"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.684420 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-rd9tl"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.684973 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-nsbjk"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.685990 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-25vlq"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.686975 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-nx8lb"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 
13:20:11.688283 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vk9dl"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.688545 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.689749 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bxsf4"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.690856 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2f58z"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.691981 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-z4hrw"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.693101 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l5dbb"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.695228 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-t87x8"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.696467 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-td8r5"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.697280 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-6l67l"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.698261 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jwnlh"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.699174 4897 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-ingress-canary/ingress-canary-5xn24"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.699877 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5xn24" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.700185 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-kkdfw"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.700987 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kkdfw" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.701178 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-87qp9"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.703176 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-k72ms"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.704168 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7wzmt"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.705175 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rr6bc"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.706182 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lb9zj"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.707174 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5xn24"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.708042 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 28 13:20:11 crc 
kubenswrapper[4897]: I0228 13:20:11.709199 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-zkvs9"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.711380 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-q9cdm"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.712283 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-q9cdm" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.712384 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-q9cdm"] Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.728468 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.730838 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/53e254f6-444a-4fd6-8bda-5af18b9d347c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-l7m8v\" (UID: \"53e254f6-444a-4fd6-8bda-5af18b9d347c\") " pod="openshift-marketplace/marketplace-operator-79b997595-l7m8v" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.730868 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ae9a4771-065d-4f75-8d15-0ea8525cbaf4-client-ca\") pod \"route-controller-manager-6576b87f9c-jwnlh\" (UID: \"ae9a4771-065d-4f75-8d15-0ea8525cbaf4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jwnlh" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.730891 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9a147d2f-de25-4ba1-8858-392c56b60a20-serving-cert\") pod \"apiserver-76f77b778f-t87x8\" (UID: \"9a147d2f-de25-4ba1-8858-392c56b60a20\") " pod="openshift-apiserver/apiserver-76f77b778f-t87x8" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.730917 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2255cdfc-6996-4567-ba4d-b1b609f1264c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l5dbb\" (UID: \"2255cdfc-6996-4567-ba4d-b1b609f1264c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l5dbb" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.730933 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2255cdfc-6996-4567-ba4d-b1b609f1264c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l5dbb\" (UID: \"2255cdfc-6996-4567-ba4d-b1b609f1264c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l5dbb" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.730954 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5a185ef3-d57d-4925-b2d6-6de53cf0d0f2-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-xfrkv\" (UID: \"5a185ef3-d57d-4925-b2d6-6de53cf0d0f2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xfrkv" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.730975 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/342631a0-9c4d-4e4f-9743-4d13ea740a55-metrics-certs\") pod \"router-default-5444994796-5fwp4\" (UID: \"342631a0-9c4d-4e4f-9743-4d13ea740a55\") " pod="openshift-ingress/router-default-5444994796-5fwp4" Feb 28 13:20:11 crc 
kubenswrapper[4897]: I0228 13:20:11.730990 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ab51603-514f-4ade-8bf2-6281d27a579f-serving-cert\") pod \"openshift-config-operator-7777fb866f-mfx26\" (UID: \"4ab51603-514f-4ade-8bf2-6281d27a579f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mfx26" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731006 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbg5d\" (UniqueName: \"kubernetes.io/projected/c97c55d8-5260-43bc-aaf7-e217a748b83f-kube-api-access-cbg5d\") pod \"service-ca-9c57cc56f-7wzmt\" (UID: \"c97c55d8-5260-43bc-aaf7-e217a748b83f\") " pod="openshift-service-ca/service-ca-9c57cc56f-7wzmt" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731024 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9a147d2f-de25-4ba1-8858-392c56b60a20-audit-dir\") pod \"apiserver-76f77b778f-t87x8\" (UID: \"9a147d2f-de25-4ba1-8858-392c56b60a20\") " pod="openshift-apiserver/apiserver-76f77b778f-t87x8" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731043 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xhgd\" (UniqueName: \"kubernetes.io/projected/32420c77-c3bf-489a-b622-a912ea4c983c-kube-api-access-5xhgd\") pod \"catalog-operator-68c6474976-blf86\" (UID: \"32420c77-c3bf-489a-b622-a912ea4c983c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-blf86" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731061 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9a147d2f-de25-4ba1-8858-392c56b60a20-audit\") pod \"apiserver-76f77b778f-t87x8\" (UID: \"9a147d2f-de25-4ba1-8858-392c56b60a20\") " 
pod="openshift-apiserver/apiserver-76f77b778f-t87x8" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731078 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-84hkx\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731094 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-84hkx\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731111 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8p57\" (UniqueName: \"kubernetes.io/projected/342631a0-9c4d-4e4f-9743-4d13ea740a55-kube-api-access-z8p57\") pod \"router-default-5444994796-5fwp4\" (UID: \"342631a0-9c4d-4e4f-9743-4d13ea740a55\") " pod="openshift-ingress/router-default-5444994796-5fwp4" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731128 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4lgd\" (UniqueName: \"kubernetes.io/projected/a81ddd0f-39bc-4645-94b0-38869e4afba3-kube-api-access-q4lgd\") pod \"kube-storage-version-migrator-operator-b67b599dd-87qp9\" (UID: \"a81ddd0f-39bc-4645-94b0-38869e4afba3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-87qp9" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731146 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/49308413-0bd0-4aef-8d1b-451b077e6996-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-glzrp\" (UID: \"49308413-0bd0-4aef-8d1b-451b077e6996\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-glzrp" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731165 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntptc\" (UniqueName: \"kubernetes.io/projected/79743a51-c0b2-45b2-99d3-385e0b2f2c6f-kube-api-access-ntptc\") pod \"auto-csr-approver-29538078-hj8mj\" (UID: \"79743a51-c0b2-45b2-99d3-385e0b2f2c6f\") " pod="openshift-infra/auto-csr-approver-29538078-hj8mj" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731183 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/32420c77-c3bf-489a-b622-a912ea4c983c-profile-collector-cert\") pod \"catalog-operator-68c6474976-blf86\" (UID: \"32420c77-c3bf-489a-b622-a912ea4c983c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-blf86" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731201 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvcsj\" (UniqueName: \"kubernetes.io/projected/4ba075a8-61d1-4147-80ea-03906930ff87-kube-api-access-tvcsj\") pod \"package-server-manager-789f6589d5-vk9dl\" (UID: \"4ba075a8-61d1-4147-80ea-03906930ff87\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vk9dl" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731218 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9xph\" (UniqueName: \"kubernetes.io/projected/a52c7385-4178-4038-93b0-5cd758958e80-kube-api-access-n9xph\") pod \"auto-csr-approver-29538080-qcrrw\" (UID: 
\"a52c7385-4178-4038-93b0-5cd758958e80\") " pod="openshift-infra/auto-csr-approver-29538080-qcrrw" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731246 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5a185ef3-d57d-4925-b2d6-6de53cf0d0f2-proxy-tls\") pod \"machine-config-controller-84d6567774-xfrkv\" (UID: \"5a185ef3-d57d-4925-b2d6-6de53cf0d0f2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xfrkv" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731263 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae9a4771-065d-4f75-8d15-0ea8525cbaf4-serving-cert\") pod \"route-controller-manager-6576b87f9c-jwnlh\" (UID: \"ae9a4771-065d-4f75-8d15-0ea8525cbaf4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jwnlh" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731285 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/342631a0-9c4d-4e4f-9743-4d13ea740a55-stats-auth\") pod \"router-default-5444994796-5fwp4\" (UID: \"342631a0-9c4d-4e4f-9743-4d13ea740a55\") " pod="openshift-ingress/router-default-5444994796-5fwp4" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731301 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4ba075a8-61d1-4147-80ea-03906930ff87-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-vk9dl\" (UID: \"4ba075a8-61d1-4147-80ea-03906930ff87\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vk9dl" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731333 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" 
(UniqueName: \"kubernetes.io/configmap/9a9c0df9-c002-43ec-bc67-dee3c0862056-etcd-service-ca\") pod \"etcd-operator-b45778765-25vlq\" (UID: \"9a9c0df9-c002-43ec-bc67-dee3c0862056\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25vlq" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731348 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drq49\" (UniqueName: \"kubernetes.io/projected/260888c5-6c67-47a8-9903-a25a8a1c6b7d-kube-api-access-drq49\") pod \"dns-operator-744455d44c-nsbjk\" (UID: \"260888c5-6c67-47a8-9903-a25a8a1c6b7d\") " pod="openshift-dns-operator/dns-operator-744455d44c-nsbjk" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731363 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mggrx\" (UniqueName: \"kubernetes.io/projected/49308413-0bd0-4aef-8d1b-451b077e6996-kube-api-access-mggrx\") pod \"control-plane-machine-set-operator-78cbb6b69f-glzrp\" (UID: \"49308413-0bd0-4aef-8d1b-451b077e6996\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-glzrp" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731379 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/df2319dd-b85c-4542-bf25-8233ecda9d78-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-zkvs9\" (UID: \"df2319dd-b85c-4542-bf25-8233ecda9d78\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zkvs9" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731399 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22vp2\" (UniqueName: \"kubernetes.io/projected/4ab51603-514f-4ade-8bf2-6281d27a579f-kube-api-access-22vp2\") pod \"openshift-config-operator-7777fb866f-mfx26\" (UID: \"4ab51603-514f-4ade-8bf2-6281d27a579f\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-mfx26" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731413 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1-config-volume\") pod \"collect-profiles-29538075-ts824\" (UID: \"b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538075-ts824" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731429 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49d0a669-bb05-4da5-9e58-789b58c0797b-audit-policies\") pod \"oauth-openshift-558db77b4-84hkx\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731445 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a147d2f-de25-4ba1-8858-392c56b60a20-trusted-ca-bundle\") pod \"apiserver-76f77b778f-t87x8\" (UID: \"9a147d2f-de25-4ba1-8858-392c56b60a20\") " pod="openshift-apiserver/apiserver-76f77b778f-t87x8" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731462 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/32420c77-c3bf-489a-b622-a912ea4c983c-srv-cert\") pod \"catalog-operator-68c6474976-blf86\" (UID: \"32420c77-c3bf-489a-b622-a912ea4c983c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-blf86" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731477 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-84hkx\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731494 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-84hkx\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731509 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f6a4bf66-c081-492e-aa28-f9245e7ffe3c-metrics-tls\") pod \"ingress-operator-5b745b69d9-nx8lb\" (UID: \"f6a4bf66-c081-492e-aa28-f9245e7ffe3c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nx8lb" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731523 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f6a4bf66-c081-492e-aa28-f9245e7ffe3c-bound-sa-token\") pod \"ingress-operator-5b745b69d9-nx8lb\" (UID: \"f6a4bf66-c081-492e-aa28-f9245e7ffe3c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nx8lb" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731539 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d1d3880-3411-45f5-8835-a4db59c38cfe-webhook-cert\") pod \"packageserver-d55dfcdfc-6st7l\" (UID: \"3d1d3880-3411-45f5-8835-a4db59c38cfe\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6st7l" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731557 4897 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-84hkx\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731594 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/53e254f6-444a-4fd6-8bda-5af18b9d347c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-l7m8v\" (UID: \"53e254f6-444a-4fd6-8bda-5af18b9d347c\") " pod="openshift-marketplace/marketplace-operator-79b997595-l7m8v" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731611 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmhmt\" (UniqueName: \"kubernetes.io/projected/13ffe813-5e11-41e0-9426-8771f8b2ce0b-kube-api-access-bmhmt\") pod \"migrator-59844c95c7-bxsf4\" (UID: \"13ffe813-5e11-41e0-9426-8771f8b2ce0b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bxsf4" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731629 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-84hkx\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731645 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6l6f\" (UniqueName: \"kubernetes.io/projected/ae9a4771-065d-4f75-8d15-0ea8525cbaf4-kube-api-access-h6l6f\") pod 
\"route-controller-manager-6576b87f9c-jwnlh\" (UID: \"ae9a4771-065d-4f75-8d15-0ea8525cbaf4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jwnlh" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731662 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w28d\" (UniqueName: \"kubernetes.io/projected/b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1-kube-api-access-5w28d\") pod \"collect-profiles-29538075-ts824\" (UID: \"b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538075-ts824" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731686 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ac747027-4a87-46fc-87e2-fca7e049f863-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jbqv4\" (UID: \"ac747027-4a87-46fc-87e2-fca7e049f863\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jbqv4" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731704 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmgdw\" (UniqueName: \"kubernetes.io/projected/9a147d2f-de25-4ba1-8858-392c56b60a20-kube-api-access-zmgdw\") pod \"apiserver-76f77b778f-t87x8\" (UID: \"9a147d2f-de25-4ba1-8858-392c56b60a20\") " pod="openshift-apiserver/apiserver-76f77b778f-t87x8" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731721 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjb46\" (UniqueName: \"kubernetes.io/projected/df2319dd-b85c-4542-bf25-8233ecda9d78-kube-api-access-zjb46\") pod \"machine-api-operator-5694c8668f-zkvs9\" (UID: \"df2319dd-b85c-4542-bf25-8233ecda9d78\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zkvs9" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731738 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10286867-4aba-45e5-a1f3-40494acb8cde-config\") pod \"kube-apiserver-operator-766d6c64bb-2f58z\" (UID: \"10286867-4aba-45e5-a1f3-40494acb8cde\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2f58z" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731754 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2228\" (UniqueName: \"kubernetes.io/projected/f6a4bf66-c081-492e-aa28-f9245e7ffe3c-kube-api-access-r2228\") pod \"ingress-operator-5b745b69d9-nx8lb\" (UID: \"f6a4bf66-c081-492e-aa28-f9245e7ffe3c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nx8lb" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731772 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/10286867-4aba-45e5-a1f3-40494acb8cde-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-2f58z\" (UID: \"10286867-4aba-45e5-a1f3-40494acb8cde\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2f58z" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731788 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a9c0df9-c002-43ec-bc67-dee3c0862056-config\") pod \"etcd-operator-b45778765-25vlq\" (UID: \"9a9c0df9-c002-43ec-bc67-dee3c0862056\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25vlq" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731803 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrwqb\" (UniqueName: \"kubernetes.io/projected/3d1d3880-3411-45f5-8835-a4db59c38cfe-kube-api-access-wrwqb\") pod \"packageserver-d55dfcdfc-6st7l\" (UID: \"3d1d3880-3411-45f5-8835-a4db59c38cfe\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6st7l" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731817 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-84hkx\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731833 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/342631a0-9c4d-4e4f-9743-4d13ea740a55-default-certificate\") pod \"router-default-5444994796-5fwp4\" (UID: \"342631a0-9c4d-4e4f-9743-4d13ea740a55\") " pod="openshift-ingress/router-default-5444994796-5fwp4" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731849 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce549223-07dd-40b8-b988-7a49ed1a94e5-config\") pod \"kube-controller-manager-operator-78b949d7b-wgmqx\" (UID: \"ce549223-07dd-40b8-b988-7a49ed1a94e5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wgmqx" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731863 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1-secret-volume\") pod \"collect-profiles-29538075-ts824\" (UID: \"b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538075-ts824" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731883 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/3d1d3880-3411-45f5-8835-a4db59c38cfe-tmpfs\") pod \"packageserver-d55dfcdfc-6st7l\" (UID: \"3d1d3880-3411-45f5-8835-a4db59c38cfe\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6st7l" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731900 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/260888c5-6c67-47a8-9903-a25a8a1c6b7d-metrics-tls\") pod \"dns-operator-744455d44c-nsbjk\" (UID: \"260888c5-6c67-47a8-9903-a25a8a1c6b7d\") " pod="openshift-dns-operator/dns-operator-744455d44c-nsbjk" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.731965 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f58g\" (UniqueName: \"kubernetes.io/projected/53e254f6-444a-4fd6-8bda-5af18b9d347c-kube-api-access-6f58g\") pod \"marketplace-operator-79b997595-l7m8v\" (UID: \"53e254f6-444a-4fd6-8bda-5af18b9d347c\") " pod="openshift-marketplace/marketplace-operator-79b997595-l7m8v" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.732029 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9a147d2f-de25-4ba1-8858-392c56b60a20-etcd-serving-ca\") pod \"apiserver-76f77b778f-t87x8\" (UID: \"9a147d2f-de25-4ba1-8858-392c56b60a20\") " pod="openshift-apiserver/apiserver-76f77b778f-t87x8" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.732066 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3d1d3880-3411-45f5-8835-a4db59c38cfe-apiservice-cert\") pod \"packageserver-d55dfcdfc-6st7l\" (UID: \"3d1d3880-3411-45f5-8835-a4db59c38cfe\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6st7l" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.732099 4897 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4ab51603-514f-4ade-8bf2-6281d27a579f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-mfx26\" (UID: \"4ab51603-514f-4ade-8bf2-6281d27a579f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mfx26" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.732136 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce549223-07dd-40b8-b988-7a49ed1a94e5-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-wgmqx\" (UID: \"ce549223-07dd-40b8-b988-7a49ed1a94e5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wgmqx" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.732170 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce549223-07dd-40b8-b988-7a49ed1a94e5-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-wgmqx\" (UID: \"ce549223-07dd-40b8-b988-7a49ed1a94e5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wgmqx" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.732216 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df2319dd-b85c-4542-bf25-8233ecda9d78-config\") pod \"machine-api-operator-5694c8668f-zkvs9\" (UID: \"df2319dd-b85c-4542-bf25-8233ecda9d78\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zkvs9" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.732294 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f6a4bf66-c081-492e-aa28-f9245e7ffe3c-trusted-ca\") pod \"ingress-operator-5b745b69d9-nx8lb\" (UID: \"f6a4bf66-c081-492e-aa28-f9245e7ffe3c\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nx8lb" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.732386 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c97c55d8-5260-43bc-aaf7-e217a748b83f-signing-key\") pod \"service-ca-9c57cc56f-7wzmt\" (UID: \"c97c55d8-5260-43bc-aaf7-e217a748b83f\") " pod="openshift-service-ca/service-ca-9c57cc56f-7wzmt" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.732428 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c97c55d8-5260-43bc-aaf7-e217a748b83f-signing-cabundle\") pod \"service-ca-9c57cc56f-7wzmt\" (UID: \"c97c55d8-5260-43bc-aaf7-e217a748b83f\") " pod="openshift-service-ca/service-ca-9c57cc56f-7wzmt" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.732475 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-84hkx\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.732547 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9a9c0df9-c002-43ec-bc67-dee3c0862056-etcd-ca\") pod \"etcd-operator-b45778765-25vlq\" (UID: \"9a9c0df9-c002-43ec-bc67-dee3c0862056\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25vlq" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.732593 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9a147d2f-de25-4ba1-8858-392c56b60a20-node-pullsecrets\") pod \"apiserver-76f77b778f-t87x8\" (UID: 
\"9a147d2f-de25-4ba1-8858-392c56b60a20\") " pod="openshift-apiserver/apiserver-76f77b778f-t87x8" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.732713 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ac747027-4a87-46fc-87e2-fca7e049f863-srv-cert\") pod \"olm-operator-6b444d44fb-jbqv4\" (UID: \"ac747027-4a87-46fc-87e2-fca7e049f863\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jbqv4" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.732766 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a81ddd0f-39bc-4645-94b0-38869e4afba3-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-87qp9\" (UID: \"a81ddd0f-39bc-4645-94b0-38869e4afba3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-87qp9" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.732811 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a81ddd0f-39bc-4645-94b0-38869e4afba3-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-87qp9\" (UID: \"a81ddd0f-39bc-4645-94b0-38869e4afba3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-87qp9" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.732875 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-84hkx\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.732939 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ktz7d\" (UniqueName: \"kubernetes.io/projected/5a185ef3-d57d-4925-b2d6-6de53cf0d0f2-kube-api-access-ktz7d\") pod \"machine-config-controller-84d6567774-xfrkv\" (UID: \"5a185ef3-d57d-4925-b2d6-6de53cf0d0f2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xfrkv" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.732997 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49d0a669-bb05-4da5-9e58-789b58c0797b-audit-dir\") pod \"oauth-openshift-558db77b4-84hkx\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.733054 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrr8r\" (UniqueName: \"kubernetes.io/projected/9a9c0df9-c002-43ec-bc67-dee3c0862056-kube-api-access-lrr8r\") pod \"etcd-operator-b45778765-25vlq\" (UID: \"9a9c0df9-c002-43ec-bc67-dee3c0862056\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25vlq" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.733148 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a147d2f-de25-4ba1-8858-392c56b60a20-config\") pod \"apiserver-76f77b778f-t87x8\" (UID: \"9a147d2f-de25-4ba1-8858-392c56b60a20\") " pod="openshift-apiserver/apiserver-76f77b778f-t87x8" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.733224 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/df2319dd-b85c-4542-bf25-8233ecda9d78-images\") pod \"machine-api-operator-5694c8668f-zkvs9\" (UID: \"df2319dd-b85c-4542-bf25-8233ecda9d78\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zkvs9" Feb 28 13:20:11 crc 
kubenswrapper[4897]: I0228 13:20:11.733286 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-84hkx\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.734159 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9a147d2f-de25-4ba1-8858-392c56b60a20-audit-dir\") pod \"apiserver-76f77b778f-t87x8\" (UID: \"9a147d2f-de25-4ba1-8858-392c56b60a20\") " pod="openshift-apiserver/apiserver-76f77b778f-t87x8" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.734895 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/260888c5-6c67-47a8-9903-a25a8a1c6b7d-metrics-tls\") pod \"dns-operator-744455d44c-nsbjk\" (UID: \"260888c5-6c67-47a8-9903-a25a8a1c6b7d\") " pod="openshift-dns-operator/dns-operator-744455d44c-nsbjk" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.735475 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5a185ef3-d57d-4925-b2d6-6de53cf0d0f2-proxy-tls\") pod \"machine-config-controller-84d6567774-xfrkv\" (UID: \"5a185ef3-d57d-4925-b2d6-6de53cf0d0f2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xfrkv" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.733721 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9a9c0df9-c002-43ec-bc67-dee3c0862056-etcd-client\") pod \"etcd-operator-b45778765-25vlq\" (UID: \"9a9c0df9-c002-43ec-bc67-dee3c0862056\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-25vlq" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.736528 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-84hkx\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.736632 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2255cdfc-6996-4567-ba4d-b1b609f1264c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l5dbb\" (UID: \"2255cdfc-6996-4567-ba4d-b1b609f1264c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l5dbb" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.737354 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-84hkx\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.737531 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5a185ef3-d57d-4925-b2d6-6de53cf0d0f2-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-xfrkv\" (UID: \"5a185ef3-d57d-4925-b2d6-6de53cf0d0f2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xfrkv" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.738219 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9a9c0df9-c002-43ec-bc67-dee3c0862056-config\") pod \"etcd-operator-b45778765-25vlq\" (UID: \"9a9c0df9-c002-43ec-bc67-dee3c0862056\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25vlq" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.738389 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-84hkx\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.739950 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f6a4bf66-c081-492e-aa28-f9245e7ffe3c-trusted-ca\") pod \"ingress-operator-5b745b69d9-nx8lb\" (UID: \"f6a4bf66-c081-492e-aa28-f9245e7ffe3c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nx8lb" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.740181 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-84hkx\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.740467 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ac747027-4a87-46fc-87e2-fca7e049f863-srv-cert\") pod \"olm-operator-6b444d44fb-jbqv4\" (UID: \"ac747027-4a87-46fc-87e2-fca7e049f863\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jbqv4" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.740916 4897 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ab51603-514f-4ade-8bf2-6281d27a579f-serving-cert\") pod \"openshift-config-operator-7777fb866f-mfx26\" (UID: \"4ab51603-514f-4ade-8bf2-6281d27a579f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mfx26" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.740982 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4ba075a8-61d1-4147-80ea-03906930ff87-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-vk9dl\" (UID: \"4ba075a8-61d1-4147-80ea-03906930ff87\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vk9dl" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.741035 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-84hkx\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.741533 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3d1d3880-3411-45f5-8835-a4db59c38cfe-tmpfs\") pod \"packageserver-d55dfcdfc-6st7l\" (UID: \"3d1d3880-3411-45f5-8835-a4db59c38cfe\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6st7l" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.741983 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9a9c0df9-c002-43ec-bc67-dee3c0862056-etcd-ca\") pod \"etcd-operator-b45778765-25vlq\" (UID: \"9a9c0df9-c002-43ec-bc67-dee3c0862056\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25vlq" 
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.742100 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9a147d2f-de25-4ba1-8858-392c56b60a20-node-pullsecrets\") pod \"apiserver-76f77b778f-t87x8\" (UID: \"9a147d2f-de25-4ba1-8858-392c56b60a20\") " pod="openshift-apiserver/apiserver-76f77b778f-t87x8" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.742280 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4ab51603-514f-4ade-8bf2-6281d27a579f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-mfx26\" (UID: \"4ab51603-514f-4ade-8bf2-6281d27a579f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mfx26" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.742603 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/49308413-0bd0-4aef-8d1b-451b077e6996-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-glzrp\" (UID: \"49308413-0bd0-4aef-8d1b-451b077e6996\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-glzrp" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.743663 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3d1d3880-3411-45f5-8835-a4db59c38cfe-apiservice-cert\") pod \"packageserver-d55dfcdfc-6st7l\" (UID: \"3d1d3880-3411-45f5-8835-a4db59c38cfe\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6st7l" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.744412 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/32420c77-c3bf-489a-b622-a912ea4c983c-profile-collector-cert\") pod 
\"catalog-operator-68c6474976-blf86\" (UID: \"32420c77-c3bf-489a-b622-a912ea4c983c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-blf86" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.744539 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/32420c77-c3bf-489a-b622-a912ea4c983c-srv-cert\") pod \"catalog-operator-68c6474976-blf86\" (UID: \"32420c77-c3bf-489a-b622-a912ea4c983c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-blf86" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.744706 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49d0a669-bb05-4da5-9e58-789b58c0797b-audit-dir\") pod \"oauth-openshift-558db77b4-84hkx\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.745118 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ac747027-4a87-46fc-87e2-fca7e049f863-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jbqv4\" (UID: \"ac747027-4a87-46fc-87e2-fca7e049f863\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jbqv4" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.745670 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49d0a669-bb05-4da5-9e58-789b58c0797b-audit-policies\") pod \"oauth-openshift-558db77b4-84hkx\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.746903 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/53e254f6-444a-4fd6-8bda-5af18b9d347c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-l7m8v\" (UID: \"53e254f6-444a-4fd6-8bda-5af18b9d347c\") " pod="openshift-marketplace/marketplace-operator-79b997595-l7m8v" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.747200 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1-secret-volume\") pod \"collect-profiles-29538075-ts824\" (UID: \"b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538075-ts824" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.747275 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-84hkx\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.747450 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-84hkx\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.748411 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-84hkx\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.748606 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-84hkx\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.748792 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d1d3880-3411-45f5-8835-a4db59c38cfe-webhook-cert\") pod \"packageserver-d55dfcdfc-6st7l\" (UID: \"3d1d3880-3411-45f5-8835-a4db59c38cfe\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6st7l" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.748794 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a9c0df9-c002-43ec-bc67-dee3c0862056-serving-cert\") pod \"etcd-operator-b45778765-25vlq\" (UID: \"9a9c0df9-c002-43ec-bc67-dee3c0862056\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25vlq" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.749010 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmqnk\" (UniqueName: \"kubernetes.io/projected/49d0a669-bb05-4da5-9e58-789b58c0797b-kube-api-access-gmqnk\") pod \"oauth-openshift-558db77b4-84hkx\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.749078 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10286867-4aba-45e5-a1f3-40494acb8cde-serving-cert\") pod 
\"kube-apiserver-operator-766d6c64bb-2f58z\" (UID: \"10286867-4aba-45e5-a1f3-40494acb8cde\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2f58z" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.749381 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9a9c0df9-c002-43ec-bc67-dee3c0862056-etcd-service-ca\") pod \"etcd-operator-b45778765-25vlq\" (UID: \"9a9c0df9-c002-43ec-bc67-dee3c0862056\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25vlq" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.749514 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9a147d2f-de25-4ba1-8858-392c56b60a20-image-import-ca\") pod \"apiserver-76f77b778f-t87x8\" (UID: \"9a147d2f-de25-4ba1-8858-392c56b60a20\") " pod="openshift-apiserver/apiserver-76f77b778f-t87x8" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.749626 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/342631a0-9c4d-4e4f-9743-4d13ea740a55-service-ca-bundle\") pod \"router-default-5444994796-5fwp4\" (UID: \"342631a0-9c4d-4e4f-9743-4d13ea740a55\") " pod="openshift-ingress/router-default-5444994796-5fwp4" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.749650 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f6a4bf66-c081-492e-aa28-f9245e7ffe3c-metrics-tls\") pod \"ingress-operator-5b745b69d9-nx8lb\" (UID: \"f6a4bf66-c081-492e-aa28-f9245e7ffe3c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nx8lb" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.749728 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-84hkx\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.749815 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8bqs\" (UniqueName: \"kubernetes.io/projected/ac747027-4a87-46fc-87e2-fca7e049f863-kube-api-access-r8bqs\") pod \"olm-operator-6b444d44fb-jbqv4\" (UID: \"ac747027-4a87-46fc-87e2-fca7e049f863\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jbqv4" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.750084 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9a147d2f-de25-4ba1-8858-392c56b60a20-etcd-client\") pod \"apiserver-76f77b778f-t87x8\" (UID: \"9a147d2f-de25-4ba1-8858-392c56b60a20\") " pod="openshift-apiserver/apiserver-76f77b778f-t87x8" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.750195 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9a147d2f-de25-4ba1-8858-392c56b60a20-encryption-config\") pod \"apiserver-76f77b778f-t87x8\" (UID: \"9a147d2f-de25-4ba1-8858-392c56b60a20\") " pod="openshift-apiserver/apiserver-76f77b778f-t87x8" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.750291 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.750411 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae9a4771-065d-4f75-8d15-0ea8525cbaf4-config\") pod \"route-controller-manager-6576b87f9c-jwnlh\" (UID: \"ae9a4771-065d-4f75-8d15-0ea8525cbaf4\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jwnlh" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.751833 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9a9c0df9-c002-43ec-bc67-dee3c0862056-etcd-client\") pod \"etcd-operator-b45778765-25vlq\" (UID: \"9a9c0df9-c002-43ec-bc67-dee3c0862056\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25vlq" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.751950 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a9c0df9-c002-43ec-bc67-dee3c0862056-serving-cert\") pod \"etcd-operator-b45778765-25vlq\" (UID: \"9a9c0df9-c002-43ec-bc67-dee3c0862056\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25vlq" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.753294 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/53e254f6-444a-4fd6-8bda-5af18b9d347c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-l7m8v\" (UID: \"53e254f6-444a-4fd6-8bda-5af18b9d347c\") " pod="openshift-marketplace/marketplace-operator-79b997595-l7m8v" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.757043 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-84hkx\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.760973 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-system-session\") 
pod \"oauth-openshift-558db77b4-84hkx\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.769670 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.807990 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.829704 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.836848 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c97c55d8-5260-43bc-aaf7-e217a748b83f-signing-key\") pod \"service-ca-9c57cc56f-7wzmt\" (UID: \"c97c55d8-5260-43bc-aaf7-e217a748b83f\") " pod="openshift-service-ca/service-ca-9c57cc56f-7wzmt" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.847918 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.868750 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.869553 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c97c55d8-5260-43bc-aaf7-e217a748b83f-signing-cabundle\") pod \"service-ca-9c57cc56f-7wzmt\" (UID: \"c97c55d8-5260-43bc-aaf7-e217a748b83f\") " pod="openshift-service-ca/service-ca-9c57cc56f-7wzmt" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.888426 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" 
Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.909267 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.921939 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/342631a0-9c4d-4e4f-9743-4d13ea740a55-default-certificate\") pod \"router-default-5444994796-5fwp4\" (UID: \"342631a0-9c4d-4e4f-9743-4d13ea740a55\") " pod="openshift-ingress/router-default-5444994796-5fwp4" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.929053 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.941044 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/342631a0-9c4d-4e4f-9743-4d13ea740a55-metrics-certs\") pod \"router-default-5444994796-5fwp4\" (UID: \"342631a0-9c4d-4e4f-9743-4d13ea740a55\") " pod="openshift-ingress/router-default-5444994796-5fwp4" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.949240 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.969235 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.979655 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/342631a0-9c4d-4e4f-9743-4d13ea740a55-stats-auth\") pod \"router-default-5444994796-5fwp4\" (UID: \"342631a0-9c4d-4e4f-9743-4d13ea740a55\") " pod="openshift-ingress/router-default-5444994796-5fwp4" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.989045 4897 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-ingress"/"service-ca-bundle" Feb 28 13:20:11 crc kubenswrapper[4897]: I0228 13:20:11.991574 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/342631a0-9c4d-4e4f-9743-4d13ea740a55-service-ca-bundle\") pod \"router-default-5444994796-5fwp4\" (UID: \"342631a0-9c4d-4e4f-9743-4d13ea740a55\") " pod="openshift-ingress/router-default-5444994796-5fwp4" Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.008499 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.029103 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.049254 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.069632 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.089151 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.102805 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce549223-07dd-40b8-b988-7a49ed1a94e5-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-wgmqx\" (UID: \"ce549223-07dd-40b8-b988-7a49ed1a94e5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wgmqx" Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.113635 4897 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.118027 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce549223-07dd-40b8-b988-7a49ed1a94e5-config\") pod \"kube-controller-manager-operator-78b949d7b-wgmqx\" (UID: \"ce549223-07dd-40b8-b988-7a49ed1a94e5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wgmqx" Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.128206 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.147885 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.168148 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.189049 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.209010 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.219570 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a147d2f-de25-4ba1-8858-392c56b60a20-serving-cert\") pod \"apiserver-76f77b778f-t87x8\" (UID: \"9a147d2f-de25-4ba1-8858-392c56b60a20\") " pod="openshift-apiserver/apiserver-76f77b778f-t87x8" Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.230161 4897 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.234898 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9a147d2f-de25-4ba1-8858-392c56b60a20-encryption-config\") pod \"apiserver-76f77b778f-t87x8\" (UID: \"9a147d2f-de25-4ba1-8858-392c56b60a20\") " pod="openshift-apiserver/apiserver-76f77b778f-t87x8" Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.249893 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.268735 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.274333 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9a147d2f-de25-4ba1-8858-392c56b60a20-etcd-client\") pod \"apiserver-76f77b778f-t87x8\" (UID: \"9a147d2f-de25-4ba1-8858-392c56b60a20\") " pod="openshift-apiserver/apiserver-76f77b778f-t87x8" Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.289560 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.296007 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a147d2f-de25-4ba1-8858-392c56b60a20-config\") pod \"apiserver-76f77b778f-t87x8\" (UID: \"9a147d2f-de25-4ba1-8858-392c56b60a20\") " pod="openshift-apiserver/apiserver-76f77b778f-t87x8" Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.308644 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.315371 4897 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9a147d2f-de25-4ba1-8858-392c56b60a20-audit\") pod \"apiserver-76f77b778f-t87x8\" (UID: \"9a147d2f-de25-4ba1-8858-392c56b60a20\") " pod="openshift-apiserver/apiserver-76f77b778f-t87x8" Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.328811 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.335933 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9a147d2f-de25-4ba1-8858-392c56b60a20-etcd-serving-ca\") pod \"apiserver-76f77b778f-t87x8\" (UID: \"9a147d2f-de25-4ba1-8858-392c56b60a20\") " pod="openshift-apiserver/apiserver-76f77b778f-t87x8" Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.347955 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.350803 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9a147d2f-de25-4ba1-8858-392c56b60a20-image-import-ca\") pod \"apiserver-76f77b778f-t87x8\" (UID: \"9a147d2f-de25-4ba1-8858-392c56b60a20\") " pod="openshift-apiserver/apiserver-76f77b778f-t87x8" Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.374989 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.376789 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a147d2f-de25-4ba1-8858-392c56b60a20-trusted-ca-bundle\") pod \"apiserver-76f77b778f-t87x8\" (UID: \"9a147d2f-de25-4ba1-8858-392c56b60a20\") " pod="openshift-apiserver/apiserver-76f77b778f-t87x8" Feb 28 
13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.388936 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.410393 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.428775 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.432601 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a81ddd0f-39bc-4645-94b0-38869e4afba3-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-87qp9\" (UID: \"a81ddd0f-39bc-4645-94b0-38869e4afba3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-87qp9"
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.449003 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.468584 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.477367 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1-config-volume\") pod \"collect-profiles-29538075-ts824\" (UID: \"b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538075-ts824"
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.489221 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.509071 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.529110 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.548920 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.568653 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.574261 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10286867-4aba-45e5-a1f3-40494acb8cde-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-2f58z\" (UID: \"10286867-4aba-45e5-a1f3-40494acb8cde\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2f58z"
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.587612 4897 request.go:700] Waited for 1.013385699s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-apiserver-operator-config&limit=500&resourceVersion=0
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.589179 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.592736 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10286867-4aba-45e5-a1f3-40494acb8cde-config\") pod \"kube-apiserver-operator-766d6c64bb-2f58z\" (UID: \"10286867-4aba-45e5-a1f3-40494acb8cde\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2f58z"
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.609776 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.616643 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a81ddd0f-39bc-4645-94b0-38869e4afba3-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-87qp9\" (UID: \"a81ddd0f-39bc-4645-94b0-38869e4afba3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-87qp9"
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.617211 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"4ba2dec244bce99c2dd26d56b66a6069c025008828ac19448afc60a6fe4712ef"}
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.617360 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.619246 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"a9252dbff25041673e0b0229cbe57fdc96ef7c2690a098c90afcb9fbdb3f1abe"}
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.619363 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"944b78d589f57d81a10cdc51f694f2ed743041fce1396e3c1aaa05929813a401"}
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.621784 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5tms6" event={"ID":"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e","Type":"ContainerStarted","Data":"3b561fa3b5be0e39a93334c8d97d2303c0f27ef2fbf728786a379013f1ab4471"}
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.621840 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5tms6" event={"ID":"8b95b3e0-28e1-4b26-86a3-bd61c5528b3e","Type":"ContainerStarted","Data":"a57f79358a855e23d67f41adeab758208602230d6c40ecaf07798de1900e2459"}
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.623714 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"fe7a0c944ef9ad6ad4cf623bc40509eea93959cf49dfa5bcbdaa91a1db95f019"}
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.629734 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.649681 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.659635 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2255cdfc-6996-4567-ba4d-b1b609f1264c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l5dbb\" (UID: \"2255cdfc-6996-4567-ba4d-b1b609f1264c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l5dbb"
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.669490 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.689508 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.698113 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2255cdfc-6996-4567-ba4d-b1b609f1264c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l5dbb\" (UID: \"2255cdfc-6996-4567-ba4d-b1b609f1264c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l5dbb"
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.709222 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.718278 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/df2319dd-b85c-4542-bf25-8233ecda9d78-images\") pod \"machine-api-operator-5694c8668f-zkvs9\" (UID: \"df2319dd-b85c-4542-bf25-8233ecda9d78\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zkvs9"
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.729051 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 28 13:20:12 crc kubenswrapper[4897]: E0228 13:20:12.731597 4897 secret.go:188] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 28 13:20:12 crc kubenswrapper[4897]: E0228 13:20:12.731809 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae9a4771-065d-4f75-8d15-0ea8525cbaf4-serving-cert podName:ae9a4771-065d-4f75-8d15-0ea8525cbaf4 nodeName:}" failed. No retries permitted until 2026-02-28 13:20:13.231787611 +0000 UTC m=+227.474108268 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ae9a4771-065d-4f75-8d15-0ea8525cbaf4-serving-cert") pod "route-controller-manager-6576b87f9c-jwnlh" (UID: "ae9a4771-065d-4f75-8d15-0ea8525cbaf4") : failed to sync secret cache: timed out waiting for the condition
Feb 28 13:20:12 crc kubenswrapper[4897]: E0228 13:20:12.734763 4897 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 28 13:20:12 crc kubenswrapper[4897]: E0228 13:20:12.734869 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ae9a4771-065d-4f75-8d15-0ea8525cbaf4-client-ca podName:ae9a4771-065d-4f75-8d15-0ea8525cbaf4 nodeName:}" failed. No retries permitted until 2026-02-28 13:20:13.234841441 +0000 UTC m=+227.477162138 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/ae9a4771-065d-4f75-8d15-0ea8525cbaf4-client-ca") pod "route-controller-manager-6576b87f9c-jwnlh" (UID: "ae9a4771-065d-4f75-8d15-0ea8525cbaf4") : failed to sync configmap cache: timed out waiting for the condition
Feb 28 13:20:12 crc kubenswrapper[4897]: E0228 13:20:12.738843 4897 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Feb 28 13:20:12 crc kubenswrapper[4897]: E0228 13:20:12.738935 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/df2319dd-b85c-4542-bf25-8233ecda9d78-config podName:df2319dd-b85c-4542-bf25-8233ecda9d78 nodeName:}" failed. No retries permitted until 2026-02-28 13:20:13.238911044 +0000 UTC m=+227.481231731 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/df2319dd-b85c-4542-bf25-8233ecda9d78-config") pod "machine-api-operator-5694c8668f-zkvs9" (UID: "df2319dd-b85c-4542-bf25-8233ecda9d78") : failed to sync configmap cache: timed out waiting for the condition
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.740844 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/df2319dd-b85c-4542-bf25-8233ecda9d78-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-zkvs9\" (UID: \"df2319dd-b85c-4542-bf25-8233ecda9d78\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zkvs9"
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.748496 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 28 13:20:12 crc kubenswrapper[4897]: E0228 13:20:12.751357 4897 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition
Feb 28 13:20:12 crc kubenswrapper[4897]: E0228 13:20:12.751506 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ae9a4771-065d-4f75-8d15-0ea8525cbaf4-config podName:ae9a4771-065d-4f75-8d15-0ea8525cbaf4 nodeName:}" failed. No retries permitted until 2026-02-28 13:20:13.251489636 +0000 UTC m=+227.493810283 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ae9a4771-065d-4f75-8d15-0ea8525cbaf4-config") pod "route-controller-manager-6576b87f9c-jwnlh" (UID: "ae9a4771-065d-4f75-8d15-0ea8525cbaf4") : failed to sync configmap cache: timed out waiting for the condition
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.769228 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.789674 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.809454 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.828778 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.849095 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.868639 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.889554 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.909081 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.929628 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.948724 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.969604 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 28 13:20:12 crc kubenswrapper[4897]: I0228 13:20:12.988825 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.009502 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.029027 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.048937 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.069564 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.088451 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.109953 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.129986 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.149425 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.168715 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.189342 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.209518 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.228908 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.249690 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.269742 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.275960 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae9a4771-065d-4f75-8d15-0ea8525cbaf4-serving-cert\") pod \"route-controller-manager-6576b87f9c-jwnlh\" (UID: \"ae9a4771-065d-4f75-8d15-0ea8525cbaf4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jwnlh"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.276251 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df2319dd-b85c-4542-bf25-8233ecda9d78-config\") pod \"machine-api-operator-5694c8668f-zkvs9\" (UID: \"df2319dd-b85c-4542-bf25-8233ecda9d78\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zkvs9"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.276468 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae9a4771-065d-4f75-8d15-0ea8525cbaf4-config\") pod \"route-controller-manager-6576b87f9c-jwnlh\" (UID: \"ae9a4771-065d-4f75-8d15-0ea8525cbaf4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jwnlh"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.276537 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ae9a4771-065d-4f75-8d15-0ea8525cbaf4-client-ca\") pod \"route-controller-manager-6576b87f9c-jwnlh\" (UID: \"ae9a4771-065d-4f75-8d15-0ea8525cbaf4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jwnlh"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.277515 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df2319dd-b85c-4542-bf25-8233ecda9d78-config\") pod \"machine-api-operator-5694c8668f-zkvs9\" (UID: \"df2319dd-b85c-4542-bf25-8233ecda9d78\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zkvs9"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.278143 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ae9a4771-065d-4f75-8d15-0ea8525cbaf4-client-ca\") pod \"route-controller-manager-6576b87f9c-jwnlh\" (UID: \"ae9a4771-065d-4f75-8d15-0ea8525cbaf4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jwnlh"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.280887 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae9a4771-065d-4f75-8d15-0ea8525cbaf4-config\") pod \"route-controller-manager-6576b87f9c-jwnlh\" (UID: \"ae9a4771-065d-4f75-8d15-0ea8525cbaf4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jwnlh"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.281919 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae9a4771-065d-4f75-8d15-0ea8525cbaf4-serving-cert\") pod \"route-controller-manager-6576b87f9c-jwnlh\" (UID: \"ae9a4771-065d-4f75-8d15-0ea8525cbaf4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jwnlh"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.319379 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jlg6\" (UniqueName: \"kubernetes.io/projected/536efe5c-a55e-48a2-920e-cdb34a2bce57-kube-api-access-7jlg6\") pod \"machine-approver-56656f9798-7pzqp\" (UID: \"536efe5c-a55e-48a2-920e-cdb34a2bce57\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7pzqp"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.334967 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4km6\" (UniqueName: \"kubernetes.io/projected/3423cf07-c57b-41f3-82da-f497649699db-kube-api-access-t4km6\") pod \"console-f9d7485db-rd9tl\" (UID: \"3423cf07-c57b-41f3-82da-f497649699db\") " pod="openshift-console/console-f9d7485db-rd9tl"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.336079 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7pzqp"
Feb 28 13:20:13 crc kubenswrapper[4897]: W0228 13:20:13.360896 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod536efe5c_a55e_48a2_920e_cdb34a2bce57.slice/crio-df6dd2ee9e3d299c53b1529599f4562c33ab1c89080b28c85ccfdcc95b9968dc WatchSource:0}: Error finding container df6dd2ee9e3d299c53b1529599f4562c33ab1c89080b28c85ccfdcc95b9968dc: Status 404 returned error can't find the container with id df6dd2ee9e3d299c53b1529599f4562c33ab1c89080b28c85ccfdcc95b9968dc
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.366752 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt6l6\" (UniqueName: \"kubernetes.io/projected/52ee48cc-65ac-4228-821c-f9c70d249ebf-kube-api-access-nt6l6\") pod \"openshift-controller-manager-operator-756b6f6bc6-fb8mb\" (UID: \"52ee48cc-65ac-4228-821c-f9c70d249ebf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fb8mb"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.376972 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjk5p\" (UniqueName: \"kubernetes.io/projected/6a6fd805-dce5-4ee6-82e0-9fce53deed7f-kube-api-access-rjk5p\") pod \"machine-config-operator-74547568cd-ccsdd\" (UID: \"6a6fd805-dce5-4ee6-82e0-9fce53deed7f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ccsdd"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.395832 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sldxp\" (UniqueName: \"kubernetes.io/projected/61f10600-21dd-4043-af69-aa0fdfd246f7-kube-api-access-sldxp\") pod \"controller-manager-879f6c89f-95h9j\" (UID: \"61f10600-21dd-4043-af69-aa0fdfd246f7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-95h9j"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.419294 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzdr2\" (UniqueName: \"kubernetes.io/projected/4f2510cb-e89f-49a0-b5cd-aca1a5c51178-kube-api-access-nzdr2\") pod \"downloads-7954f5f757-fv2rz\" (UID: \"4f2510cb-e89f-49a0-b5cd-aca1a5c51178\") " pod="openshift-console/downloads-7954f5f757-fv2rz"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.428368 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5qlz\" (UniqueName: \"kubernetes.io/projected/26da04fb-0109-4f7f-a283-f489e9b4596f-kube-api-access-k5qlz\") pod \"apiserver-7bbb656c7d-k2ztc\" (UID: \"26da04fb-0109-4f7f-a283-f489e9b4596f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2ztc"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.434605 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-rd9tl"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.444473 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7mkr\" (UniqueName: \"kubernetes.io/projected/6e1c97b9-64a9-4e15-947f-16a7d1dd4271-kube-api-access-g7mkr\") pod \"authentication-operator-69f744f599-gzgt9\" (UID: \"6e1c97b9-64a9-4e15-947f-16a7d1dd4271\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gzgt9"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.469761 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c66153dc-f2e3-4798-876c-da6826dea18c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-g6wd9\" (UID: \"c66153dc-f2e3-4798-876c-da6826dea18c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g6wd9"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.488593 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9n2z\" (UniqueName: \"kubernetes.io/projected/c66153dc-f2e3-4798-876c-da6826dea18c-kube-api-access-r9n2z\") pod \"cluster-image-registry-operator-dc59b4c8b-g6wd9\" (UID: \"c66153dc-f2e3-4798-876c-da6826dea18c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g6wd9"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.509069 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.513848 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntszw\" (UniqueName: \"kubernetes.io/projected/34853f18-08de-4bb6-8fc1-9ae1d51b314a-kube-api-access-ntszw\") pod \"console-operator-58897d9998-fq58q\" (UID: \"34853f18-08de-4bb6-8fc1-9ae1d51b314a\") " pod="openshift-console-operator/console-operator-58897d9998-fq58q"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.528807 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.551429 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.572259 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g6wd9"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.580529 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-95h9j"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.580986 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-fv2rz"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.588502 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.589504 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fb8mb"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.599810 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ccsdd"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.607667 4897 request.go:700] Waited for 1.907533071s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.611344 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2ztc"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.611501 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.629987 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-rd9tl"]
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.630532 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.643725 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7pzqp" event={"ID":"536efe5c-a55e-48a2-920e-cdb34a2bce57","Type":"ContainerStarted","Data":"df6dd2ee9e3d299c53b1529599f4562c33ab1c89080b28c85ccfdcc95b9968dc"}
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.649231 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.667050 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-gzgt9"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.669200 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.675669 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-fq58q"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.690865 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.712299 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.729006 4897 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.748518 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.772972 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.821558 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8p57\" (UniqueName: \"kubernetes.io/projected/342631a0-9c4d-4e4f-9743-4d13ea740a55-kube-api-access-z8p57\") pod \"router-default-5444994796-5fwp4\" (UID: \"342631a0-9c4d-4e4f-9743-4d13ea740a55\") " pod="openshift-ingress/router-default-5444994796-5fwp4"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.827527 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4lgd\" (UniqueName: \"kubernetes.io/projected/a81ddd0f-39bc-4645-94b0-38869e4afba3-kube-api-access-q4lgd\") pod \"kube-storage-version-migrator-operator-b67b599dd-87qp9\" (UID: \"a81ddd0f-39bc-4645-94b0-38869e4afba3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-87qp9"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.846549 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9xph\" (UniqueName: \"kubernetes.io/projected/a52c7385-4178-4038-93b0-5cd758958e80-kube-api-access-n9xph\") pod \"auto-csr-approver-29538080-qcrrw\" (UID: \"a52c7385-4178-4038-93b0-5cd758958e80\") " pod="openshift-infra/auto-csr-approver-29538080-qcrrw"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.865113 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvcsj\" (UniqueName: \"kubernetes.io/projected/4ba075a8-61d1-4147-80ea-03906930ff87-kube-api-access-tvcsj\") pod \"package-server-manager-789f6589d5-vk9dl\" (UID: \"4ba075a8-61d1-4147-80ea-03906930ff87\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vk9dl"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.877645 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-5fwp4"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.881561 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntptc\" (UniqueName: \"kubernetes.io/projected/79743a51-c0b2-45b2-99d3-385e0b2f2c6f-kube-api-access-ntptc\") pod \"auto-csr-approver-29538078-hj8mj\" (UID: \"79743a51-c0b2-45b2-99d3-385e0b2f2c6f\") " pod="openshift-infra/auto-csr-approver-29538078-hj8mj"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.905801 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbg5d\" (UniqueName: \"kubernetes.io/projected/c97c55d8-5260-43bc-aaf7-e217a748b83f-kube-api-access-cbg5d\") pod \"service-ca-9c57cc56f-7wzmt\" (UID: \"c97c55d8-5260-43bc-aaf7-e217a748b83f\") " pod="openshift-service-ca/service-ca-9c57cc56f-7wzmt"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.922159 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xhgd\" (UniqueName: \"kubernetes.io/projected/32420c77-c3bf-489a-b622-a912ea4c983c-kube-api-access-5xhgd\") pod \"catalog-operator-68c6474976-blf86\" (UID: \"32420c77-c3bf-489a-b622-a912ea4c983c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-blf86"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.941967 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f58g\" (UniqueName: \"kubernetes.io/projected/53e254f6-444a-4fd6-8bda-5af18b9d347c-kube-api-access-6f58g\") pod \"marketplace-operator-79b997595-l7m8v\" (UID: \"53e254f6-444a-4fd6-8bda-5af18b9d347c\") " pod="openshift-marketplace/marketplace-operator-79b997595-l7m8v"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.956273 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-87qp9"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.963097 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmhmt\" (UniqueName: \"kubernetes.io/projected/13ffe813-5e11-41e0-9426-8771f8b2ce0b-kube-api-access-bmhmt\") pod \"migrator-59844c95c7-bxsf4\" (UID: \"13ffe813-5e11-41e0-9426-8771f8b2ce0b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bxsf4"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.975902 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vk9dl"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.982116 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-blf86"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.985428 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2228\" (UniqueName: \"kubernetes.io/projected/f6a4bf66-c081-492e-aa28-f9245e7ffe3c-kube-api-access-r2228\") pod \"ingress-operator-5b745b69d9-nx8lb\" (UID: \"f6a4bf66-c081-492e-aa28-f9245e7ffe3c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nx8lb"
Feb 28 13:20:13 crc kubenswrapper[4897]: I0228 13:20:13.989736 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538080-qcrrw"
Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.005467 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2255cdfc-6996-4567-ba4d-b1b609f1264c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l5dbb\" (UID: \"2255cdfc-6996-4567-ba4d-b1b609f1264c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l5dbb"
Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.009956 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fq58q"]
Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.015013 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538078-hj8mj"
Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.017568 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l5dbb"
Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.024120 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/10286867-4aba-45e5-a1f3-40494acb8cde-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-2f58z\" (UID: \"10286867-4aba-45e5-a1f3-40494acb8cde\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2f58z"
Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.045493 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrwqb\" (UniqueName: \"kubernetes.io/projected/3d1d3880-3411-45f5-8835-a4db59c38cfe-kube-api-access-wrwqb\") pod \"packageserver-d55dfcdfc-6st7l\" (UID: \"3d1d3880-3411-45f5-8835-a4db59c38cfe\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6st7l"
Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.077466 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce549223-07dd-40b8-b988-7a49ed1a94e5-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-wgmqx\" (UID: \"ce549223-07dd-40b8-b988-7a49ed1a94e5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wgmqx"
Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.087897 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmgdw\" (UniqueName: \"kubernetes.io/projected/9a147d2f-de25-4ba1-8858-392c56b60a20-kube-api-access-zmgdw\") pod \"apiserver-76f77b778f-t87x8\" (UID: \"9a147d2f-de25-4ba1-8858-392c56b60a20\") " pod="openshift-apiserver/apiserver-76f77b778f-t87x8"
Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.101967 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-fv2rz"]
Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.107751 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w28d\" (UniqueName: \"kubernetes.io/projected/b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1-kube-api-access-5w28d\") pod \"collect-profiles-29538075-ts824\" (UID: \"b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538075-ts824" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.116827 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fb8mb"] Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.121479 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6st7l" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.125292 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-95h9j"] Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.131603 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6l6f\" (UniqueName: \"kubernetes.io/projected/ae9a4771-065d-4f75-8d15-0ea8525cbaf4-kube-api-access-h6l6f\") pod \"route-controller-manager-6576b87f9c-jwnlh\" (UID: \"ae9a4771-065d-4f75-8d15-0ea8525cbaf4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jwnlh" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.157105 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f6a4bf66-c081-492e-aa28-f9245e7ffe3c-bound-sa-token\") pod \"ingress-operator-5b745b69d9-nx8lb\" (UID: \"f6a4bf66-c081-492e-aa28-f9245e7ffe3c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nx8lb" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.167655 4897 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-7wzmt" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.172725 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktz7d\" (UniqueName: \"kubernetes.io/projected/5a185ef3-d57d-4925-b2d6-6de53cf0d0f2-kube-api-access-ktz7d\") pod \"machine-config-controller-84d6567774-xfrkv\" (UID: \"5a185ef3-d57d-4925-b2d6-6de53cf0d0f2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xfrkv" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.172816 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ccsdd"] Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.183483 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gzgt9"] Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.184137 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-k2ztc"] Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.192073 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mggrx\" (UniqueName: \"kubernetes.io/projected/49308413-0bd0-4aef-8d1b-451b077e6996-kube-api-access-mggrx\") pod \"control-plane-machine-set-operator-78cbb6b69f-glzrp\" (UID: \"49308413-0bd0-4aef-8d1b-451b077e6996\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-glzrp" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.198459 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wgmqx" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.202296 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g6wd9"] Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.219766 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjb46\" (UniqueName: \"kubernetes.io/projected/df2319dd-b85c-4542-bf25-8233ecda9d78-kube-api-access-zjb46\") pod \"machine-api-operator-5694c8668f-zkvs9\" (UID: \"df2319dd-b85c-4542-bf25-8233ecda9d78\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zkvs9" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.219985 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-l7m8v" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.224211 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bxsf4" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.227064 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22vp2\" (UniqueName: \"kubernetes.io/projected/4ab51603-514f-4ade-8bf2-6281d27a579f-kube-api-access-22vp2\") pod \"openshift-config-operator-7777fb866f-mfx26\" (UID: \"4ab51603-514f-4ade-8bf2-6281d27a579f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mfx26" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.228755 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nx8lb" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.234387 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-87qp9"] Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.236658 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xfrkv" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.238807 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-t87x8" Feb 28 13:20:14 crc kubenswrapper[4897]: W0228 13:20:14.244541 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26da04fb_0109_4f7f_a283_f489e9b4596f.slice/crio-314696cb485c5ac16d6a8a1282dc8c70223501920b7f10a742068b8b34333280 WatchSource:0}: Error finding container 314696cb485c5ac16d6a8a1282dc8c70223501920b7f10a742068b8b34333280: Status 404 returned error can't find the container with id 314696cb485c5ac16d6a8a1282dc8c70223501920b7f10a742068b8b34333280 Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.254810 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mfx26" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.257603 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drq49\" (UniqueName: \"kubernetes.io/projected/260888c5-6c67-47a8-9903-a25a8a1c6b7d-kube-api-access-drq49\") pod \"dns-operator-744455d44c-nsbjk\" (UID: \"260888c5-6c67-47a8-9903-a25a8a1c6b7d\") " pod="openshift-dns-operator/dns-operator-744455d44c-nsbjk" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.271967 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vk9dl"] Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.275620 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29538075-ts824" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.279148 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrr8r\" (UniqueName: \"kubernetes.io/projected/9a9c0df9-c002-43ec-bc67-dee3c0862056-kube-api-access-lrr8r\") pod \"etcd-operator-b45778765-25vlq\" (UID: \"9a9c0df9-c002-43ec-bc67-dee3c0862056\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25vlq" Feb 28 13:20:14 crc kubenswrapper[4897]: W0228 13:20:14.284012 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52ee48cc_65ac_4228_821c_f9c70d249ebf.slice/crio-415ce7623f57344bee907ab5c146ffb46e2aa16b38a4fc4a00f07af5e008ffde WatchSource:0}: Error finding container 415ce7623f57344bee907ab5c146ffb46e2aa16b38a4fc4a00f07af5e008ffde: Status 404 returned error can't find the container with id 415ce7623f57344bee907ab5c146ffb46e2aa16b38a4fc4a00f07af5e008ffde Feb 28 13:20:14 crc kubenswrapper[4897]: W0228 13:20:14.287465 4897 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e1c97b9_64a9_4e15_947f_16a7d1dd4271.slice/crio-fd16bc41ef93619d3a1f5e92dd614319acfd1491b7ccc3afe5feb978b633180b WatchSource:0}: Error finding container fd16bc41ef93619d3a1f5e92dd614319acfd1491b7ccc3afe5feb978b633180b: Status 404 returned error can't find the container with id fd16bc41ef93619d3a1f5e92dd614319acfd1491b7ccc3afe5feb978b633180b Feb 28 13:20:14 crc kubenswrapper[4897]: W0228 13:20:14.291422 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc66153dc_f2e3_4798_876c_da6826dea18c.slice/crio-eb6389b14fdf84f1bd99ad3ac077138ef3e82d915bfdd8e7c0ea6ded409712c9 WatchSource:0}: Error finding container eb6389b14fdf84f1bd99ad3ac077138ef3e82d915bfdd8e7c0ea6ded409712c9: Status 404 returned error can't find the container with id eb6389b14fdf84f1bd99ad3ac077138ef3e82d915bfdd8e7c0ea6ded409712c9 Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.292022 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-glzrp" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.307472 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2f58z" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.309790 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmqnk\" (UniqueName: \"kubernetes.io/projected/49d0a669-bb05-4da5-9e58-789b58c0797b-kube-api-access-gmqnk\") pod \"oauth-openshift-558db77b4-84hkx\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.310710 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8bqs\" (UniqueName: \"kubernetes.io/projected/ac747027-4a87-46fc-87e2-fca7e049f863-kube-api-access-r8bqs\") pod \"olm-operator-6b444d44fb-jbqv4\" (UID: \"ac747027-4a87-46fc-87e2-fca7e049f863\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jbqv4" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.319020 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538080-qcrrw"] Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.341453 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jwnlh" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.345222 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-zkvs9" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.360210 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-blf86"] Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.367761 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l5dbb"] Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.383639 4897 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.383919 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538078-hj8mj"] Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.395729 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc588d9f-c6ff-49ac-a670-f886dfc561fc-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-z4hrw\" (UID: \"bc588d9f-c6ff-49ac-a670-f886dfc561fc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-z4hrw" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.395758 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4c979449-b6ad-40e0-b3d6-584861b1d143-metrics-tls\") pod \"dns-default-6l67l\" (UID: \"4c979449-b6ad-40e0-b3d6-584861b1d143\") " pod="openshift-dns/dns-default-6l67l" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.395825 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5a017c06-8f6f-4638-ae70-2715eb539d7c-registry-certificates\") pod \"image-registry-697d97f7c8-k72ms\" 
(UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.395859 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjqjk\" (UniqueName: \"kubernetes.io/projected/4c979449-b6ad-40e0-b3d6-584861b1d143-kube-api-access-wjqjk\") pod \"dns-default-6l67l\" (UID: \"4c979449-b6ad-40e0-b3d6-584861b1d143\") " pod="openshift-dns/dns-default-6l67l" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.395894 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nwqn\" (UniqueName: \"kubernetes.io/projected/67f6ecb8-e29d-4dd2-844a-d5d347453b6e-kube-api-access-6nwqn\") pod \"service-ca-operator-777779d784-td8r5\" (UID: \"67f6ecb8-e29d-4dd2-844a-d5d347453b6e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-td8r5" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.395945 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdt97\" (UniqueName: \"kubernetes.io/projected/bc588d9f-c6ff-49ac-a670-f886dfc561fc-kube-api-access-bdt97\") pod \"openshift-apiserver-operator-796bbdcf4f-z4hrw\" (UID: \"bc588d9f-c6ff-49ac-a670-f886dfc561fc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-z4hrw" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.395961 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c979449-b6ad-40e0-b3d6-584861b1d143-config-volume\") pod \"dns-default-6l67l\" (UID: \"4c979449-b6ad-40e0-b3d6-584861b1d143\") " pod="openshift-dns/dns-default-6l67l" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.396041 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5a017c06-8f6f-4638-ae70-2715eb539d7c-bound-sa-token\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.396075 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5a017c06-8f6f-4638-ae70-2715eb539d7c-registry-tls\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.396100 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v5c7\" (UniqueName: \"kubernetes.io/projected/24e813e9-4e63-45b2-924b-7fee90b8a3ed-kube-api-access-9v5c7\") pod \"cluster-samples-operator-665b6dd947-lb9zj\" (UID: \"24e813e9-4e63-45b2-924b-7fee90b8a3ed\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lb9zj" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.396126 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/24e813e9-4e63-45b2-924b-7fee90b8a3ed-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-lb9zj\" (UID: \"24e813e9-4e63-45b2-924b-7fee90b8a3ed\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lb9zj" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.396350 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67f6ecb8-e29d-4dd2-844a-d5d347453b6e-config\") pod \"service-ca-operator-777779d784-td8r5\" (UID: \"67f6ecb8-e29d-4dd2-844a-d5d347453b6e\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-td8r5" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.396384 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5a017c06-8f6f-4638-ae70-2715eb539d7c-installation-pull-secrets\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.396467 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx7gt\" (UniqueName: \"kubernetes.io/projected/9ca4767a-66de-4596-a4ec-4929fb1bb3d5-kube-api-access-lx7gt\") pod \"multus-admission-controller-857f4d67dd-rr6bc\" (UID: \"9ca4767a-66de-4596-a4ec-4929fb1bb3d5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rr6bc" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.396503 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5a017c06-8f6f-4638-ae70-2715eb539d7c-ca-trust-extracted\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.396532 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.396620 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmsrw\" (UniqueName: \"kubernetes.io/projected/5a017c06-8f6f-4638-ae70-2715eb539d7c-kube-api-access-wmsrw\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.396638 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67f6ecb8-e29d-4dd2-844a-d5d347453b6e-serving-cert\") pod \"service-ca-operator-777779d784-td8r5\" (UID: \"67f6ecb8-e29d-4dd2-844a-d5d347453b6e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-td8r5" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.396719 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9ca4767a-66de-4596-a4ec-4929fb1bb3d5-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rr6bc\" (UID: \"9ca4767a-66de-4596-a4ec-4929fb1bb3d5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rr6bc" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.396762 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc588d9f-c6ff-49ac-a670-f886dfc561fc-config\") pod \"openshift-apiserver-operator-796bbdcf4f-z4hrw\" (UID: \"bc588d9f-c6ff-49ac-a670-f886dfc561fc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-z4hrw" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.396800 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5a017c06-8f6f-4638-ae70-2715eb539d7c-trusted-ca\") pod \"image-registry-697d97f7c8-k72ms\" (UID: 
\"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:14 crc kubenswrapper[4897]: E0228 13:20:14.402280 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:14.902257954 +0000 UTC m=+229.144578701 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.431446 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6st7l"] Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.446890 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.497967 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.498548 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx7gt\" (UniqueName: \"kubernetes.io/projected/9ca4767a-66de-4596-a4ec-4929fb1bb3d5-kube-api-access-lx7gt\") pod \"multus-admission-controller-857f4d67dd-rr6bc\" (UID: \"9ca4767a-66de-4596-a4ec-4929fb1bb3d5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rr6bc" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.498596 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5a017c06-8f6f-4638-ae70-2715eb539d7c-ca-trust-extracted\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.498649 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/42271fb7-85dd-4e09-b2b9-f78ab6cfcdcf-plugins-dir\") pod \"csi-hostpathplugin-q9cdm\" (UID: \"42271fb7-85dd-4e09-b2b9-f78ab6cfcdcf\") " pod="hostpath-provisioner/csi-hostpathplugin-q9cdm" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.498680 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/42271fb7-85dd-4e09-b2b9-f78ab6cfcdcf-mountpoint-dir\") pod \"csi-hostpathplugin-q9cdm\" (UID: \"42271fb7-85dd-4e09-b2b9-f78ab6cfcdcf\") " pod="hostpath-provisioner/csi-hostpathplugin-q9cdm" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.498724 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmsrw\" (UniqueName: \"kubernetes.io/projected/5a017c06-8f6f-4638-ae70-2715eb539d7c-kube-api-access-wmsrw\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.498749 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67f6ecb8-e29d-4dd2-844a-d5d347453b6e-serving-cert\") pod \"service-ca-operator-777779d784-td8r5\" (UID: \"67f6ecb8-e29d-4dd2-844a-d5d347453b6e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-td8r5" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.498773 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/62082696-f7f4-4928-8dd0-07604c22e511-certs\") pod \"machine-config-server-kkdfw\" (UID: \"62082696-f7f4-4928-8dd0-07604c22e511\") " pod="openshift-machine-config-operator/machine-config-server-kkdfw" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.498797 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9ca4767a-66de-4596-a4ec-4929fb1bb3d5-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rr6bc\" (UID: \"9ca4767a-66de-4596-a4ec-4929fb1bb3d5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rr6bc" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.498849 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc588d9f-c6ff-49ac-a670-f886dfc561fc-config\") pod \"openshift-apiserver-operator-796bbdcf4f-z4hrw\" (UID: \"bc588d9f-c6ff-49ac-a670-f886dfc561fc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-z4hrw" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.498882 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twth5\" (UniqueName: \"kubernetes.io/projected/42271fb7-85dd-4e09-b2b9-f78ab6cfcdcf-kube-api-access-twth5\") pod \"csi-hostpathplugin-q9cdm\" (UID: \"42271fb7-85dd-4e09-b2b9-f78ab6cfcdcf\") " pod="hostpath-provisioner/csi-hostpathplugin-q9cdm" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.498905 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5a017c06-8f6f-4638-ae70-2715eb539d7c-trusted-ca\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.498975 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9crfm\" (UniqueName: \"kubernetes.io/projected/62082696-f7f4-4928-8dd0-07604c22e511-kube-api-access-9crfm\") pod \"machine-config-server-kkdfw\" (UID: \"62082696-f7f4-4928-8dd0-07604c22e511\") " pod="openshift-machine-config-operator/machine-config-server-kkdfw" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.498998 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsckh\" (UniqueName: \"kubernetes.io/projected/1f5773fa-4b37-4a12-9102-ea3f54a2dd78-kube-api-access-dsckh\") pod \"ingress-canary-5xn24\" (UID: \"1f5773fa-4b37-4a12-9102-ea3f54a2dd78\") " 
pod="openshift-ingress-canary/ingress-canary-5xn24" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.499240 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/42271fb7-85dd-4e09-b2b9-f78ab6cfcdcf-csi-data-dir\") pod \"csi-hostpathplugin-q9cdm\" (UID: \"42271fb7-85dd-4e09-b2b9-f78ab6cfcdcf\") " pod="hostpath-provisioner/csi-hostpathplugin-q9cdm" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.499263 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc588d9f-c6ff-49ac-a670-f886dfc561fc-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-z4hrw\" (UID: \"bc588d9f-c6ff-49ac-a670-f886dfc561fc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-z4hrw" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.499300 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4c979449-b6ad-40e0-b3d6-584861b1d143-metrics-tls\") pod \"dns-default-6l67l\" (UID: \"4c979449-b6ad-40e0-b3d6-584861b1d143\") " pod="openshift-dns/dns-default-6l67l" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.499340 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5a017c06-8f6f-4638-ae70-2715eb539d7c-registry-certificates\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.499368 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/42271fb7-85dd-4e09-b2b9-f78ab6cfcdcf-socket-dir\") pod \"csi-hostpathplugin-q9cdm\" (UID: 
\"42271fb7-85dd-4e09-b2b9-f78ab6cfcdcf\") " pod="hostpath-provisioner/csi-hostpathplugin-q9cdm" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.499391 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjqjk\" (UniqueName: \"kubernetes.io/projected/4c979449-b6ad-40e0-b3d6-584861b1d143-kube-api-access-wjqjk\") pod \"dns-default-6l67l\" (UID: \"4c979449-b6ad-40e0-b3d6-584861b1d143\") " pod="openshift-dns/dns-default-6l67l" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.499417 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nwqn\" (UniqueName: \"kubernetes.io/projected/67f6ecb8-e29d-4dd2-844a-d5d347453b6e-kube-api-access-6nwqn\") pod \"service-ca-operator-777779d784-td8r5\" (UID: \"67f6ecb8-e29d-4dd2-844a-d5d347453b6e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-td8r5" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.499440 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c979449-b6ad-40e0-b3d6-584861b1d143-config-volume\") pod \"dns-default-6l67l\" (UID: \"4c979449-b6ad-40e0-b3d6-584861b1d143\") " pod="openshift-dns/dns-default-6l67l" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.499465 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdt97\" (UniqueName: \"kubernetes.io/projected/bc588d9f-c6ff-49ac-a670-f886dfc561fc-kube-api-access-bdt97\") pod \"openshift-apiserver-operator-796bbdcf4f-z4hrw\" (UID: \"bc588d9f-c6ff-49ac-a670-f886dfc561fc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-z4hrw" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.499490 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f5773fa-4b37-4a12-9102-ea3f54a2dd78-cert\") 
pod \"ingress-canary-5xn24\" (UID: \"1f5773fa-4b37-4a12-9102-ea3f54a2dd78\") " pod="openshift-ingress-canary/ingress-canary-5xn24" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.499514 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5a017c06-8f6f-4638-ae70-2715eb539d7c-bound-sa-token\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.499539 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5a017c06-8f6f-4638-ae70-2715eb539d7c-registry-tls\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.499561 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/24e813e9-4e63-45b2-924b-7fee90b8a3ed-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-lb9zj\" (UID: \"24e813e9-4e63-45b2-924b-7fee90b8a3ed\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lb9zj" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.499583 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v5c7\" (UniqueName: \"kubernetes.io/projected/24e813e9-4e63-45b2-924b-7fee90b8a3ed-kube-api-access-9v5c7\") pod \"cluster-samples-operator-665b6dd947-lb9zj\" (UID: \"24e813e9-4e63-45b2-924b-7fee90b8a3ed\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lb9zj" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.499652 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/42271fb7-85dd-4e09-b2b9-f78ab6cfcdcf-registration-dir\") pod \"csi-hostpathplugin-q9cdm\" (UID: \"42271fb7-85dd-4e09-b2b9-f78ab6cfcdcf\") " pod="hostpath-provisioner/csi-hostpathplugin-q9cdm" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.499701 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67f6ecb8-e29d-4dd2-844a-d5d347453b6e-config\") pod \"service-ca-operator-777779d784-td8r5\" (UID: \"67f6ecb8-e29d-4dd2-844a-d5d347453b6e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-td8r5" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.499744 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5a017c06-8f6f-4638-ae70-2715eb539d7c-installation-pull-secrets\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.499778 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/62082696-f7f4-4928-8dd0-07604c22e511-node-bootstrap-token\") pod \"machine-config-server-kkdfw\" (UID: \"62082696-f7f4-4928-8dd0-07604c22e511\") " pod="openshift-machine-config-operator/machine-config-server-kkdfw" Feb 28 13:20:14 crc kubenswrapper[4897]: E0228 13:20:14.503875 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:15.003855433 +0000 UTC m=+229.246176090 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.504862 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc588d9f-c6ff-49ac-a670-f886dfc561fc-config\") pod \"openshift-apiserver-operator-796bbdcf4f-z4hrw\" (UID: \"bc588d9f-c6ff-49ac-a670-f886dfc561fc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-z4hrw" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.505915 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/24e813e9-4e63-45b2-924b-7fee90b8a3ed-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-lb9zj\" (UID: \"24e813e9-4e63-45b2-924b-7fee90b8a3ed\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lb9zj" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.506817 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67f6ecb8-e29d-4dd2-844a-d5d347453b6e-config\") pod \"service-ca-operator-777779d784-td8r5\" (UID: \"67f6ecb8-e29d-4dd2-844a-d5d347453b6e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-td8r5" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.507395 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jbqv4" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.511964 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc588d9f-c6ff-49ac-a670-f886dfc561fc-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-z4hrw\" (UID: \"bc588d9f-c6ff-49ac-a670-f886dfc561fc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-z4hrw" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.512557 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c979449-b6ad-40e0-b3d6-584861b1d143-config-volume\") pod \"dns-default-6l67l\" (UID: \"4c979449-b6ad-40e0-b3d6-584861b1d143\") " pod="openshift-dns/dns-default-6l67l" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.517867 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4c979449-b6ad-40e0-b3d6-584861b1d143-metrics-tls\") pod \"dns-default-6l67l\" (UID: \"4c979449-b6ad-40e0-b3d6-584861b1d143\") " pod="openshift-dns/dns-default-6l67l" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.518510 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wgmqx"] Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.520724 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9ca4767a-66de-4596-a4ec-4929fb1bb3d5-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rr6bc\" (UID: \"9ca4767a-66de-4596-a4ec-4929fb1bb3d5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rr6bc" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.528714 4897 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67f6ecb8-e29d-4dd2-844a-d5d347453b6e-serving-cert\") pod \"service-ca-operator-777779d784-td8r5\" (UID: \"67f6ecb8-e29d-4dd2-844a-d5d347453b6e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-td8r5" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.528925 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5a017c06-8f6f-4638-ae70-2715eb539d7c-ca-trust-extracted\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.530134 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5a017c06-8f6f-4638-ae70-2715eb539d7c-installation-pull-secrets\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.530425 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5a017c06-8f6f-4638-ae70-2715eb539d7c-registry-tls\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.531230 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5a017c06-8f6f-4638-ae70-2715eb539d7c-registry-certificates\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.531437 4897 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5a017c06-8f6f-4638-ae70-2715eb539d7c-trusted-ca\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.549688 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-nsbjk" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.554081 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx7gt\" (UniqueName: \"kubernetes.io/projected/9ca4767a-66de-4596-a4ec-4929fb1bb3d5-kube-api-access-lx7gt\") pod \"multus-admission-controller-857f4d67dd-rr6bc\" (UID: \"9ca4767a-66de-4596-a4ec-4929fb1bb3d5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rr6bc" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.568884 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-25vlq" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.576076 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmsrw\" (UniqueName: \"kubernetes.io/projected/5a017c06-8f6f-4638-ae70-2715eb539d7c-kube-api-access-wmsrw\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.590759 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjqjk\" (UniqueName: \"kubernetes.io/projected/4c979449-b6ad-40e0-b3d6-584861b1d143-kube-api-access-wjqjk\") pod \"dns-default-6l67l\" (UID: \"4c979449-b6ad-40e0-b3d6-584861b1d143\") " pod="openshift-dns/dns-default-6l67l" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.600506 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.600620 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/42271fb7-85dd-4e09-b2b9-f78ab6cfcdcf-plugins-dir\") pod \"csi-hostpathplugin-q9cdm\" (UID: \"42271fb7-85dd-4e09-b2b9-f78ab6cfcdcf\") " pod="hostpath-provisioner/csi-hostpathplugin-q9cdm" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.600700 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/42271fb7-85dd-4e09-b2b9-f78ab6cfcdcf-mountpoint-dir\") pod 
\"csi-hostpathplugin-q9cdm\" (UID: \"42271fb7-85dd-4e09-b2b9-f78ab6cfcdcf\") " pod="hostpath-provisioner/csi-hostpathplugin-q9cdm" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.600780 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/62082696-f7f4-4928-8dd0-07604c22e511-certs\") pod \"machine-config-server-kkdfw\" (UID: \"62082696-f7f4-4928-8dd0-07604c22e511\") " pod="openshift-machine-config-operator/machine-config-server-kkdfw" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.600856 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twth5\" (UniqueName: \"kubernetes.io/projected/42271fb7-85dd-4e09-b2b9-f78ab6cfcdcf-kube-api-access-twth5\") pod \"csi-hostpathplugin-q9cdm\" (UID: \"42271fb7-85dd-4e09-b2b9-f78ab6cfcdcf\") " pod="hostpath-provisioner/csi-hostpathplugin-q9cdm" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.600926 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9crfm\" (UniqueName: \"kubernetes.io/projected/62082696-f7f4-4928-8dd0-07604c22e511-kube-api-access-9crfm\") pod \"machine-config-server-kkdfw\" (UID: \"62082696-f7f4-4928-8dd0-07604c22e511\") " pod="openshift-machine-config-operator/machine-config-server-kkdfw" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.601178 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsckh\" (UniqueName: \"kubernetes.io/projected/1f5773fa-4b37-4a12-9102-ea3f54a2dd78-kube-api-access-dsckh\") pod \"ingress-canary-5xn24\" (UID: \"1f5773fa-4b37-4a12-9102-ea3f54a2dd78\") " pod="openshift-ingress-canary/ingress-canary-5xn24" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.601207 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: 
\"kubernetes.io/host-path/42271fb7-85dd-4e09-b2b9-f78ab6cfcdcf-csi-data-dir\") pod \"csi-hostpathplugin-q9cdm\" (UID: \"42271fb7-85dd-4e09-b2b9-f78ab6cfcdcf\") " pod="hostpath-provisioner/csi-hostpathplugin-q9cdm" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.601227 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/42271fb7-85dd-4e09-b2b9-f78ab6cfcdcf-socket-dir\") pod \"csi-hostpathplugin-q9cdm\" (UID: \"42271fb7-85dd-4e09-b2b9-f78ab6cfcdcf\") " pod="hostpath-provisioner/csi-hostpathplugin-q9cdm" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.601261 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f5773fa-4b37-4a12-9102-ea3f54a2dd78-cert\") pod \"ingress-canary-5xn24\" (UID: \"1f5773fa-4b37-4a12-9102-ea3f54a2dd78\") " pod="openshift-ingress-canary/ingress-canary-5xn24" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.601293 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/42271fb7-85dd-4e09-b2b9-f78ab6cfcdcf-registration-dir\") pod \"csi-hostpathplugin-q9cdm\" (UID: \"42271fb7-85dd-4e09-b2b9-f78ab6cfcdcf\") " pod="hostpath-provisioner/csi-hostpathplugin-q9cdm" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.601352 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/62082696-f7f4-4928-8dd0-07604c22e511-node-bootstrap-token\") pod \"machine-config-server-kkdfw\" (UID: \"62082696-f7f4-4928-8dd0-07604c22e511\") " pod="openshift-machine-config-operator/machine-config-server-kkdfw" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.602754 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/42271fb7-85dd-4e09-b2b9-f78ab6cfcdcf-mountpoint-dir\") pod \"csi-hostpathplugin-q9cdm\" (UID: \"42271fb7-85dd-4e09-b2b9-f78ab6cfcdcf\") " pod="hostpath-provisioner/csi-hostpathplugin-q9cdm" Feb 28 13:20:14 crc kubenswrapper[4897]: E0228 13:20:14.602926 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:15.102913938 +0000 UTC m=+229.345234595 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.602970 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/42271fb7-85dd-4e09-b2b9-f78ab6cfcdcf-plugins-dir\") pod \"csi-hostpathplugin-q9cdm\" (UID: \"42271fb7-85dd-4e09-b2b9-f78ab6cfcdcf\") " pod="hostpath-provisioner/csi-hostpathplugin-q9cdm" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.603008 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/42271fb7-85dd-4e09-b2b9-f78ab6cfcdcf-socket-dir\") pod \"csi-hostpathplugin-q9cdm\" (UID: \"42271fb7-85dd-4e09-b2b9-f78ab6cfcdcf\") " pod="hostpath-provisioner/csi-hostpathplugin-q9cdm" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.603114 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: 
\"kubernetes.io/host-path/42271fb7-85dd-4e09-b2b9-f78ab6cfcdcf-csi-data-dir\") pod \"csi-hostpathplugin-q9cdm\" (UID: \"42271fb7-85dd-4e09-b2b9-f78ab6cfcdcf\") " pod="hostpath-provisioner/csi-hostpathplugin-q9cdm" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.604561 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/42271fb7-85dd-4e09-b2b9-f78ab6cfcdcf-registration-dir\") pod \"csi-hostpathplugin-q9cdm\" (UID: \"42271fb7-85dd-4e09-b2b9-f78ab6cfcdcf\") " pod="hostpath-provisioner/csi-hostpathplugin-q9cdm" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.605269 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/62082696-f7f4-4928-8dd0-07604c22e511-node-bootstrap-token\") pod \"machine-config-server-kkdfw\" (UID: \"62082696-f7f4-4928-8dd0-07604c22e511\") " pod="openshift-machine-config-operator/machine-config-server-kkdfw" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.610127 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/62082696-f7f4-4928-8dd0-07604c22e511-certs\") pod \"machine-config-server-kkdfw\" (UID: \"62082696-f7f4-4928-8dd0-07604c22e511\") " pod="openshift-machine-config-operator/machine-config-server-kkdfw" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.613478 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f5773fa-4b37-4a12-9102-ea3f54a2dd78-cert\") pod \"ingress-canary-5xn24\" (UID: \"1f5773fa-4b37-4a12-9102-ea3f54a2dd78\") " pod="openshift-ingress-canary/ingress-canary-5xn24" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.613815 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdt97\" (UniqueName: 
\"kubernetes.io/projected/bc588d9f-c6ff-49ac-a670-f886dfc561fc-kube-api-access-bdt97\") pod \"openshift-apiserver-operator-796bbdcf4f-z4hrw\" (UID: \"bc588d9f-c6ff-49ac-a670-f886dfc561fc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-z4hrw" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.635358 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nwqn\" (UniqueName: \"kubernetes.io/projected/67f6ecb8-e29d-4dd2-844a-d5d347453b6e-kube-api-access-6nwqn\") pod \"service-ca-operator-777779d784-td8r5\" (UID: \"67f6ecb8-e29d-4dd2-844a-d5d347453b6e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-td8r5" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.647449 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2f58z"] Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.648439 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vk9dl" event={"ID":"4ba075a8-61d1-4147-80ea-03906930ff87","Type":"ContainerStarted","Data":"2146847c5e6a812e2b15f01ffcdb1b700ee5d95c5e5a2233130e3ec4ed79f8c3"} Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.649807 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v5c7\" (UniqueName: \"kubernetes.io/projected/24e813e9-4e63-45b2-924b-7fee90b8a3ed-kube-api-access-9v5c7\") pod \"cluster-samples-operator-665b6dd947-lb9zj\" (UID: \"24e813e9-4e63-45b2-924b-7fee90b8a3ed\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lb9zj" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.650441 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rd9tl" 
event={"ID":"3423cf07-c57b-41f3-82da-f497649699db","Type":"ContainerStarted","Data":"e7cd43564f47d2ab118b814b2ae07964f30495d30981a921b9ec8508920d7077"} Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.650472 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rd9tl" event={"ID":"3423cf07-c57b-41f3-82da-f497649699db","Type":"ContainerStarted","Data":"351f8404c82b5d60438845f8e04653de30fbb6cd608363c4eb28eae7d8a6807c"} Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.651849 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-87qp9" event={"ID":"a81ddd0f-39bc-4645-94b0-38869e4afba3","Type":"ContainerStarted","Data":"2e99d491b447c622f5f244e196b9912d12cf82836d63b8ea4429c33bea95eb6a"} Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.652570 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-rr6bc" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.652931 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l5dbb" event={"ID":"2255cdfc-6996-4567-ba4d-b1b609f1264c","Type":"ContainerStarted","Data":"8443e9d99d5d3a515ba47c57e1dc2d1c7e1af8de101d7ad53449a02a3ad469b5"} Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.654492 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fv2rz" event={"ID":"4f2510cb-e89f-49a0-b5cd-aca1a5c51178","Type":"ContainerStarted","Data":"63c73c520dfe5b26dd0803e5ad59bb9da23173104168faa9ad232d25d5d44fc1"} Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.656203 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-fq58q" 
event={"ID":"34853f18-08de-4bb6-8fc1-9ae1d51b314a","Type":"ContainerStarted","Data":"8b5f9e96517f569b7f77a59c9f4293122de345629b95d191a645c95d609eddb0"} Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.656231 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-fq58q" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.656241 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-fq58q" event={"ID":"34853f18-08de-4bb6-8fc1-9ae1d51b314a","Type":"ContainerStarted","Data":"61ecc68c215f4e77b33f80980b5c98630a2071cd073dd829faa63ac790532d96"} Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.657716 4897 patch_prober.go:28] interesting pod/console-operator-58897d9998-fq58q container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/readyz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.657752 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-fq58q" podUID="34853f18-08de-4bb6-8fc1-9ae1d51b314a" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/readyz\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.658810 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2ztc" event={"ID":"26da04fb-0109-4f7f-a283-f489e9b4596f","Type":"ContainerStarted","Data":"314696cb485c5ac16d6a8a1282dc8c70223501920b7f10a742068b8b34333280"} Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.660758 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-95h9j" 
event={"ID":"61f10600-21dd-4043-af69-aa0fdfd246f7","Type":"ContainerStarted","Data":"17e607d6ad4bf69e0eae2417202bf44971b0e68842c9b95a5b01f8b90d0c98d3"} Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.662089 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5a017c06-8f6f-4638-ae70-2715eb539d7c-bound-sa-token\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.662730 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6st7l" event={"ID":"3d1d3880-3411-45f5-8835-a4db59c38cfe","Type":"ContainerStarted","Data":"68631efc047af02b8447e6a5862f992193df936906b63bdf6c7e46d75845a4d6"} Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.664166 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fb8mb" event={"ID":"52ee48cc-65ac-4228-821c-f9c70d249ebf","Type":"ContainerStarted","Data":"415ce7623f57344bee907ab5c146ffb46e2aa16b38a4fc4a00f07af5e008ffde"} Feb 28 13:20:14 crc kubenswrapper[4897]: W0228 13:20:14.664245 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce549223_07dd_40b8_b988_7a49ed1a94e5.slice/crio-5ed49396a5bdd01515faf7bd88931e93a5986e686de9c796cc4fe2827f189a4a WatchSource:0}: Error finding container 5ed49396a5bdd01515faf7bd88931e93a5986e686de9c796cc4fe2827f189a4a: Status 404 returned error can't find the container with id 5ed49396a5bdd01515faf7bd88931e93a5986e686de9c796cc4fe2827f189a4a Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.664449 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-z4hrw" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.665137 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7pzqp" event={"ID":"536efe5c-a55e-48a2-920e-cdb34a2bce57","Type":"ContainerStarted","Data":"82946f2c5b5ca6dee06135bca8bebf52c6256806fb7a54c286ea1643cf4e396d"} Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.667856 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538078-hj8mj" event={"ID":"79743a51-c0b2-45b2-99d3-385e0b2f2c6f","Type":"ContainerStarted","Data":"6e4d4d4cf90394f6789f2122a9371c916a9dfa97bae501d213646b0008c77525"} Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.670019 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-gzgt9" event={"ID":"6e1c97b9-64a9-4e15-947f-16a7d1dd4271","Type":"ContainerStarted","Data":"fd16bc41ef93619d3a1f5e92dd614319acfd1491b7ccc3afe5feb978b633180b"} Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.671508 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538080-qcrrw" event={"ID":"a52c7385-4178-4038-93b0-5cd758958e80","Type":"ContainerStarted","Data":"b770bf9e791e0fcc0cd4ea675f2af48d6ce5428666c4097a80bb6978fcbb8065"} Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.673014 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-blf86" event={"ID":"32420c77-c3bf-489a-b622-a912ea4c983c","Type":"ContainerStarted","Data":"1b30a36fbbdc1772fafbc946190e2a47214808f0857ce8d3b8c372e2c217bf8e"} Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.673848 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5fwp4" 
event={"ID":"342631a0-9c4d-4e4f-9743-4d13ea740a55","Type":"ContainerStarted","Data":"34b8b2a68c11a0e46a7c0be28c33e4ef016d9384e91e4d274cfd472af615634a"} Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.673870 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5fwp4" event={"ID":"342631a0-9c4d-4e4f-9743-4d13ea740a55","Type":"ContainerStarted","Data":"d5dd7802143429dea9227bb02bdca8fe3ed0ee9ac2a52d050c283ec00df5976b"} Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.674101 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lb9zj" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.674674 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ccsdd" event={"ID":"6a6fd805-dce5-4ee6-82e0-9fce53deed7f","Type":"ContainerStarted","Data":"46be115bcfa586b294dc274d28a6997e7df5d0180f0b492e43c3ab80c81d3387"} Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.675236 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g6wd9" event={"ID":"c66153dc-f2e3-4798-876c-da6826dea18c","Type":"ContainerStarted","Data":"eb6389b14fdf84f1bd99ad3ac077138ef3e82d915bfdd8e7c0ea6ded409712c9"} Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.686900 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-td8r5" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.702322 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:14 crc kubenswrapper[4897]: E0228 13:20:14.702489 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:15.20247243 +0000 UTC m=+229.444793077 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.702887 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.703151 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-6l67l" Feb 28 13:20:14 crc kubenswrapper[4897]: E0228 13:20:14.703348 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:15.203335308 +0000 UTC m=+229.445655965 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.711950 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9crfm\" (UniqueName: \"kubernetes.io/projected/62082696-f7f4-4928-8dd0-07604c22e511-kube-api-access-9crfm\") pod \"machine-config-server-kkdfw\" (UID: \"62082696-f7f4-4928-8dd0-07604c22e511\") " pod="openshift-machine-config-operator/machine-config-server-kkdfw" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.729460 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsckh\" (UniqueName: \"kubernetes.io/projected/1f5773fa-4b37-4a12-9102-ea3f54a2dd78-kube-api-access-dsckh\") pod \"ingress-canary-5xn24\" (UID: \"1f5773fa-4b37-4a12-9102-ea3f54a2dd78\") " pod="openshift-ingress-canary/ingress-canary-5xn24" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.730847 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kkdfw" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.753857 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twth5\" (UniqueName: \"kubernetes.io/projected/42271fb7-85dd-4e09-b2b9-f78ab6cfcdcf-kube-api-access-twth5\") pod \"csi-hostpathplugin-q9cdm\" (UID: \"42271fb7-85dd-4e09-b2b9-f78ab6cfcdcf\") " pod="hostpath-provisioner/csi-hostpathplugin-q9cdm" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.810737 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:14 crc kubenswrapper[4897]: E0228 13:20:14.818889 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:15.318858922 +0000 UTC m=+229.561179589 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.880454 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-5fwp4" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.916342 4897 patch_prober.go:28] interesting pod/router-default-5444994796-5fwp4 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.916387 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5fwp4" podUID="342631a0-9c4d-4e4f-9743-4d13ea740a55" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.917879 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:14 crc kubenswrapper[4897]: E0228 13:20:14.918283 4897 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:15.418267459 +0000 UTC m=+229.660588116 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.982114 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7wzmt"] Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.984415 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-nx8lb"] Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.988593 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-mfx26"] Feb 28 13:20:14 crc kubenswrapper[4897]: I0228 13:20:14.991913 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-l7m8v"] Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.015729 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5xn24" Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.018617 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:15 crc kubenswrapper[4897]: E0228 13:20:15.018836 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:15.518808933 +0000 UTC m=+229.761129590 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.019095 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:15 crc kubenswrapper[4897]: E0228 13:20:15.019411 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-02-28 13:20:15.519398102 +0000 UTC m=+229.761718759 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.043646 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-q9cdm" Feb 28 13:20:15 crc kubenswrapper[4897]: W0228 13:20:15.092153 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ab51603_514f_4ade_8bf2_6281d27a579f.slice/crio-f2a9c326c624d642fa668d0102d9647eaec7ed3cd2e2b53d837b9ca80f8ea8f0 WatchSource:0}: Error finding container f2a9c326c624d642fa668d0102d9647eaec7ed3cd2e2b53d837b9ca80f8ea8f0: Status 404 returned error can't find the container with id f2a9c326c624d642fa668d0102d9647eaec7ed3cd2e2b53d837b9ca80f8ea8f0 Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.097026 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29538075-ts824"] Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.117723 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-t87x8"] Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.120088 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:15 crc kubenswrapper[4897]: E0228 13:20:15.120460 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:15.620444632 +0000 UTC m=+229.862765279 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:15 crc kubenswrapper[4897]: W0228 13:20:15.169518 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3dcfd2e_5074_4d1f_88b3_4aa34c63c3d1.slice/crio-a49cee6f63e1480b2ca684d6771170e899ad128127082bca41eafabb6c247d64 WatchSource:0}: Error finding container a49cee6f63e1480b2ca684d6771170e899ad128127082bca41eafabb6c247d64: Status 404 returned error can't find the container with id a49cee6f63e1480b2ca684d6771170e899ad128127082bca41eafabb6c247d64 Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.221097 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:15 crc kubenswrapper[4897]: E0228 13:20:15.221409 4897 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:15.721396949 +0000 UTC m=+229.963717606 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.267679 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bxsf4"] Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.278260 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-zkvs9"] Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.279683 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-glzrp"] Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.295723 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-84hkx"] Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.322175 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:15 crc kubenswrapper[4897]: E0228 13:20:15.322531 4897 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:15.822516092 +0000 UTC m=+230.064836749 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.341630 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jwnlh"] Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.343436 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jbqv4"] Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.388253 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-25vlq"] Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.392288 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-xfrkv"] Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.423911 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:15 crc kubenswrapper[4897]: E0228 13:20:15.424427 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:15.92441471 +0000 UTC m=+230.166735367 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:15 crc kubenswrapper[4897]: W0228 13:20:15.430533 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae9a4771_065d_4f75_8d15_0ea8525cbaf4.slice/crio-ab34c38b156bfa5378119d719fe5bbf7996a0263d82cae5a6dae2099fb8cb15d WatchSource:0}: Error finding container ab34c38b156bfa5378119d719fe5bbf7996a0263d82cae5a6dae2099fb8cb15d: Status 404 returned error can't find the container with id ab34c38b156bfa5378119d719fe5bbf7996a0263d82cae5a6dae2099fb8cb15d Feb 28 13:20:15 crc kubenswrapper[4897]: W0228 13:20:15.441717 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac747027_4a87_46fc_87e2_fca7e049f863.slice/crio-c688f6ee654b050e761f26ba512a4c108242920d0d9019c64cc6a2191698e06c WatchSource:0}: Error finding container c688f6ee654b050e761f26ba512a4c108242920d0d9019c64cc6a2191698e06c: Status 404 returned error can't find the container with id c688f6ee654b050e761f26ba512a4c108242920d0d9019c64cc6a2191698e06c Feb 28 13:20:15 crc kubenswrapper[4897]: 
I0228 13:20:15.479915 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-z4hrw"] Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.491733 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-5fwp4" podStartSLOduration=173.491715025 podStartE2EDuration="2m53.491715025s" podCreationTimestamp="2026-02-28 13:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:15.489516593 +0000 UTC m=+229.731837250" watchObservedRunningTime="2026-02-28 13:20:15.491715025 +0000 UTC m=+229.734035682" Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.525221 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:15 crc kubenswrapper[4897]: E0228 13:20:15.525381 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:16.025356797 +0000 UTC m=+230.267677454 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.525538 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:15 crc kubenswrapper[4897]: E0228 13:20:15.525867 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:16.025859764 +0000 UTC m=+230.268180421 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:15 crc kubenswrapper[4897]: E0228 13:20:15.594471 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 28 13:20:15 crc kubenswrapper[4897]: E0228 13:20:15.594610 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 13:20:15 crc kubenswrapper[4897]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 28 13:20:15 crc kubenswrapper[4897]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n9xph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29538080-qcrrw_openshift-infra(a52c7385-4178-4038-93b0-5cd758958e80): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 28 13:20:15 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 13:20:15 crc kubenswrapper[4897]: E0228 13:20:15.595771 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29538080-qcrrw" podUID="a52c7385-4178-4038-93b0-5cd758958e80" Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.598357 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-td8r5"] Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.626690 4897 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:15 crc kubenswrapper[4897]: E0228 13:20:15.630525 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:16.130474861 +0000 UTC m=+230.372795518 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.645763 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rr6bc"] Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.646471 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-6l67l"] Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.681976 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-rd9tl" podStartSLOduration=173.681959577 podStartE2EDuration="2m53.681959577s" podCreationTimestamp="2026-02-28 13:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:15.681274325 +0000 UTC 
m=+229.923594982" watchObservedRunningTime="2026-02-28 13:20:15.681959577 +0000 UTC m=+229.924280234" Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.683561 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mfx26" event={"ID":"4ab51603-514f-4ade-8bf2-6281d27a579f","Type":"ContainerStarted","Data":"f2a9c326c624d642fa668d0102d9647eaec7ed3cd2e2b53d837b9ca80f8ea8f0"} Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.686034 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-t87x8" event={"ID":"9a147d2f-de25-4ba1-8858-392c56b60a20","Type":"ContainerStarted","Data":"e366d42b8ca42521d7fb415ca40f17cb6eabdbc31a47c97c84cb0ff1b796511f"} Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.687341 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jwnlh" event={"ID":"ae9a4771-065d-4f75-8d15-0ea8525cbaf4","Type":"ContainerStarted","Data":"ab34c38b156bfa5378119d719fe5bbf7996a0263d82cae5a6dae2099fb8cb15d"} Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.688521 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nx8lb" event={"ID":"f6a4bf66-c081-492e-aa28-f9245e7ffe3c","Type":"ContainerStarted","Data":"2769ff808f965b21d1d07cf58354083d7b961a6f0d1ce0999421143ac8f94ba8"} Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.690252 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ccsdd" event={"ID":"6a6fd805-dce5-4ee6-82e0-9fce53deed7f","Type":"ContainerStarted","Data":"274caa1f32d274874fbe4f0489e52c56f7a803c238c58d1d50fd44b520c4811c"} Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.691092 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bxsf4" event={"ID":"13ffe813-5e11-41e0-9426-8771f8b2ce0b","Type":"ContainerStarted","Data":"5003329dadc1a64638ae2c609ac17f49826c89ce4ac57c9ed04b555ef8b2c4cb"} Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.692738 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-87qp9" event={"ID":"a81ddd0f-39bc-4645-94b0-38869e4afba3","Type":"ContainerStarted","Data":"5799204db59047eee76ff83c29f460471d8e838239879ce630e335b6a272f09b"} Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.693476 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-7wzmt" event={"ID":"c97c55d8-5260-43bc-aaf7-e217a748b83f","Type":"ContainerStarted","Data":"80ff74dabab1f9a6bfe634e0f204c5f87050bca3f66b2f42ce7879fca2213e36"} Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.694095 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-25vlq" event={"ID":"9a9c0df9-c002-43ec-bc67-dee3c0862056","Type":"ContainerStarted","Data":"e0069eb6776a21f8bb5bf896581094f36dbd3627394a3c8399546d7b35611b9e"} Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.694812 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29538075-ts824" event={"ID":"b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1","Type":"ContainerStarted","Data":"a49cee6f63e1480b2ca684d6771170e899ad128127082bca41eafabb6c247d64"} Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.695493 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-glzrp" event={"ID":"49308413-0bd0-4aef-8d1b-451b077e6996","Type":"ContainerStarted","Data":"d4ba378ae16af301cacce63aad65a552f42d38f5ab757abf30f2edafcb3b1d09"} Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 
13:20:15.697871 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kkdfw" event={"ID":"62082696-f7f4-4928-8dd0-07604c22e511","Type":"ContainerStarted","Data":"d7a0772c566cd46c9454112da76d1f45fc7d263d25fd7f6838f58ebe9092e335"} Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.701925 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lb9zj"] Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.702212 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7pzqp" event={"ID":"536efe5c-a55e-48a2-920e-cdb34a2bce57","Type":"ContainerStarted","Data":"2b6f1fd18858e93c918bab7c54baaa3af7b34bc96aac7fd5a959e2190fd5f04f"} Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.704199 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fv2rz" event={"ID":"4f2510cb-e89f-49a0-b5cd-aca1a5c51178","Type":"ContainerStarted","Data":"ef77947c586a03eb39ba79af0708f5cb25f7a00d47d0835c11bc0a0e5d96cdec"} Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.705147 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-l7m8v" event={"ID":"53e254f6-444a-4fd6-8bda-5af18b9d347c","Type":"ContainerStarted","Data":"1e149e6dcf11f9d15f52f1867523fb1bd5c6768b35e72c39614d2e3a86b1d1e6"} Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.705874 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wgmqx" event={"ID":"ce549223-07dd-40b8-b988-7a49ed1a94e5","Type":"ContainerStarted","Data":"5ed49396a5bdd01515faf7bd88931e93a5986e686de9c796cc4fe2827f189a4a"} Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.706854 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xfrkv" event={"ID":"5a185ef3-d57d-4925-b2d6-6de53cf0d0f2","Type":"ContainerStarted","Data":"da9d307221219ec5eed39d65eee49215cabfbdba7bca0c44882e555605420c64"} Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.707893 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" event={"ID":"49d0a669-bb05-4da5-9e58-789b58c0797b","Type":"ContainerStarted","Data":"f54f64c69daf643cb2521a278e762306e8c13eef856758a351d3451b654af0a8"} Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.709566 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-95h9j" event={"ID":"61f10600-21dd-4043-af69-aa0fdfd246f7","Type":"ContainerStarted","Data":"db1122182a56cae33ce37ab1ea5d443e22edae836c247c416ae5a23cfaf9bdfc"} Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.712030 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2f58z" event={"ID":"10286867-4aba-45e5-a1f3-40494acb8cde","Type":"ContainerStarted","Data":"dfe6f1245a2960be9252771023a2a5122fc9daa13366395fea4235c8d01953cc"} Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.713051 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-zkvs9" event={"ID":"df2319dd-b85c-4542-bf25-8233ecda9d78","Type":"ContainerStarted","Data":"824a1430005289d902e8dabc21e3cbb9db753b523326176005842a73331b0ce5"} Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.714649 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jbqv4" event={"ID":"ac747027-4a87-46fc-87e2-fca7e049f863","Type":"ContainerStarted","Data":"c688f6ee654b050e761f26ba512a4c108242920d0d9019c64cc6a2191698e06c"} Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 
13:20:15.716387 4897 patch_prober.go:28] interesting pod/console-operator-58897d9998-fq58q container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/readyz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.716435 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-fq58q" podUID="34853f18-08de-4bb6-8fc1-9ae1d51b314a" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/readyz\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.726712 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5xn24"] Feb 28 13:20:15 crc kubenswrapper[4897]: E0228 13:20:15.727906 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538080-qcrrw" podUID="a52c7385-4178-4038-93b0-5cd758958e80" Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.731269 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:15 crc kubenswrapper[4897]: E0228 13:20:15.731600 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-28 13:20:16.231586683 +0000 UTC m=+230.473907340 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.750651 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-q9cdm"] Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.780908 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-nsbjk"] Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.832608 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:15 crc kubenswrapper[4897]: E0228 13:20:15.832794 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:16.332767698 +0000 UTC m=+230.575088355 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.832994 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:15 crc kubenswrapper[4897]: E0228 13:20:15.834158 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:16.334150603 +0000 UTC m=+230.576471260 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.883191 4897 patch_prober.go:28] interesting pod/router-default-5444994796-5fwp4 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.883254 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5fwp4" podUID="342631a0-9c4d-4e4f-9743-4d13ea740a55" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 28 13:20:15 crc kubenswrapper[4897]: W0228 13:20:15.914670 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc588d9f_c6ff_49ac_a670_f886dfc561fc.slice/crio-2800382ae453d13cfa53233bb6965e9a9d33172a0b785a3a0e12bfe65503ce06 WatchSource:0}: Error finding container 2800382ae453d13cfa53233bb6965e9a9d33172a0b785a3a0e12bfe65503ce06: Status 404 returned error can't find the container with id 2800382ae453d13cfa53233bb6965e9a9d33172a0b785a3a0e12bfe65503ce06 Feb 28 13:20:15 crc kubenswrapper[4897]: W0228 13:20:15.918357 4897 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67f6ecb8_e29d_4dd2_844a_d5d347453b6e.slice/crio-a4dc926d3e6df3db2dea331d0e172af68b2a1e2857282323a6093b7c041e306b WatchSource:0}: Error finding container a4dc926d3e6df3db2dea331d0e172af68b2a1e2857282323a6093b7c041e306b: Status 404 returned error can't find the container with id a4dc926d3e6df3db2dea331d0e172af68b2a1e2857282323a6093b7c041e306b Feb 28 13:20:15 crc kubenswrapper[4897]: W0228 13:20:15.922878 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ca4767a_66de_4596_a4ec_4929fb1bb3d5.slice/crio-97eada8ca9e3a26cf5d2948958cda69ed3a969c4f8a270046fb1680bd96d7db2 WatchSource:0}: Error finding container 97eada8ca9e3a26cf5d2948958cda69ed3a969c4f8a270046fb1680bd96d7db2: Status 404 returned error can't find the container with id 97eada8ca9e3a26cf5d2948958cda69ed3a969c4f8a270046fb1680bd96d7db2 Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.933744 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:15 crc kubenswrapper[4897]: E0228 13:20:15.934028 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:16.433986094 +0000 UTC m=+230.676306791 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:15 crc kubenswrapper[4897]: I0228 13:20:15.934206 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:15 crc kubenswrapper[4897]: E0228 13:20:15.934760 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:16.434742839 +0000 UTC m=+230.677063536 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:16 crc kubenswrapper[4897]: I0228 13:20:16.036058 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:16 crc kubenswrapper[4897]: E0228 13:20:16.036791 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:16.53675121 +0000 UTC m=+230.779071887 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:16 crc kubenswrapper[4897]: I0228 13:20:16.037293 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:16 crc kubenswrapper[4897]: E0228 13:20:16.037682 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:16.53766562 +0000 UTC m=+230.779986277 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:16 crc kubenswrapper[4897]: I0228 13:20:16.138834 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:16 crc kubenswrapper[4897]: E0228 13:20:16.138978 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:16.638952839 +0000 UTC m=+230.881273496 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:16 crc kubenswrapper[4897]: I0228 13:20:16.139036 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:16 crc kubenswrapper[4897]: E0228 13:20:16.139363 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:16.639354692 +0000 UTC m=+230.881675349 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:16 crc kubenswrapper[4897]: I0228 13:20:16.240530 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:16 crc kubenswrapper[4897]: I0228 13:20:16.240718 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-5tms6" podStartSLOduration=174.240706382 podStartE2EDuration="2m54.240706382s" podCreationTimestamp="2026-02-28 13:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:16.238330734 +0000 UTC m=+230.480651421" watchObservedRunningTime="2026-02-28 13:20:16.240706382 +0000 UTC m=+230.483027039" Feb 28 13:20:16 crc kubenswrapper[4897]: E0228 13:20:16.241469 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:16.741444996 +0000 UTC m=+230.983765673 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:16 crc kubenswrapper[4897]: I0228 13:20:16.322923 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-fq58q" podStartSLOduration=174.322900045 podStartE2EDuration="2m54.322900045s" podCreationTimestamp="2026-02-28 13:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:16.279347078 +0000 UTC m=+230.521667755" watchObservedRunningTime="2026-02-28 13:20:16.322900045 +0000 UTC m=+230.565220722" Feb 28 13:20:16 crc kubenswrapper[4897]: I0228 13:20:16.343445 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:16 crc kubenswrapper[4897]: E0228 13:20:16.344028 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:16.844003156 +0000 UTC m=+231.086323843 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:16 crc kubenswrapper[4897]: I0228 13:20:16.444758 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:16 crc kubenswrapper[4897]: E0228 13:20:16.444952 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:16.944922962 +0000 UTC m=+231.187243629 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:16 crc kubenswrapper[4897]: I0228 13:20:16.445171 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:16 crc kubenswrapper[4897]: E0228 13:20:16.445633 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:16.945616075 +0000 UTC m=+231.187936762 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:16 crc kubenswrapper[4897]: I0228 13:20:16.547767 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:16 crc kubenswrapper[4897]: E0228 13:20:16.547958 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:17.047931747 +0000 UTC m=+231.290252444 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:16 crc kubenswrapper[4897]: I0228 13:20:16.548580 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:16 crc kubenswrapper[4897]: E0228 13:20:16.549009 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:17.048992692 +0000 UTC m=+231.291313389 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:16 crc kubenswrapper[4897]: I0228 13:20:16.650022 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:16 crc kubenswrapper[4897]: E0228 13:20:16.651147 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:17.151122907 +0000 UTC m=+231.393443564 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:16 crc kubenswrapper[4897]: I0228 13:20:16.727151 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-6l67l" event={"ID":"4c979449-b6ad-40e0-b3d6-584861b1d143","Type":"ContainerStarted","Data":"c239805499a3a0504f65516bb423b7529d6bc70243cc731748a6c0d579ce95e5"} Feb 28 13:20:16 crc kubenswrapper[4897]: I0228 13:20:16.728485 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5xn24" event={"ID":"1f5773fa-4b37-4a12-9102-ea3f54a2dd78","Type":"ContainerStarted","Data":"0e5c4cd96cc1b875c4219df42a0e198430f803634beb62327cb054060764edc5"} Feb 28 13:20:16 crc kubenswrapper[4897]: I0228 13:20:16.729398 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-q9cdm" event={"ID":"42271fb7-85dd-4e09-b2b9-f78ab6cfcdcf","Type":"ContainerStarted","Data":"194b846cd17f7436eb2be48e201fd91e4adad165241b98fa3848ebe2101da5a3"} Feb 28 13:20:16 crc kubenswrapper[4897]: I0228 13:20:16.730555 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-z4hrw" event={"ID":"bc588d9f-c6ff-49ac-a670-f886dfc561fc","Type":"ContainerStarted","Data":"2800382ae453d13cfa53233bb6965e9a9d33172a0b785a3a0e12bfe65503ce06"} Feb 28 13:20:16 crc kubenswrapper[4897]: I0228 13:20:16.731608 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-td8r5" 
event={"ID":"67f6ecb8-e29d-4dd2-844a-d5d347453b6e","Type":"ContainerStarted","Data":"a4dc926d3e6df3db2dea331d0e172af68b2a1e2857282323a6093b7c041e306b"} Feb 28 13:20:16 crc kubenswrapper[4897]: I0228 13:20:16.732511 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rr6bc" event={"ID":"9ca4767a-66de-4596-a4ec-4929fb1bb3d5","Type":"ContainerStarted","Data":"97eada8ca9e3a26cf5d2948958cda69ed3a969c4f8a270046fb1680bd96d7db2"} Feb 28 13:20:16 crc kubenswrapper[4897]: I0228 13:20:16.733299 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-nsbjk" event={"ID":"260888c5-6c67-47a8-9903-a25a8a1c6b7d","Type":"ContainerStarted","Data":"1c3ecb958259f917692dcdebf95c140b6e28875fb96a1f38f62b13ec8c2b2d44"} Feb 28 13:20:16 crc kubenswrapper[4897]: I0228 13:20:16.733568 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-fv2rz" Feb 28 13:20:16 crc kubenswrapper[4897]: I0228 13:20:16.735550 4897 patch_prober.go:28] interesting pod/downloads-7954f5f757-fv2rz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 28 13:20:16 crc kubenswrapper[4897]: I0228 13:20:16.735596 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fv2rz" podUID="4f2510cb-e89f-49a0-b5cd-aca1a5c51178" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 28 13:20:16 crc kubenswrapper[4897]: I0228 13:20:16.752644 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:16 crc kubenswrapper[4897]: E0228 13:20:16.753225 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:17.253206782 +0000 UTC m=+231.495527479 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:16 crc kubenswrapper[4897]: I0228 13:20:16.853911 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:16 crc kubenswrapper[4897]: E0228 13:20:16.854717 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:17.354626804 +0000 UTC m=+231.596947501 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:16 crc kubenswrapper[4897]: I0228 13:20:16.880212 4897 patch_prober.go:28] interesting pod/router-default-5444994796-5fwp4 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 28 13:20:16 crc kubenswrapper[4897]: I0228 13:20:16.880299 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5fwp4" podUID="342631a0-9c4d-4e4f-9743-4d13ea740a55" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 28 13:20:16 crc kubenswrapper[4897]: I0228 13:20:16.890059 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-fv2rz" podStartSLOduration=174.890037134 podStartE2EDuration="2m54.890037134s" podCreationTimestamp="2026-02-28 13:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:16.88561575 +0000 UTC m=+231.127936417" watchObservedRunningTime="2026-02-28 13:20:16.890037134 +0000 UTC m=+231.132357801" Feb 28 13:20:16 crc kubenswrapper[4897]: I0228 13:20:16.957006 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:16 crc kubenswrapper[4897]: E0228 13:20:16.957506 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:17.457491124 +0000 UTC m=+231.699811791 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:17 crc kubenswrapper[4897]: I0228 13:20:17.058684 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:17 crc kubenswrapper[4897]: E0228 13:20:17.058927 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:17.558889126 +0000 UTC m=+231.801209803 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:17 crc kubenswrapper[4897]: I0228 13:20:17.059013 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:17 crc kubenswrapper[4897]: E0228 13:20:17.059425 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:17.559411003 +0000 UTC m=+231.801731680 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:17 crc kubenswrapper[4897]: I0228 13:20:17.159636 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:17 crc kubenswrapper[4897]: E0228 13:20:17.159811 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:17.659783081 +0000 UTC m=+231.902103768 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:17 crc kubenswrapper[4897]: I0228 13:20:17.159919 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:17 crc kubenswrapper[4897]: E0228 13:20:17.160258 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:17.660247507 +0000 UTC m=+231.902568174 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:17 crc kubenswrapper[4897]: I0228 13:20:17.260639 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:17 crc kubenswrapper[4897]: E0228 13:20:17.260888 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:17.760859632 +0000 UTC m=+232.003180309 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:17 crc kubenswrapper[4897]: I0228 13:20:17.261128 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:17 crc kubenswrapper[4897]: E0228 13:20:17.261616 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:17.761593216 +0000 UTC m=+232.003913913 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:17 crc kubenswrapper[4897]: I0228 13:20:17.362412 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:17 crc kubenswrapper[4897]: E0228 13:20:17.363294 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:17.863269227 +0000 UTC m=+232.105589904 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:17 crc kubenswrapper[4897]: I0228 13:20:17.463961 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:17 crc kubenswrapper[4897]: E0228 13:20:17.464383 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:17.964368989 +0000 UTC m=+232.206689656 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:17 crc kubenswrapper[4897]: I0228 13:20:17.565038 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:17 crc kubenswrapper[4897]: E0228 13:20:17.565221 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:18.065184861 +0000 UTC m=+232.307505538 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:17 crc kubenswrapper[4897]: I0228 13:20:17.565476 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:17 crc kubenswrapper[4897]: E0228 13:20:17.565957 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:18.065944896 +0000 UTC m=+232.308265563 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:17 crc kubenswrapper[4897]: I0228 13:20:17.666628 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:17 crc kubenswrapper[4897]: E0228 13:20:17.666880 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:18.166848192 +0000 UTC m=+232.409168889 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:17 crc kubenswrapper[4897]: I0228 13:20:17.667242 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:17 crc kubenswrapper[4897]: E0228 13:20:17.667694 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:18.167678059 +0000 UTC m=+232.409998746 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:17 crc kubenswrapper[4897]: I0228 13:20:17.740883 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g6wd9" event={"ID":"c66153dc-f2e3-4798-876c-da6826dea18c","Type":"ContainerStarted","Data":"85cfd65259d6720b55885a784de1aa6e95b929262bae746a8a6cdd27129a891a"} Feb 28 13:20:17 crc kubenswrapper[4897]: I0228 13:20:17.742253 4897 patch_prober.go:28] interesting pod/downloads-7954f5f757-fv2rz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 28 13:20:17 crc kubenswrapper[4897]: I0228 13:20:17.742348 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fv2rz" podUID="4f2510cb-e89f-49a0-b5cd-aca1a5c51178" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 28 13:20:17 crc kubenswrapper[4897]: I0228 13:20:17.766764 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-87qp9" podStartSLOduration=175.766737234 podStartE2EDuration="2m55.766737234s" podCreationTimestamp="2026-02-28 13:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-28 13:20:17.76323793 +0000 UTC m=+232.005558647" watchObservedRunningTime="2026-02-28 13:20:17.766737234 +0000 UTC m=+232.009057901" Feb 28 13:20:17 crc kubenswrapper[4897]: I0228 13:20:17.768882 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:17 crc kubenswrapper[4897]: E0228 13:20:17.769057 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:18.26903815 +0000 UTC m=+232.511358817 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:17 crc kubenswrapper[4897]: I0228 13:20:17.769506 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:17 crc kubenswrapper[4897]: E0228 13:20:17.769885 4897 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:18.269874737 +0000 UTC m=+232.512195404 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:17 crc kubenswrapper[4897]: I0228 13:20:17.870623 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:17 crc kubenswrapper[4897]: E0228 13:20:17.870814 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:18.370769322 +0000 UTC m=+232.613089999 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:17 crc kubenswrapper[4897]: I0228 13:20:17.870999 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:17 crc kubenswrapper[4897]: E0228 13:20:17.871454 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:18.371438354 +0000 UTC m=+232.613759031 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:17 crc kubenswrapper[4897]: I0228 13:20:17.879398 4897 patch_prober.go:28] interesting pod/router-default-5444994796-5fwp4 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 28 13:20:17 crc kubenswrapper[4897]: I0228 13:20:17.879468 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5fwp4" podUID="342631a0-9c4d-4e4f-9743-4d13ea740a55" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 28 13:20:17 crc kubenswrapper[4897]: I0228 13:20:17.972278 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:17 crc kubenswrapper[4897]: E0228 13:20:17.972545 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:18.472530486 +0000 UTC m=+232.714851143 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:18 crc kubenswrapper[4897]: I0228 13:20:18.074065 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:18 crc kubenswrapper[4897]: E0228 13:20:18.074605 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:18.57458804 +0000 UTC m=+232.816908697 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:18 crc kubenswrapper[4897]: I0228 13:20:18.175699 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:18 crc kubenswrapper[4897]: E0228 13:20:18.175838 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:18.675811376 +0000 UTC m=+232.918132033 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:18 crc kubenswrapper[4897]: I0228 13:20:18.175982 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:18 crc kubenswrapper[4897]: E0228 13:20:18.176505 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:18.676492538 +0000 UTC m=+232.918813195 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:18 crc kubenswrapper[4897]: I0228 13:20:18.276869 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:18 crc kubenswrapper[4897]: E0228 13:20:18.277106 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:18.777045712 +0000 UTC m=+233.019366409 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:18 crc kubenswrapper[4897]: I0228 13:20:18.277192 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:18 crc kubenswrapper[4897]: E0228 13:20:18.277497 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:18.777485197 +0000 UTC m=+233.019805854 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:18 crc kubenswrapper[4897]: I0228 13:20:18.377729 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:18 crc kubenswrapper[4897]: E0228 13:20:18.377874 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:18.877852805 +0000 UTC m=+233.120173452 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:18 crc kubenswrapper[4897]: I0228 13:20:18.377921 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:18 crc kubenswrapper[4897]: E0228 13:20:18.378213 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:18.878204656 +0000 UTC m=+233.120525313 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:18 crc kubenswrapper[4897]: I0228 13:20:18.478867 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:18 crc kubenswrapper[4897]: E0228 13:20:18.479120 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:18.9790586 +0000 UTC m=+233.221379287 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:18 crc kubenswrapper[4897]: I0228 13:20:18.479209 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:18 crc kubenswrapper[4897]: E0228 13:20:18.479653 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:18.979632659 +0000 UTC m=+233.221953356 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:18 crc kubenswrapper[4897]: I0228 13:20:18.581086 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:18 crc kubenswrapper[4897]: E0228 13:20:18.581257 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:19.081217697 +0000 UTC m=+233.323538374 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:18 crc kubenswrapper[4897]: I0228 13:20:18.581495 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:18 crc kubenswrapper[4897]: E0228 13:20:18.581923 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:19.081907909 +0000 UTC m=+233.324228576 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:18 crc kubenswrapper[4897]: I0228 13:20:18.682858 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:18 crc kubenswrapper[4897]: E0228 13:20:18.683005 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:19.182983931 +0000 UTC m=+233.425304588 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:18 crc kubenswrapper[4897]: I0228 13:20:18.683168 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:18 crc kubenswrapper[4897]: E0228 13:20:18.683489 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:19.183479017 +0000 UTC m=+233.425799674 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:18 crc kubenswrapper[4897]: I0228 13:20:18.748575 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2f58z" event={"ID":"10286867-4aba-45e5-a1f3-40494acb8cde","Type":"ContainerStarted","Data":"c054f4b01498f20d83c6f8301ce2bfeab8075065a492debb315db53a76cf896d"} Feb 28 13:20:18 crc kubenswrapper[4897]: I0228 13:20:18.750560 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fb8mb" event={"ID":"52ee48cc-65ac-4228-821c-f9c70d249ebf","Type":"ContainerStarted","Data":"db9472b345613c8d75ad6138559557b4bf647953ad3e6b8953c2c461f24dfc77"} Feb 28 13:20:18 crc kubenswrapper[4897]: I0228 13:20:18.752219 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-7wzmt" event={"ID":"c97c55d8-5260-43bc-aaf7-e217a748b83f","Type":"ContainerStarted","Data":"45ba81eae1b1f36bcb4a76591a87ab78adfcb7e980258e68d8557d64b06b1fbf"} Feb 28 13:20:18 crc kubenswrapper[4897]: I0228 13:20:18.753861 4897 generic.go:334] "Generic (PLEG): container finished" podID="26da04fb-0109-4f7f-a283-f489e9b4596f" containerID="71fbcec7cd3beef8cfbfd90083c83dc4cfc68719611faf0379ecaaec1adc3acf" exitCode=0 Feb 28 13:20:18 crc kubenswrapper[4897]: I0228 13:20:18.753910 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2ztc" 
event={"ID":"26da04fb-0109-4f7f-a283-f489e9b4596f","Type":"ContainerDied","Data":"71fbcec7cd3beef8cfbfd90083c83dc4cfc68719611faf0379ecaaec1adc3acf"} Feb 28 13:20:18 crc kubenswrapper[4897]: I0228 13:20:18.758596 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vk9dl" event={"ID":"4ba075a8-61d1-4147-80ea-03906930ff87","Type":"ContainerStarted","Data":"1b8c62cd4d417f2f159fefdd436754c8c4e2a1dc12c960ad05da81250242ed18"} Feb 28 13:20:18 crc kubenswrapper[4897]: I0228 13:20:18.759759 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lb9zj" event={"ID":"24e813e9-4e63-45b2-924b-7fee90b8a3ed","Type":"ContainerStarted","Data":"aeb1b30d340a24c1519b95a76c66c694641b7de3e206ca19a2bffe91aa1f1f6a"} Feb 28 13:20:18 crc kubenswrapper[4897]: I0228 13:20:18.761567 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-blf86" event={"ID":"32420c77-c3bf-489a-b622-a912ea4c983c","Type":"ContainerStarted","Data":"97b838c874bcf0c319e09bb5e91c1a5efcc38a6be2f2b9b8446aef0fdf1d7d03"} Feb 28 13:20:18 crc kubenswrapper[4897]: I0228 13:20:18.762828 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nx8lb" event={"ID":"f6a4bf66-c081-492e-aa28-f9245e7ffe3c","Type":"ContainerStarted","Data":"14be805b1d7642a2ab12331800f021d481009757e4e5417f3a6f620ccbf21be2"} Feb 28 13:20:18 crc kubenswrapper[4897]: I0228 13:20:18.764615 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29538075-ts824" event={"ID":"b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1","Type":"ContainerStarted","Data":"4b6e793d221218556bd5e1f277096807ef26420eb39f14ae322206c1413b84c5"} Feb 28 13:20:18 crc kubenswrapper[4897]: I0228 13:20:18.766442 4897 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6st7l" event={"ID":"3d1d3880-3411-45f5-8835-a4db59c38cfe","Type":"ContainerStarted","Data":"ecf479a110b6e3ae1424c2e0ea10882e227783a20305baa39ce16ebbb882fe63"} Feb 28 13:20:18 crc kubenswrapper[4897]: I0228 13:20:18.768135 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l5dbb" event={"ID":"2255cdfc-6996-4567-ba4d-b1b609f1264c","Type":"ContainerStarted","Data":"7c0807832fee5dbeb1a7bc8a0127ebf779a30ab773b7f9cc70d10e7470cbafa1"} Feb 28 13:20:18 crc kubenswrapper[4897]: I0228 13:20:18.769694 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wgmqx" event={"ID":"ce549223-07dd-40b8-b988-7a49ed1a94e5","Type":"ContainerStarted","Data":"a9ea5a306b2d015d0b080ea6a18ff079167066731ff54900bac6d840d104a20b"} Feb 28 13:20:18 crc kubenswrapper[4897]: I0228 13:20:18.771296 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-gzgt9" event={"ID":"6e1c97b9-64a9-4e15-947f-16a7d1dd4271","Type":"ContainerStarted","Data":"49221ede4f00cbb72c45bdb4a7780438822942f09e8b0ecbd607d6b4303cc07e"} Feb 28 13:20:18 crc kubenswrapper[4897]: I0228 13:20:18.771578 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-95h9j" Feb 28 13:20:18 crc kubenswrapper[4897]: I0228 13:20:18.773914 4897 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-95h9j container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 28 13:20:18 crc kubenswrapper[4897]: I0228 13:20:18.774021 4897 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-controller-manager/controller-manager-879f6c89f-95h9j" podUID="61f10600-21dd-4043-af69-aa0fdfd246f7" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 28 13:20:18 crc kubenswrapper[4897]: I0228 13:20:18.784189 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:18 crc kubenswrapper[4897]: E0228 13:20:18.784403 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:19.284363912 +0000 UTC m=+233.526684569 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:18 crc kubenswrapper[4897]: I0228 13:20:18.784472 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:18 crc kubenswrapper[4897]: E0228 13:20:18.784832 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:19.284824837 +0000 UTC m=+233.527145494 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:18 crc kubenswrapper[4897]: I0228 13:20:18.792506 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-95h9j" podStartSLOduration=176.792488628 podStartE2EDuration="2m56.792488628s" podCreationTimestamp="2026-02-28 13:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:18.791160525 +0000 UTC m=+233.033481192" watchObservedRunningTime="2026-02-28 13:20:18.792488628 +0000 UTC m=+233.034809285" Feb 28 13:20:18 crc kubenswrapper[4897]: I0228 13:20:18.814429 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7pzqp" podStartSLOduration=176.814399336 podStartE2EDuration="2m56.814399336s" podCreationTimestamp="2026-02-28 13:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:18.812102811 +0000 UTC m=+233.054423468" watchObservedRunningTime="2026-02-28 13:20:18.814399336 +0000 UTC m=+233.056719993" Feb 28 13:20:18 crc kubenswrapper[4897]: I0228 13:20:18.885695 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:18 crc kubenswrapper[4897]: E0228 13:20:18.885887 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:19.385860397 +0000 UTC m=+233.628181054 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:18 crc kubenswrapper[4897]: I0228 13:20:18.885962 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:18 crc kubenswrapper[4897]: E0228 13:20:18.886368 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:19.386349433 +0000 UTC m=+233.628670090 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:18 crc kubenswrapper[4897]: I0228 13:20:18.988255 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:18 crc kubenswrapper[4897]: E0228 13:20:18.988461 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:19.488419347 +0000 UTC m=+233.730740014 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:18 crc kubenswrapper[4897]: I0228 13:20:18.988523 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:18 crc kubenswrapper[4897]: E0228 13:20:18.988802 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:19.488790629 +0000 UTC m=+233.731111286 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.018961 4897 patch_prober.go:28] interesting pod/router-default-5444994796-5fwp4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 13:20:19 crc kubenswrapper[4897]: [-]has-synced failed: reason withheld Feb 28 13:20:19 crc kubenswrapper[4897]: [+]process-running ok Feb 28 13:20:19 crc kubenswrapper[4897]: healthz check failed Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.019015 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5fwp4" podUID="342631a0-9c4d-4e4f-9743-4d13ea740a55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.089183 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:19 crc kubenswrapper[4897]: E0228 13:20:19.089428 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-28 13:20:19.589396645 +0000 UTC m=+233.831717302 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.089857 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:19 crc kubenswrapper[4897]: E0228 13:20:19.090162 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:19.59014799 +0000 UTC m=+233.832468667 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.193730 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:19 crc kubenswrapper[4897]: E0228 13:20:19.193889 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:19.693862817 +0000 UTC m=+233.936183464 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.194059 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:19 crc kubenswrapper[4897]: E0228 13:20:19.194424 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:19.694409375 +0000 UTC m=+233.936730042 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.294992 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:19 crc kubenswrapper[4897]: E0228 13:20:19.295166 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:19.795128785 +0000 UTC m=+234.037449452 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.295227 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:19 crc kubenswrapper[4897]: E0228 13:20:19.295537 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:19.795526198 +0000 UTC m=+234.037846855 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.397489 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:19 crc kubenswrapper[4897]: E0228 13:20:19.397759 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:19.897711946 +0000 UTC m=+234.140032613 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.397836 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:19 crc kubenswrapper[4897]: E0228 13:20:19.398548 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:19.898531292 +0000 UTC m=+234.140851969 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.499534 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:19 crc kubenswrapper[4897]: E0228 13:20:19.500153 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:20.000129931 +0000 UTC m=+234.242450588 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.616147 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:19 crc kubenswrapper[4897]: E0228 13:20:19.616782 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:20.116754911 +0000 UTC m=+234.359075568 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.718222 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:19 crc kubenswrapper[4897]: E0228 13:20:19.718466 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:20.218428132 +0000 UTC m=+234.460748789 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.718748 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:19 crc kubenswrapper[4897]: E0228 13:20:19.719266 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:20.219256659 +0000 UTC m=+234.461577316 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.777225 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-l7m8v" event={"ID":"53e254f6-444a-4fd6-8bda-5af18b9d347c","Type":"ContainerStarted","Data":"6ac1f66fd5757dd43cee9118f91a051f5e21d550eacaf70a93ae6067aaab7569"} Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.779019 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-zkvs9" event={"ID":"df2319dd-b85c-4542-bf25-8233ecda9d78","Type":"ContainerStarted","Data":"fdf06f55d130bfc9c4d03e3915e743f2826dcc6be97cef2e43cd2c31c70d6030"} Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.780332 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5xn24" event={"ID":"1f5773fa-4b37-4a12-9102-ea3f54a2dd78","Type":"ContainerStarted","Data":"6adcc03a4bfa4f5829e4b30948a58806cdeafb770f9d172b0243cf62c0d63811"} Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.781518 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-25vlq" event={"ID":"9a9c0df9-c002-43ec-bc67-dee3c0862056","Type":"ContainerStarted","Data":"fa638a8c4821a84325ed783349d221323748425c9919b6049a3dfb80716b6d34"} Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.782573 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-z4hrw" 
event={"ID":"bc588d9f-c6ff-49ac-a670-f886dfc561fc","Type":"ContainerStarted","Data":"0d4d89300ba1cfe9a1c0f9aba8610f1896fcc3146789634f3609396f4ffe42e1"} Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.783564 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-td8r5" event={"ID":"67f6ecb8-e29d-4dd2-844a-d5d347453b6e","Type":"ContainerStarted","Data":"e3c5274a69e5d5b95376b415d45fec9a7733843570abc26fa6b31c012e72320f"} Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.785065 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-nsbjk" event={"ID":"260888c5-6c67-47a8-9903-a25a8a1c6b7d","Type":"ContainerStarted","Data":"d3cc39832acfd06ffd29f393bbc486e6e6b7e9a4a23b1f1b3dc3732c16209e5a"} Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.786426 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xfrkv" event={"ID":"5a185ef3-d57d-4925-b2d6-6de53cf0d0f2","Type":"ContainerStarted","Data":"49dac150b5bd0ec76f93646c141ddedc765575b0693affe3b7daec98be462bd6"} Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.787673 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-glzrp" event={"ID":"49308413-0bd0-4aef-8d1b-451b077e6996","Type":"ContainerStarted","Data":"3c0330aaf2e4359c9f4bc78a9aa4cebe6971e8fba2a0fa3b131a4815bce8aad7"} Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.789007 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bxsf4" event={"ID":"13ffe813-5e11-41e0-9426-8771f8b2ce0b","Type":"ContainerStarted","Data":"e22b1ad47300f7e4af7efba7f283fd2aac4bad2fc8cffbaf9583fa730cbd848a"} Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.790278 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jbqv4" event={"ID":"ac747027-4a87-46fc-87e2-fca7e049f863","Type":"ContainerStarted","Data":"f0f12e44715f136d66a937ace3bc324a35d5861d4de0a4b8365c77b5dec0b280"} Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.791395 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rr6bc" event={"ID":"9ca4767a-66de-4596-a4ec-4929fb1bb3d5","Type":"ContainerStarted","Data":"1d36a07b24782a694107eaa61d33714d0e9ad16016918418e51865848df4b701"} Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.792831 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kkdfw" event={"ID":"62082696-f7f4-4928-8dd0-07604c22e511","Type":"ContainerStarted","Data":"6fa4ec5687981c710989d42fedcd517fceb3d71f90092da0cb0c9b45cc86dab7"} Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.794595 4897 generic.go:334] "Generic (PLEG): container finished" podID="9a147d2f-de25-4ba1-8858-392c56b60a20" containerID="a7fa21b7d9339a976586f7aedb29489b53b3f9f122633688882d8e4cecd897a5" exitCode=0 Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.794649 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-t87x8" event={"ID":"9a147d2f-de25-4ba1-8858-392c56b60a20","Type":"ContainerDied","Data":"a7fa21b7d9339a976586f7aedb29489b53b3f9f122633688882d8e4cecd897a5"} Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.798544 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mfx26" event={"ID":"4ab51603-514f-4ade-8bf2-6281d27a579f","Type":"ContainerStarted","Data":"342fdccb687d885f89a85e0b0c8dba03a57359d3f1892505179c90ed3a3f8092"} Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.800011 4897 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-95h9j container/controller-manager 
namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.800062 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-95h9j" podUID="61f10600-21dd-4043-af69-aa0fdfd246f7" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.819927 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:19 crc kubenswrapper[4897]: E0228 13:20:19.820068 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:20.320051041 +0000 UTC m=+234.562371688 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.820234 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:19 crc kubenswrapper[4897]: E0228 13:20:19.820632 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:20.32062028 +0000 UTC m=+234.562940947 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.841906 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g6wd9" podStartSLOduration=177.841875546 podStartE2EDuration="2m57.841875546s" podCreationTimestamp="2026-02-28 13:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:19.841015928 +0000 UTC m=+234.083336595" watchObservedRunningTime="2026-02-28 13:20:19.841875546 +0000 UTC m=+234.084196203" Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.859487 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wgmqx" podStartSLOduration=177.859471423 podStartE2EDuration="2m57.859471423s" podCreationTimestamp="2026-02-28 13:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:19.85754571 +0000 UTC m=+234.099866367" watchObservedRunningTime="2026-02-28 13:20:19.859471423 +0000 UTC m=+234.101792080" Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.885753 4897 patch_prober.go:28] interesting pod/router-default-5444994796-5fwp4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Feb 28 13:20:19 crc kubenswrapper[4897]: [-]has-synced failed: reason withheld Feb 28 13:20:19 crc kubenswrapper[4897]: [+]process-running ok Feb 28 13:20:19 crc kubenswrapper[4897]: healthz check failed Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.885796 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5fwp4" podUID="342631a0-9c4d-4e4f-9743-4d13ea740a55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.911384 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-blf86" podStartSLOduration=176.911367973 podStartE2EDuration="2m56.911367973s" podCreationTimestamp="2026-02-28 13:17:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:19.881910688 +0000 UTC m=+234.124231335" watchObservedRunningTime="2026-02-28 13:20:19.911367973 +0000 UTC m=+234.153688630" Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.912603 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-gzgt9" podStartSLOduration=177.912597993 podStartE2EDuration="2m57.912597993s" podCreationTimestamp="2026-02-28 13:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:19.910496064 +0000 UTC m=+234.152816711" watchObservedRunningTime="2026-02-28 13:20:19.912597993 +0000 UTC m=+234.154918650" Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.921135 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:19 crc kubenswrapper[4897]: E0228 13:20:19.921428 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:20.421385541 +0000 UTC m=+234.663706198 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.921537 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:19 crc kubenswrapper[4897]: E0228 13:20:19.923210 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:20.423202271 +0000 UTC m=+234.665522928 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:19 crc kubenswrapper[4897]: I0228 13:20:19.929372 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fb8mb" podStartSLOduration=177.92928978 podStartE2EDuration="2m57.92928978s" podCreationTimestamp="2026-02-28 13:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:19.929115804 +0000 UTC m=+234.171436461" watchObservedRunningTime="2026-02-28 13:20:19.92928978 +0000 UTC m=+234.171610437" Feb 28 13:20:20 crc kubenswrapper[4897]: I0228 13:20:20.023301 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:20 crc kubenswrapper[4897]: E0228 13:20:20.023447 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:20.523428104 +0000 UTC m=+234.765748761 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:20 crc kubenswrapper[4897]: I0228 13:20:20.024604 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:20 crc kubenswrapper[4897]: E0228 13:20:20.024918 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:20.524899662 +0000 UTC m=+234.767220319 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:20 crc kubenswrapper[4897]: I0228 13:20:20.126324 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:20 crc kubenswrapper[4897]: E0228 13:20:20.126536 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:20.626508391 +0000 UTC m=+234.868829048 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:20 crc kubenswrapper[4897]: I0228 13:20:20.126691 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:20 crc kubenswrapper[4897]: E0228 13:20:20.127016 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:20.627002877 +0000 UTC m=+234.869323534 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:20 crc kubenswrapper[4897]: I0228 13:20:20.227519 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:20 crc kubenswrapper[4897]: E0228 13:20:20.227935 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:20.727910233 +0000 UTC m=+234.970230900 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:20 crc kubenswrapper[4897]: I0228 13:20:20.329758 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:20 crc kubenswrapper[4897]: E0228 13:20:20.330186 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:20.830167453 +0000 UTC m=+235.072488120 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:20 crc kubenswrapper[4897]: I0228 13:20:20.430385 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:20 crc kubenswrapper[4897]: E0228 13:20:20.430639 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:20.930609924 +0000 UTC m=+235.172930601 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:20 crc kubenswrapper[4897]: I0228 13:20:20.430831 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:20 crc kubenswrapper[4897]: E0228 13:20:20.431131 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:20.93111836 +0000 UTC m=+235.173439007 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:20 crc kubenswrapper[4897]: I0228 13:20:20.532390 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:20 crc kubenswrapper[4897]: E0228 13:20:20.532589 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:21.032550543 +0000 UTC m=+235.274871240 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:20 crc kubenswrapper[4897]: I0228 13:20:20.532741 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:20 crc kubenswrapper[4897]: E0228 13:20:20.533305 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:21.033288747 +0000 UTC m=+235.275609444 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:20 crc kubenswrapper[4897]: I0228 13:20:20.634954 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:20 crc kubenswrapper[4897]: E0228 13:20:20.635271 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:21.135227367 +0000 UTC m=+235.377548064 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:20 crc kubenswrapper[4897]: I0228 13:20:20.737649 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:20 crc kubenswrapper[4897]: E0228 13:20:20.738149 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:21.238128818 +0000 UTC m=+235.480449475 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:20 crc kubenswrapper[4897]: I0228 13:20:20.803824 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jwnlh" event={"ID":"ae9a4771-065d-4f75-8d15-0ea8525cbaf4","Type":"ContainerStarted","Data":"0832ab0238e1850b169083be778186663f3906313ee5b91d98e4d96f75c4b08c"} Feb 28 13:20:20 crc kubenswrapper[4897]: I0228 13:20:20.805292 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-6l67l" event={"ID":"4c979449-b6ad-40e0-b3d6-584861b1d143","Type":"ContainerStarted","Data":"cd67a0f6de2ce0ab83d00f1b12ca77fe37778ba5238a2101a7a32e97ed1f2f2b"} Feb 28 13:20:20 crc kubenswrapper[4897]: I0228 13:20:20.806864 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ccsdd" event={"ID":"6a6fd805-dce5-4ee6-82e0-9fce53deed7f","Type":"ContainerStarted","Data":"b83c93f1a6f8c692b793ad254ae1a1fd05d0b6483add5e0973c797a38b54699b"} Feb 28 13:20:20 crc kubenswrapper[4897]: I0228 13:20:20.808383 4897 generic.go:334] "Generic (PLEG): container finished" podID="4ab51603-514f-4ade-8bf2-6281d27a579f" containerID="342fdccb687d885f89a85e0b0c8dba03a57359d3f1892505179c90ed3a3f8092" exitCode=0 Feb 28 13:20:20 crc kubenswrapper[4897]: I0228 13:20:20.808443 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mfx26" 
event={"ID":"4ab51603-514f-4ade-8bf2-6281d27a579f","Type":"ContainerDied","Data":"342fdccb687d885f89a85e0b0c8dba03a57359d3f1892505179c90ed3a3f8092"} Feb 28 13:20:20 crc kubenswrapper[4897]: I0228 13:20:20.812200 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" event={"ID":"49d0a669-bb05-4da5-9e58-789b58c0797b","Type":"ContainerStarted","Data":"0a78af3864d8877d6e4e7e0a358121e65af3a2fc85213a60474b8d6793b5ce2d"} Feb 28 13:20:20 crc kubenswrapper[4897]: I0228 13:20:20.815221 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lb9zj" event={"ID":"24e813e9-4e63-45b2-924b-7fee90b8a3ed","Type":"ContainerStarted","Data":"0650312c0b2ff67a0d35868f31e8815695aab587db4e0a7cf21ec8c12495e795"} Feb 28 13:20:20 crc kubenswrapper[4897]: I0228 13:20:20.816902 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6st7l" Feb 28 13:20:20 crc kubenswrapper[4897]: I0228 13:20:20.817340 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-l7m8v" Feb 28 13:20:20 crc kubenswrapper[4897]: I0228 13:20:20.824038 4897 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-l7m8v container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 28 13:20:20 crc kubenswrapper[4897]: I0228 13:20:20.824093 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-l7m8v" podUID="53e254f6-444a-4fd6-8bda-5af18b9d347c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 28 
13:20:20 crc kubenswrapper[4897]: I0228 13:20:20.824502 4897 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-6st7l container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:5443/healthz\": dial tcp 10.217.0.17:5443: connect: connection refused" start-of-body= Feb 28 13:20:20 crc kubenswrapper[4897]: I0228 13:20:20.824590 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6st7l" podUID="3d1d3880-3411-45f5-8835-a4db59c38cfe" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.17:5443/healthz\": dial tcp 10.217.0.17:5443: connect: connection refused" Feb 28 13:20:20 crc kubenswrapper[4897]: I0228 13:20:20.839415 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:20 crc kubenswrapper[4897]: E0228 13:20:20.839794 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:21.339774397 +0000 UTC m=+235.582095074 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:20 crc kubenswrapper[4897]: I0228 13:20:20.846511 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6st7l" podStartSLOduration=177.846480987 podStartE2EDuration="2m57.846480987s" podCreationTimestamp="2026-02-28 13:17:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:20.843877541 +0000 UTC m=+235.086198228" watchObservedRunningTime="2026-02-28 13:20:20.846480987 +0000 UTC m=+235.088801664" Feb 28 13:20:20 crc kubenswrapper[4897]: I0228 13:20:20.884011 4897 patch_prober.go:28] interesting pod/router-default-5444994796-5fwp4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 13:20:20 crc kubenswrapper[4897]: [-]has-synced failed: reason withheld Feb 28 13:20:20 crc kubenswrapper[4897]: [+]process-running ok Feb 28 13:20:20 crc kubenswrapper[4897]: healthz check failed Feb 28 13:20:20 crc kubenswrapper[4897]: I0228 13:20:20.884133 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5fwp4" podUID="342631a0-9c4d-4e4f-9743-4d13ea740a55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 13:20:20 crc kubenswrapper[4897]: I0228 13:20:20.890036 4897 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-7wzmt" podStartSLOduration=177.890009553 podStartE2EDuration="2m57.890009553s" podCreationTimestamp="2026-02-28 13:17:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:20.865247701 +0000 UTC m=+235.107568378" watchObservedRunningTime="2026-02-28 13:20:20.890009553 +0000 UTC m=+235.132330210" Feb 28 13:20:20 crc kubenswrapper[4897]: I0228 13:20:20.933628 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-glzrp" podStartSLOduration=177.933608771 podStartE2EDuration="2m57.933608771s" podCreationTimestamp="2026-02-28 13:17:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:20.88872334 +0000 UTC m=+235.131044007" watchObservedRunningTime="2026-02-28 13:20:20.933608771 +0000 UTC m=+235.175929428" Feb 28 13:20:20 crc kubenswrapper[4897]: I0228 13:20:20.941281 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:20 crc kubenswrapper[4897]: E0228 13:20:20.942523 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:21.442511253 +0000 UTC m=+235.684831910 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:20 crc kubenswrapper[4897]: I0228 13:20:20.962680 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29538075-ts824" podStartSLOduration=178.962651122 podStartE2EDuration="2m58.962651122s" podCreationTimestamp="2026-02-28 13:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:20.956890874 +0000 UTC m=+235.199211531" watchObservedRunningTime="2026-02-28 13:20:20.962651122 +0000 UTC m=+235.204971779" Feb 28 13:20:20 crc kubenswrapper[4897]: I0228 13:20:20.978053 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2f58z" podStartSLOduration=178.978036186 podStartE2EDuration="2m58.978036186s" podCreationTimestamp="2026-02-28 13:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:20.977291622 +0000 UTC m=+235.219612269" watchObservedRunningTime="2026-02-28 13:20:20.978036186 +0000 UTC m=+235.220356843" Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.027194 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l5dbb" podStartSLOduration=179.027157716 podStartE2EDuration="2m59.027157716s" podCreationTimestamp="2026-02-28 
13:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:21.017700506 +0000 UTC m=+235.260021163" watchObservedRunningTime="2026-02-28 13:20:21.027157716 +0000 UTC m=+235.269478373" Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.042178 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:21 crc kubenswrapper[4897]: E0228 13:20:21.042790 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:21.542771437 +0000 UTC m=+235.785092094 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.071787 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-l7m8v" podStartSLOduration=178.071765687 podStartE2EDuration="2m58.071765687s" podCreationTimestamp="2026-02-28 13:17:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:21.070076032 +0000 UTC m=+235.312396689" watchObservedRunningTime="2026-02-28 13:20:21.071765687 +0000 UTC m=+235.314086344" Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.076837 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-kkdfw" podStartSLOduration=10.076817953 podStartE2EDuration="10.076817953s" podCreationTimestamp="2026-02-28 13:20:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:21.045791216 +0000 UTC m=+235.288111873" watchObservedRunningTime="2026-02-28 13:20:21.076817953 +0000 UTC m=+235.319138610" Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.144165 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: 
\"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:21 crc kubenswrapper[4897]: E0228 13:20:21.144509 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:21.64449816 +0000 UTC m=+235.886818817 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.209194 4897 ???:1] "http: TLS handshake error from 192.168.126.11:60360: no serving certificate available for the kubelet" Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.245554 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:21 crc kubenswrapper[4897]: E0228 13:20:21.245974 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:21.745937003 +0000 UTC m=+235.988257690 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.246651 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:21 crc kubenswrapper[4897]: E0228 13:20:21.247191 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:21.747175703 +0000 UTC m=+235.989496360 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.303807 4897 ???:1] "http: TLS handshake error from 192.168.126.11:60376: no serving certificate available for the kubelet" Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.347695 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:21 crc kubenswrapper[4897]: E0228 13:20:21.347913 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:21.847874472 +0000 UTC m=+236.090195129 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.347964 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:21 crc kubenswrapper[4897]: E0228 13:20:21.348370 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:21.848353398 +0000 UTC m=+236.090674055 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.404168 4897 ???:1] "http: TLS handshake error from 192.168.126.11:60378: no serving certificate available for the kubelet" Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.449567 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:21 crc kubenswrapper[4897]: E0228 13:20:21.449765 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:21.949735039 +0000 UTC m=+236.192055696 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.451673 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:21 crc kubenswrapper[4897]: E0228 13:20:21.452070 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:21.952060486 +0000 UTC m=+236.194381143 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.469855 4897 ???:1] "http: TLS handshake error from 192.168.126.11:60384: no serving certificate available for the kubelet" Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.552490 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:21 crc kubenswrapper[4897]: E0228 13:20:21.552761 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:22.052733354 +0000 UTC m=+236.295054011 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.609865 4897 ???:1] "http: TLS handshake error from 192.168.126.11:41134: no serving certificate available for the kubelet" Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.654937 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:21 crc kubenswrapper[4897]: E0228 13:20:21.655171 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:22.155158839 +0000 UTC m=+236.397479496 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.691769 4897 ???:1] "http: TLS handshake error from 192.168.126.11:41136: no serving certificate available for the kubelet" Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.755688 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:21 crc kubenswrapper[4897]: E0228 13:20:21.755875 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:22.255849388 +0000 UTC m=+236.498170045 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.755924 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:21 crc kubenswrapper[4897]: E0228 13:20:21.756217 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:22.256206099 +0000 UTC m=+236.498526756 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.766491 4897 ???:1] "http: TLS handshake error from 192.168.126.11:41150: no serving certificate available for the kubelet" Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.824387 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bxsf4" event={"ID":"13ffe813-5e11-41e0-9426-8771f8b2ce0b","Type":"ContainerStarted","Data":"c924164b20dc678c1318a6c60b1cc794a484bb4ffb60d57e615ab6a19d120cd2"} Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.843723 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bxsf4" podStartSLOduration=179.843704656 podStartE2EDuration="2m59.843704656s" podCreationTimestamp="2026-02-28 13:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:21.840757339 +0000 UTC m=+236.083078006" watchObservedRunningTime="2026-02-28 13:20:21.843704656 +0000 UTC m=+236.086025313" Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.844015 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-zkvs9" event={"ID":"df2319dd-b85c-4542-bf25-8233ecda9d78","Type":"ContainerStarted","Data":"ee6dd8e00bbdfa3eee719aa2f060ae799ba2ea41144c4227e2f68c3ce817cae8"} Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.853301 4897 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mfx26" event={"ID":"4ab51603-514f-4ade-8bf2-6281d27a579f","Type":"ContainerStarted","Data":"815a75c5a8b495fab6638ea0648d4b9068d07a77d9b324ae26242cfe6d40a39e"} Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.853371 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mfx26" Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.858435 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:21 crc kubenswrapper[4897]: E0228 13:20:21.858626 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:22.358599244 +0000 UTC m=+236.600919901 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.865262 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xfrkv" event={"ID":"5a185ef3-d57d-4925-b2d6-6de53cf0d0f2","Type":"ContainerStarted","Data":"e60a6c895d3131e36739462c3166817c3a2f998260ed233ac143346f9e037adb"} Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.866448 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-zkvs9" podStartSLOduration=178.866438421 podStartE2EDuration="2m58.866438421s" podCreationTimestamp="2026-02-28 13:17:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:21.866254045 +0000 UTC m=+236.108574712" watchObservedRunningTime="2026-02-28 13:20:21.866438421 +0000 UTC m=+236.108759078" Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.891208 4897 patch_prober.go:28] interesting pod/router-default-5444994796-5fwp4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 13:20:21 crc kubenswrapper[4897]: [-]has-synced failed: reason withheld Feb 28 13:20:21 crc kubenswrapper[4897]: [+]process-running ok Feb 28 13:20:21 crc kubenswrapper[4897]: healthz check failed Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.891271 4897 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5fwp4" podUID="342631a0-9c4d-4e4f-9743-4d13ea740a55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.893816 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mfx26" podStartSLOduration=179.893797757 podStartE2EDuration="2m59.893797757s" podCreationTimestamp="2026-02-28 13:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:21.892400481 +0000 UTC m=+236.134721138" watchObservedRunningTime="2026-02-28 13:20:21.893797757 +0000 UTC m=+236.136118404" Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.899713 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2ztc" event={"ID":"26da04fb-0109-4f7f-a283-f489e9b4596f","Type":"ContainerStarted","Data":"5a7dd6f63be3c05712c4e7384db6c96941cfb1816a7f6401673ea28529f63b7b"} Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.911998 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lb9zj" event={"ID":"24e813e9-4e63-45b2-924b-7fee90b8a3ed","Type":"ContainerStarted","Data":"4503c2540ffa693de0343f80a05a2e476f503d43bdb54138010ef9e99284eb71"} Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.916416 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-t87x8" event={"ID":"9a147d2f-de25-4ba1-8858-392c56b60a20","Type":"ContainerStarted","Data":"0c1e4f796a91c4275f8145bd4f66d857d369e37ca9a5885ecfa072bf2c2614d3"} Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.921031 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xfrkv" podStartSLOduration=179.921017639 podStartE2EDuration="2m59.921017639s" podCreationTimestamp="2026-02-28 13:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:21.920584474 +0000 UTC m=+236.162905141" watchObservedRunningTime="2026-02-28 13:20:21.921017639 +0000 UTC m=+236.163338296" Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.926461 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nx8lb" event={"ID":"f6a4bf66-c081-492e-aa28-f9245e7ffe3c","Type":"ContainerStarted","Data":"21b8ba0931c0b1a5d337fcc3f613fda985140a1792a92cf652ae096504f24b83"} Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.933723 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-6l67l" event={"ID":"4c979449-b6ad-40e0-b3d6-584861b1d143","Type":"ContainerStarted","Data":"239d16dd34adf6417b0816079209462aca5c9f477f8fccbe3c0c72ddda3a37b2"} Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.934730 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-6l67l" Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.955259 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rr6bc" event={"ID":"9ca4767a-66de-4596-a4ec-4929fb1bb3d5","Type":"ContainerStarted","Data":"d1594f9657dad50f527e53108415b61d52affa4c0bbe9a2e079325a889b798bb"} Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.959977 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: 
\"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:21 crc kubenswrapper[4897]: E0228 13:20:21.960640 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:22.460623066 +0000 UTC m=+236.702943723 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.964541 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-nsbjk" event={"ID":"260888c5-6c67-47a8-9903-a25a8a1c6b7d","Type":"ContainerStarted","Data":"e89d0093d98bb5e8975a21e17bd5eded30c94ffc8162c19b08b341a5ef3d51a2"} Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.983026 4897 ???:1] "http: TLS handshake error from 192.168.126.11:41154: no serving certificate available for the kubelet" Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.996479 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vk9dl" event={"ID":"4ba075a8-61d1-4147-80ea-03906930ff87","Type":"ContainerStarted","Data":"dacc0c8c2f254d267c2dbc9256e6c5d69ff3603414cc311866f5c11095b94339"} Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.997767 4897 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-l7m8v container/marketplace-operator namespace/openshift-marketplace: Readiness 
probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.997848 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-l7m8v" podUID="53e254f6-444a-4fd6-8bda-5af18b9d347c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 28 13:20:21 crc kubenswrapper[4897]: I0228 13:20:21.998918 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" Feb 28 13:20:22 crc kubenswrapper[4897]: I0228 13:20:22.002893 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2ztc" podStartSLOduration=179.00286795 podStartE2EDuration="2m59.00286795s" podCreationTimestamp="2026-02-28 13:17:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:21.980637122 +0000 UTC m=+236.222957789" watchObservedRunningTime="2026-02-28 13:20:22.00286795 +0000 UTC m=+236.245188607" Feb 28 13:20:22 crc kubenswrapper[4897]: I0228 13:20:22.003509 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lb9zj" podStartSLOduration=180.003501541 podStartE2EDuration="3m0.003501541s" podCreationTimestamp="2026-02-28 13:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:21.999737908 +0000 UTC m=+236.242058565" watchObservedRunningTime="2026-02-28 13:20:22.003501541 +0000 UTC m=+236.245822198" Feb 28 13:20:22 crc kubenswrapper[4897]: I0228 13:20:22.017786 4897 
patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-84hkx container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.18:6443/healthz\": dial tcp 10.217.0.18:6443: connect: connection refused" start-of-body= Feb 28 13:20:22 crc kubenswrapper[4897]: I0228 13:20:22.017859 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" podUID="49d0a669-bb05-4da5-9e58-789b58c0797b" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.18:6443/healthz\": dial tcp 10.217.0.18:6443: connect: connection refused" Feb 28 13:20:22 crc kubenswrapper[4897]: I0228 13:20:22.029638 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nx8lb" podStartSLOduration=180.029619637 podStartE2EDuration="3m0.029619637s" podCreationTimestamp="2026-02-28 13:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:22.028155519 +0000 UTC m=+236.270476186" watchObservedRunningTime="2026-02-28 13:20:22.029619637 +0000 UTC m=+236.271940294" Feb 28 13:20:22 crc kubenswrapper[4897]: I0228 13:20:22.053865 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vk9dl" podStartSLOduration=179.05384415 podStartE2EDuration="2m59.05384415s" podCreationTimestamp="2026-02-28 13:17:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:22.053424596 +0000 UTC m=+236.295745273" watchObservedRunningTime="2026-02-28 13:20:22.05384415 +0000 UTC m=+236.296164807" Feb 28 13:20:22 crc kubenswrapper[4897]: I0228 13:20:22.063141 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:22 crc kubenswrapper[4897]: E0228 13:20:22.064917 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:22.564891212 +0000 UTC m=+236.807211869 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:22 crc kubenswrapper[4897]: I0228 13:20:22.079359 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-nsbjk" podStartSLOduration=180.079343846 podStartE2EDuration="3m0.079343846s" podCreationTimestamp="2026-02-28 13:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:22.077440393 +0000 UTC m=+236.319761060" watchObservedRunningTime="2026-02-28 13:20:22.079343846 +0000 UTC m=+236.321664503" Feb 28 13:20:22 crc kubenswrapper[4897]: I0228 13:20:22.145327 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-rr6bc" podStartSLOduration=179.145296556 podStartE2EDuration="2m59.145296556s" 
podCreationTimestamp="2026-02-28 13:17:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:22.105505643 +0000 UTC m=+236.347826320" watchObservedRunningTime="2026-02-28 13:20:22.145296556 +0000 UTC m=+236.387617213" Feb 28 13:20:22 crc kubenswrapper[4897]: I0228 13:20:22.169886 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:22 crc kubenswrapper[4897]: E0228 13:20:22.180938 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:22.680916713 +0000 UTC m=+236.923237370 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:22 crc kubenswrapper[4897]: I0228 13:20:22.206165 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-td8r5" podStartSLOduration=179.20614605 podStartE2EDuration="2m59.20614605s" podCreationTimestamp="2026-02-28 13:17:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:22.195235202 +0000 UTC m=+236.437555869" watchObservedRunningTime="2026-02-28 13:20:22.20614605 +0000 UTC m=+236.448466707" Feb 28 13:20:22 crc kubenswrapper[4897]: I0228 13:20:22.207379 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-25vlq" podStartSLOduration=180.20737107 podStartE2EDuration="3m0.20737107s" podCreationTimestamp="2026-02-28 13:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:22.160941409 +0000 UTC m=+236.403262076" watchObservedRunningTime="2026-02-28 13:20:22.20737107 +0000 UTC m=+236.449691727" Feb 28 13:20:22 crc kubenswrapper[4897]: I0228 13:20:22.227566 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-6l67l" podStartSLOduration=11.227539801 podStartE2EDuration="11.227539801s" podCreationTimestamp="2026-02-28 13:20:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:22.22202634 +0000 UTC m=+236.464346997" watchObservedRunningTime="2026-02-28 13:20:22.227539801 +0000 UTC m=+236.469860458" Feb 28 13:20:22 crc kubenswrapper[4897]: I0228 13:20:22.273722 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:22 crc kubenswrapper[4897]: E0228 13:20:22.274087 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:22.774071845 +0000 UTC m=+237.016392502 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:22 crc kubenswrapper[4897]: I0228 13:20:22.292001 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" podStartSLOduration=180.291984212 podStartE2EDuration="3m0.291984212s" podCreationTimestamp="2026-02-28 13:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:22.291410893 +0000 UTC m=+236.533731560" watchObservedRunningTime="2026-02-28 13:20:22.291984212 +0000 UTC m=+236.534304869" Feb 28 13:20:22 crc kubenswrapper[4897]: I0228 13:20:22.313768 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-z4hrw" podStartSLOduration=180.313753525 podStartE2EDuration="3m0.313753525s" podCreationTimestamp="2026-02-28 13:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:22.312296467 +0000 UTC m=+236.554617134" watchObservedRunningTime="2026-02-28 13:20:22.313753525 +0000 UTC m=+236.556074182" Feb 28 13:20:22 crc kubenswrapper[4897]: I0228 13:20:22.360197 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jbqv4" podStartSLOduration=179.360180116 podStartE2EDuration="2m59.360180116s" podCreationTimestamp="2026-02-28 13:17:23 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:22.349671172 +0000 UTC m=+236.591991829" watchObservedRunningTime="2026-02-28 13:20:22.360180116 +0000 UTC m=+236.602500773" Feb 28 13:20:22 crc kubenswrapper[4897]: I0228 13:20:22.378258 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:22 crc kubenswrapper[4897]: E0228 13:20:22.378775 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:22.878762625 +0000 UTC m=+237.121083282 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:22 crc kubenswrapper[4897]: I0228 13:20:22.397634 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-5xn24" podStartSLOduration=11.397619252 podStartE2EDuration="11.397619252s" podCreationTimestamp="2026-02-28 13:20:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:22.394949305 +0000 UTC m=+236.637269962" watchObservedRunningTime="2026-02-28 13:20:22.397619252 +0000 UTC m=+236.639939909" Feb 28 13:20:22 crc kubenswrapper[4897]: I0228 13:20:22.479367 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:22 crc kubenswrapper[4897]: E0228 13:20:22.479717 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:22.979694021 +0000 UTC m=+237.222014688 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:22 crc kubenswrapper[4897]: I0228 13:20:22.513998 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jwnlh" podStartSLOduration=179.513982074 podStartE2EDuration="2m59.513982074s" podCreationTimestamp="2026-02-28 13:17:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:22.454797225 +0000 UTC m=+236.697117902" watchObservedRunningTime="2026-02-28 13:20:22.513982074 +0000 UTC m=+236.756302731" Feb 28 13:20:22 crc kubenswrapper[4897]: I0228 13:20:22.514398 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ccsdd" podStartSLOduration=180.514394298 podStartE2EDuration="3m0.514394298s" podCreationTimestamp="2026-02-28 13:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:22.51049419 +0000 UTC m=+236.752814847" watchObservedRunningTime="2026-02-28 13:20:22.514394298 +0000 UTC m=+236.756714955" Feb 28 13:20:22 crc kubenswrapper[4897]: I0228 13:20:22.580412 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:22 crc kubenswrapper[4897]: E0228 13:20:22.580711 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:23.08069913 +0000 UTC m=+237.323019787 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:22 crc kubenswrapper[4897]: I0228 13:20:22.681178 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:22 crc kubenswrapper[4897]: E0228 13:20:22.681375 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:23.181350357 +0000 UTC m=+237.423671014 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:22 crc kubenswrapper[4897]: I0228 13:20:22.681457 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:22 crc kubenswrapper[4897]: E0228 13:20:22.681744 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:23.18173602 +0000 UTC m=+237.424056667 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:22 crc kubenswrapper[4897]: I0228 13:20:22.692802 4897 ???:1] "http: TLS handshake error from 192.168.126.11:41164: no serving certificate available for the kubelet" Feb 28 13:20:22 crc kubenswrapper[4897]: I0228 13:20:22.782247 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:22 crc kubenswrapper[4897]: E0228 13:20:22.782481 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:23.2824549 +0000 UTC m=+237.524775557 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:22 crc kubenswrapper[4897]: I0228 13:20:22.782717 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:22 crc kubenswrapper[4897]: E0228 13:20:22.783013 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:23.283003398 +0000 UTC m=+237.525324065 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:22 crc kubenswrapper[4897]: I0228 13:20:22.881522 4897 patch_prober.go:28] interesting pod/router-default-5444994796-5fwp4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 13:20:22 crc kubenswrapper[4897]: [-]has-synced failed: reason withheld Feb 28 13:20:22 crc kubenswrapper[4897]: [+]process-running ok Feb 28 13:20:22 crc kubenswrapper[4897]: healthz check failed Feb 28 13:20:22 crc kubenswrapper[4897]: I0228 13:20:22.881581 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5fwp4" podUID="342631a0-9c4d-4e4f-9743-4d13ea740a55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 13:20:22 crc kubenswrapper[4897]: I0228 13:20:22.884023 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:22 crc kubenswrapper[4897]: E0228 13:20:22.884173 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-28 13:20:23.384150091 +0000 UTC m=+237.626470748 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:22 crc kubenswrapper[4897]: I0228 13:20:22.884436 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:22 crc kubenswrapper[4897]: E0228 13:20:22.884820 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:23.384803693 +0000 UTC m=+237.627124350 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:22 crc kubenswrapper[4897]: I0228 13:20:22.986143 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:22 crc kubenswrapper[4897]: E0228 13:20:22.986733 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:23.486716961 +0000 UTC m=+237.729037608 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:22 crc kubenswrapper[4897]: I0228 13:20:22.997648 4897 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-6st7l container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 28 13:20:22 crc kubenswrapper[4897]: I0228 13:20:22.997734 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6st7l" podUID="3d1d3880-3411-45f5-8835-a4db59c38cfe" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.17:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.015383 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-t87x8" event={"ID":"9a147d2f-de25-4ba1-8858-392c56b60a20","Type":"ContainerStarted","Data":"f3e64db2cd17342e98d3dbf8f338e29e2f47638693d1638dd978b5432ad6f5fb"} Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.029017 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-q9cdm" event={"ID":"42271fb7-85dd-4e09-b2b9-f78ab6cfcdcf","Type":"ContainerStarted","Data":"7bd00929a1cb60db4499de2a1df8e16d63a04b612c7930d6d1dd37a9a50b3c14"} Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 
13:20:23.030944 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vk9dl" Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.054536 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-t87x8" podStartSLOduration=181.054521913 podStartE2EDuration="3m1.054521913s" podCreationTimestamp="2026-02-28 13:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:23.04925198 +0000 UTC m=+237.291572637" watchObservedRunningTime="2026-02-28 13:20:23.054521913 +0000 UTC m=+237.296842570" Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.088047 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:23 crc kubenswrapper[4897]: E0228 13:20:23.088687 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:23.588669991 +0000 UTC m=+237.830990658 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.189818 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:23 crc kubenswrapper[4897]: E0228 13:20:23.194971 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:23.694950923 +0000 UTC m=+237.937271580 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.291133 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:23 crc kubenswrapper[4897]: E0228 13:20:23.291655 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:23.791642661 +0000 UTC m=+238.033963318 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.392377 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:23 crc kubenswrapper[4897]: E0228 13:20:23.392559 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:23.892530856 +0000 UTC m=+238.134851513 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.392868 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:23 crc kubenswrapper[4897]: E0228 13:20:23.393205 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:23.893193818 +0000 UTC m=+238.135514475 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.435800 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-rd9tl" Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.436147 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-rd9tl" Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.441542 4897 patch_prober.go:28] interesting pod/console-f9d7485db-rd9tl container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.26:8443/health\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.441585 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-rd9tl" podUID="3423cf07-c57b-41f3-82da-f497649699db" containerName="console" probeResult="failure" output="Get \"https://10.217.0.26:8443/health\": dial tcp 10.217.0.26:8443: connect: connection refused" Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.447488 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.448197 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 28 13:20:23 crc kubenswrapper[4897]: W0228 13:20:23.449560 4897 reflector.go:561] object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n": failed to list *v1.Secret: secrets "installer-sa-dockercfg-kjl2n" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-controller-manager": no relationship found between node 'crc' and this object Feb 28 13:20:23 crc kubenswrapper[4897]: E0228 13:20:23.449597 4897 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager\"/\"installer-sa-dockercfg-kjl2n\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"installer-sa-dockercfg-kjl2n\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 28 13:20:23 crc kubenswrapper[4897]: W0228 13:20:23.452085 4897 reflector.go:561] object-"openshift-kube-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager": no relationship found between node 'crc' and this object Feb 28 13:20:23 crc kubenswrapper[4897]: E0228 13:20:23.452139 4897 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.493954 4897 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:23 crc kubenswrapper[4897]: E0228 13:20:23.494056 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:23.994038961 +0000 UTC m=+238.236359608 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.494253 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:23 crc kubenswrapper[4897]: E0228 13:20:23.494548 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:23.994541268 +0000 UTC m=+238.236861925 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.512883 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.581625 4897 patch_prober.go:28] interesting pod/downloads-7954f5f757-fv2rz container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.581928 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fv2rz" podUID="4f2510cb-e89f-49a0-b5cd-aca1a5c51178" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.581713 4897 patch_prober.go:28] interesting pod/downloads-7954f5f757-fv2rz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.582159 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fv2rz" podUID="4f2510cb-e89f-49a0-b5cd-aca1a5c51178" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 
10.217.0.10:8080: connect: connection refused" Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.588542 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-95h9j" Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.596530 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:23 crc kubenswrapper[4897]: E0228 13:20:23.596677 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:24.096652983 +0000 UTC m=+238.338973640 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.597056 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/555fd921-4e06-4a2b-b800-744d83d5caf1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"555fd921-4e06-4a2b-b800-744d83d5caf1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.597171 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/555fd921-4e06-4a2b-b800-744d83d5caf1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"555fd921-4e06-4a2b-b800-744d83d5caf1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.597327 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:23 crc kubenswrapper[4897]: E0228 13:20:23.599291 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-28 13:20:24.099275479 +0000 UTC m=+238.341596136 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.612903 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2ztc" Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.613210 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2ztc" Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.689088 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-fq58q" Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.698023 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.698279 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/555fd921-4e06-4a2b-b800-744d83d5caf1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"555fd921-4e06-4a2b-b800-744d83d5caf1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 28 13:20:23 crc kubenswrapper[4897]: 
I0228 13:20:23.698387 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/555fd921-4e06-4a2b-b800-744d83d5caf1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"555fd921-4e06-4a2b-b800-744d83d5caf1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 28 13:20:23 crc kubenswrapper[4897]: E0228 13:20:23.699647 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:24.199632357 +0000 UTC m=+238.441953004 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.701624 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/555fd921-4e06-4a2b-b800-744d83d5caf1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"555fd921-4e06-4a2b-b800-744d83d5caf1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.809025 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:23 crc kubenswrapper[4897]: E0228 13:20:23.810434 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:24.310422876 +0000 UTC m=+238.552743523 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.884367 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bfpj4"] Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.886033 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bfpj4" Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.890006 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.890612 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-5fwp4" Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.898336 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2ztc" Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.905509 4897 patch_prober.go:28] interesting pod/router-default-5444994796-5fwp4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 13:20:23 crc kubenswrapper[4897]: [-]has-synced failed: reason withheld Feb 28 13:20:23 crc kubenswrapper[4897]: [+]process-running ok Feb 28 13:20:23 crc kubenswrapper[4897]: healthz check failed Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.905565 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5fwp4" podUID="342631a0-9c4d-4e4f-9743-4d13ea740a55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.909957 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:23 crc kubenswrapper[4897]: E0228 13:20:23.910117 4897 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:24.410085161 +0000 UTC m=+238.652405818 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.910451 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:23 crc kubenswrapper[4897]: E0228 13:20:23.910917 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:24.410900288 +0000 UTC m=+238.653220945 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.924809 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bfpj4"] Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.985594 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-blf86" Feb 28 13:20:23 crc kubenswrapper[4897]: I0228 13:20:23.999836 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-blf86" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.024191 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.024368 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c752ba9a-f6f8-4530-91a9-c06ff609e9d8-utilities\") pod \"community-operators-bfpj4\" (UID: \"c752ba9a-f6f8-4530-91a9-c06ff609e9d8\") " pod="openshift-marketplace/community-operators-bfpj4" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.024471 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-94tpx\" (UniqueName: \"kubernetes.io/projected/c752ba9a-f6f8-4530-91a9-c06ff609e9d8-kube-api-access-94tpx\") pod \"community-operators-bfpj4\" (UID: \"c752ba9a-f6f8-4530-91a9-c06ff609e9d8\") " pod="openshift-marketplace/community-operators-bfpj4" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.024502 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c752ba9a-f6f8-4530-91a9-c06ff609e9d8-catalog-content\") pod \"community-operators-bfpj4\" (UID: \"c752ba9a-f6f8-4530-91a9-c06ff609e9d8\") " pod="openshift-marketplace/community-operators-bfpj4" Feb 28 13:20:24 crc kubenswrapper[4897]: E0228 13:20:24.025390 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:24.525374878 +0000 UTC m=+238.767695535 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.025551 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-q9d2n"] Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.026465 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q9d2n" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.029839 4897 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-84hkx container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.18:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.029884 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" podUID="49d0a669-bb05-4da5-9e58-789b58c0797b" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.18:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.037496 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.044489 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q9d2n"] Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.083324 4897 ???:1] "http: TLS handshake error from 192.168.126.11:41166: no serving certificate available for the kubelet" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.084067 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-q9cdm" event={"ID":"42271fb7-85dd-4e09-b2b9-f78ab6cfcdcf","Type":"ContainerStarted","Data":"3b35b60cf278b7c49b4bbd9ad741d9b6563285ac935b8d14dc9bccb08a1cff0e"} Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.103461 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2ztc" Feb 28 13:20:24 crc 
kubenswrapper[4897]: I0228 13:20:24.125675 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c752ba9a-f6f8-4530-91a9-c06ff609e9d8-catalog-content\") pod \"community-operators-bfpj4\" (UID: \"c752ba9a-f6f8-4530-91a9-c06ff609e9d8\") " pod="openshift-marketplace/community-operators-bfpj4" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.125762 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.125798 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c752ba9a-f6f8-4530-91a9-c06ff609e9d8-utilities\") pod \"community-operators-bfpj4\" (UID: \"c752ba9a-f6f8-4530-91a9-c06ff609e9d8\") " pod="openshift-marketplace/community-operators-bfpj4" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.125874 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94tpx\" (UniqueName: \"kubernetes.io/projected/c752ba9a-f6f8-4530-91a9-c06ff609e9d8-kube-api-access-94tpx\") pod \"community-operators-bfpj4\" (UID: \"c752ba9a-f6f8-4530-91a9-c06ff609e9d8\") " pod="openshift-marketplace/community-operators-bfpj4" Feb 28 13:20:24 crc kubenswrapper[4897]: E0228 13:20:24.127765 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:24.627750412 +0000 UTC m=+238.870071069 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.128234 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c752ba9a-f6f8-4530-91a9-c06ff609e9d8-utilities\") pod \"community-operators-bfpj4\" (UID: \"c752ba9a-f6f8-4530-91a9-c06ff609e9d8\") " pod="openshift-marketplace/community-operators-bfpj4" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.128516 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c752ba9a-f6f8-4530-91a9-c06ff609e9d8-catalog-content\") pod \"community-operators-bfpj4\" (UID: \"c752ba9a-f6f8-4530-91a9-c06ff609e9d8\") " pod="openshift-marketplace/community-operators-bfpj4" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.153019 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6st7l" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.175888 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94tpx\" (UniqueName: \"kubernetes.io/projected/c752ba9a-f6f8-4530-91a9-c06ff609e9d8-kube-api-access-94tpx\") pod \"community-operators-bfpj4\" (UID: \"c752ba9a-f6f8-4530-91a9-c06ff609e9d8\") " pod="openshift-marketplace/community-operators-bfpj4" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.203387 4897 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-sv5dr"] Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.204286 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sv5dr" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.205520 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bfpj4" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.226910 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:24 crc kubenswrapper[4897]: E0228 13:20:24.227007 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:24.726986793 +0000 UTC m=+238.969307450 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.227179 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5-utilities\") pod \"certified-operators-q9d2n\" (UID: \"657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5\") " pod="openshift-marketplace/certified-operators-q9d2n" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.227335 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcgzv\" (UniqueName: \"kubernetes.io/projected/657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5-kube-api-access-bcgzv\") pod \"certified-operators-q9d2n\" (UID: \"657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5\") " pod="openshift-marketplace/certified-operators-q9d2n" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.227383 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.227521 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5-catalog-content\") pod \"certified-operators-q9d2n\" (UID: \"657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5\") " pod="openshift-marketplace/certified-operators-q9d2n" Feb 28 13:20:24 crc kubenswrapper[4897]: E0228 13:20:24.229733 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:24.729720492 +0000 UTC m=+238.972041149 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.238054 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sv5dr"] Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.241685 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-t87x8" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.245769 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-l7m8v" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.245992 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-t87x8" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.337713 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.337935 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcgzv\" (UniqueName: \"kubernetes.io/projected/657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5-kube-api-access-bcgzv\") pod \"certified-operators-q9d2n\" (UID: \"657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5\") " pod="openshift-marketplace/certified-operators-q9d2n" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.338032 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbsfd\" (UniqueName: \"kubernetes.io/projected/c8e82c23-54f4-43a4-904b-4f90348580ac-kube-api-access-xbsfd\") pod \"community-operators-sv5dr\" (UID: \"c8e82c23-54f4-43a4-904b-4f90348580ac\") " pod="openshift-marketplace/community-operators-sv5dr" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.338075 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5-catalog-content\") pod \"certified-operators-q9d2n\" (UID: \"657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5\") " pod="openshift-marketplace/certified-operators-q9d2n" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.338121 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5-utilities\") pod \"certified-operators-q9d2n\" (UID: \"657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5\") " pod="openshift-marketplace/certified-operators-q9d2n" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.338140 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/c8e82c23-54f4-43a4-904b-4f90348580ac-utilities\") pod \"community-operators-sv5dr\" (UID: \"c8e82c23-54f4-43a4-904b-4f90348580ac\") " pod="openshift-marketplace/community-operators-sv5dr" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.338173 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8e82c23-54f4-43a4-904b-4f90348580ac-catalog-content\") pod \"community-operators-sv5dr\" (UID: \"c8e82c23-54f4-43a4-904b-4f90348580ac\") " pod="openshift-marketplace/community-operators-sv5dr" Feb 28 13:20:24 crc kubenswrapper[4897]: E0228 13:20:24.338292 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:24.838270488 +0000 UTC m=+239.080591145 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.339027 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5-catalog-content\") pod \"certified-operators-q9d2n\" (UID: \"657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5\") " pod="openshift-marketplace/certified-operators-q9d2n" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.339464 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5-utilities\") pod \"certified-operators-q9d2n\" (UID: \"657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5\") " pod="openshift-marketplace/certified-operators-q9d2n" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.348696 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jwnlh" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.372848 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jwnlh" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.385700 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.386683 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.389792 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.390056 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.398347 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcgzv\" (UniqueName: \"kubernetes.io/projected/657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5-kube-api-access-bcgzv\") pod \"certified-operators-q9d2n\" (UID: \"657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5\") " pod="openshift-marketplace/certified-operators-q9d2n" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.420802 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.441277 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02ffda6c-19d5-465a-8db7-d094fb1590b8-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"02ffda6c-19d5-465a-8db7-d094fb1590b8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.441332 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbsfd\" (UniqueName: \"kubernetes.io/projected/c8e82c23-54f4-43a4-904b-4f90348580ac-kube-api-access-xbsfd\") pod \"community-operators-sv5dr\" (UID: \"c8e82c23-54f4-43a4-904b-4f90348580ac\") " pod="openshift-marketplace/community-operators-sv5dr" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.441380 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/c8e82c23-54f4-43a4-904b-4f90348580ac-utilities\") pod \"community-operators-sv5dr\" (UID: \"c8e82c23-54f4-43a4-904b-4f90348580ac\") " pod="openshift-marketplace/community-operators-sv5dr" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.441434 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8e82c23-54f4-43a4-904b-4f90348580ac-catalog-content\") pod \"community-operators-sv5dr\" (UID: \"c8e82c23-54f4-43a4-904b-4f90348580ac\") " pod="openshift-marketplace/community-operators-sv5dr" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.441462 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/02ffda6c-19d5-465a-8db7-d094fb1590b8-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"02ffda6c-19d5-465a-8db7-d094fb1590b8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.441493 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:24 crc kubenswrapper[4897]: E0228 13:20:24.443570 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:24.943554707 +0000 UTC m=+239.185875354 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.443818 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8e82c23-54f4-43a4-904b-4f90348580ac-catalog-content\") pod \"community-operators-sv5dr\" (UID: \"c8e82c23-54f4-43a4-904b-4f90348580ac\") " pod="openshift-marketplace/community-operators-sv5dr" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.444183 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8e82c23-54f4-43a4-904b-4f90348580ac-utilities\") pod \"community-operators-sv5dr\" (UID: \"c8e82c23-54f4-43a4-904b-4f90348580ac\") " pod="openshift-marketplace/community-operators-sv5dr" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.502188 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.502526 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-whbtd"] Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.506403 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-whbtd"] Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.506618 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-whbtd" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.508769 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jbqv4" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.522143 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbsfd\" (UniqueName: \"kubernetes.io/projected/c8e82c23-54f4-43a4-904b-4f90348580ac-kube-api-access-xbsfd\") pod \"community-operators-sv5dr\" (UID: \"c8e82c23-54f4-43a4-904b-4f90348580ac\") " pod="openshift-marketplace/community-operators-sv5dr" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.542945 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:24 crc kubenswrapper[4897]: E0228 13:20:24.543400 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:25.043379457 +0000 UTC m=+239.285700114 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.544076 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02ffda6c-19d5-465a-8db7-d094fb1590b8-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"02ffda6c-19d5-465a-8db7-d094fb1590b8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.544277 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/02ffda6c-19d5-465a-8db7-d094fb1590b8-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"02ffda6c-19d5-465a-8db7-d094fb1590b8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.544328 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:24 crc kubenswrapper[4897]: E0228 13:20:24.544619 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-28 13:20:25.044610707 +0000 UTC m=+239.286931364 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.551115 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/02ffda6c-19d5-465a-8db7-d094fb1590b8-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"02ffda6c-19d5-465a-8db7-d094fb1590b8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.567300 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.570775 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sv5dr" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.596820 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jbqv4" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.621738 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.639019 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02ffda6c-19d5-465a-8db7-d094fb1590b8-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"02ffda6c-19d5-465a-8db7-d094fb1590b8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.650764 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q9d2n" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.651257 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.651534 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vmpb\" (UniqueName: \"kubernetes.io/projected/fa5ec60f-f348-43b8-8ef2-9caafd08cb0d-kube-api-access-9vmpb\") pod \"certified-operators-whbtd\" (UID: \"fa5ec60f-f348-43b8-8ef2-9caafd08cb0d\") " pod="openshift-marketplace/certified-operators-whbtd" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.651566 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa5ec60f-f348-43b8-8ef2-9caafd08cb0d-catalog-content\") pod \"certified-operators-whbtd\" (UID: \"fa5ec60f-f348-43b8-8ef2-9caafd08cb0d\") " pod="openshift-marketplace/certified-operators-whbtd" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.651641 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa5ec60f-f348-43b8-8ef2-9caafd08cb0d-utilities\") pod \"certified-operators-whbtd\" (UID: \"fa5ec60f-f348-43b8-8ef2-9caafd08cb0d\") " pod="openshift-marketplace/certified-operators-whbtd" Feb 28 13:20:24 crc kubenswrapper[4897]: E0228 13:20:24.652018 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:25.151999155 +0000 UTC m=+239.394319812 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.682129 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/555fd921-4e06-4a2b-b800-744d83d5caf1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"555fd921-4e06-4a2b-b800-744d83d5caf1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.745517 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.754578 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:24 crc kubenswrapper[4897]: E0228 13:20:24.754873 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:25.254860365 +0000 UTC m=+239.497181022 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.758270 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa5ec60f-f348-43b8-8ef2-9caafd08cb0d-utilities\") pod \"certified-operators-whbtd\" (UID: \"fa5ec60f-f348-43b8-8ef2-9caafd08cb0d\") " pod="openshift-marketplace/certified-operators-whbtd" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.758331 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vmpb\" (UniqueName: \"kubernetes.io/projected/fa5ec60f-f348-43b8-8ef2-9caafd08cb0d-kube-api-access-9vmpb\") pod \"certified-operators-whbtd\" (UID: \"fa5ec60f-f348-43b8-8ef2-9caafd08cb0d\") " pod="openshift-marketplace/certified-operators-whbtd" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.758374 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa5ec60f-f348-43b8-8ef2-9caafd08cb0d-catalog-content\") pod \"certified-operators-whbtd\" (UID: \"fa5ec60f-f348-43b8-8ef2-9caafd08cb0d\") " pod="openshift-marketplace/certified-operators-whbtd" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.758840 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa5ec60f-f348-43b8-8ef2-9caafd08cb0d-catalog-content\") pod \"certified-operators-whbtd\" (UID: \"fa5ec60f-f348-43b8-8ef2-9caafd08cb0d\") " 
pod="openshift-marketplace/certified-operators-whbtd" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.764427 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa5ec60f-f348-43b8-8ef2-9caafd08cb0d-utilities\") pod \"certified-operators-whbtd\" (UID: \"fa5ec60f-f348-43b8-8ef2-9caafd08cb0d\") " pod="openshift-marketplace/certified-operators-whbtd" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.785297 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vmpb\" (UniqueName: \"kubernetes.io/projected/fa5ec60f-f348-43b8-8ef2-9caafd08cb0d-kube-api-access-9vmpb\") pod \"certified-operators-whbtd\" (UID: \"fa5ec60f-f348-43b8-8ef2-9caafd08cb0d\") " pod="openshift-marketplace/certified-operators-whbtd" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.824601 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-whbtd" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.871953 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:24 crc kubenswrapper[4897]: E0228 13:20:24.872354 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:25.372338174 +0000 UTC m=+239.614658821 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.900996 4897 patch_prober.go:28] interesting pod/router-default-5444994796-5fwp4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 13:20:24 crc kubenswrapper[4897]: [-]has-synced failed: reason withheld Feb 28 13:20:24 crc kubenswrapper[4897]: [+]process-running ok Feb 28 13:20:24 crc kubenswrapper[4897]: healthz check failed Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.901043 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5fwp4" podUID="342631a0-9c4d-4e4f-9743-4d13ea740a55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.972178 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 28 13:20:24 crc kubenswrapper[4897]: I0228 13:20:24.973486 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:24 crc kubenswrapper[4897]: E0228 13:20:24.973925 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:25.473906841 +0000 UTC m=+239.716227498 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.032894 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-95h9j"] Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.033100 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-95h9j" podUID="61f10600-21dd-4043-af69-aa0fdfd246f7" containerName="controller-manager" containerID="cri-o://db1122182a56cae33ce37ab1ea5d443e22edae836c247c416ae5a23cfaf9bdfc" gracePeriod=30 Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.077851 
4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:25 crc kubenswrapper[4897]: E0228 13:20:25.078171 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:25.578155976 +0000 UTC m=+239.820476633 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.169376 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-q9cdm" event={"ID":"42271fb7-85dd-4e09-b2b9-f78ab6cfcdcf","Type":"ContainerStarted","Data":"73f3c77b7848e84d0f46eafded24f2753982b73c770efa62dadf125fcd604ff0"} Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.180194 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:25 crc kubenswrapper[4897]: E0228 
13:20:25.187277 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:25.687260041 +0000 UTC m=+239.929580698 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.221958 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jwnlh"] Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.227978 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bfpj4"] Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.254561 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sv5dr"] Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.267543 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q9d2n"] Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.285613 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:25 crc kubenswrapper[4897]: E0228 13:20:25.285895 4897 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:25.785881501 +0000 UTC m=+240.028202158 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.362860 4897 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.386691 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:25 crc kubenswrapper[4897]: E0228 13:20:25.387008 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:25.886996264 +0000 UTC m=+240.129316921 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.409275 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.417544 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.450836 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-whbtd"] Feb 28 13:20:25 crc kubenswrapper[4897]: W0228 13:20:25.485028 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa5ec60f_f348_43b8_8ef2_9caafd08cb0d.slice/crio-925fd28ec78fc1e178167cff0f0fe566e81a460fc690648155f45905fa143b7b WatchSource:0}: Error finding container 925fd28ec78fc1e178167cff0f0fe566e81a460fc690648155f45905fa143b7b: Status 404 returned error can't find the container with id 925fd28ec78fc1e178167cff0f0fe566e81a460fc690648155f45905fa143b7b Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.487959 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:25 crc kubenswrapper[4897]: E0228 13:20:25.488113 4897 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:25.988097676 +0000 UTC m=+240.230418333 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.488178 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:25 crc kubenswrapper[4897]: E0228 13:20:25.488485 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:25.988478629 +0000 UTC m=+240.230799286 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.589259 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:25 crc kubenswrapper[4897]: E0228 13:20:25.589392 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:26.089373164 +0000 UTC m=+240.331693821 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.589433 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:25 crc kubenswrapper[4897]: E0228 13:20:25.589903 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:26.089896681 +0000 UTC m=+240.332217338 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.690191 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:25 crc kubenswrapper[4897]: E0228 13:20:25.691029 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:26.190824637 +0000 UTC m=+240.433145294 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.691062 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:25 crc kubenswrapper[4897]: E0228 13:20:25.691333 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:26.191325914 +0000 UTC m=+240.433646571 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.725759 4897 patch_prober.go:28] interesting pod/apiserver-76f77b778f-t87x8 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 28 13:20:25 crc kubenswrapper[4897]: [+]log ok Feb 28 13:20:25 crc kubenswrapper[4897]: [+]etcd ok Feb 28 13:20:25 crc kubenswrapper[4897]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 28 13:20:25 crc kubenswrapper[4897]: [+]poststarthook/generic-apiserver-start-informers ok Feb 28 13:20:25 crc kubenswrapper[4897]: [+]poststarthook/max-in-flight-filter ok Feb 28 13:20:25 crc kubenswrapper[4897]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 28 13:20:25 crc kubenswrapper[4897]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 28 13:20:25 crc kubenswrapper[4897]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 28 13:20:25 crc kubenswrapper[4897]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 28 13:20:25 crc kubenswrapper[4897]: [+]poststarthook/project.openshift.io-projectcache ok Feb 28 13:20:25 crc kubenswrapper[4897]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 28 13:20:25 crc kubenswrapper[4897]: [+]poststarthook/openshift.io-startinformers ok Feb 28 13:20:25 crc kubenswrapper[4897]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 28 13:20:25 crc 
kubenswrapper[4897]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 28 13:20:25 crc kubenswrapper[4897]: livez check failed Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.725821 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-t87x8" podUID="9a147d2f-de25-4ba1-8858-392c56b60a20" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.774281 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j4slc"] Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.775371 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j4slc" Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.777268 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.785021 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4slc"] Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.791825 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:25 crc kubenswrapper[4897]: E0228 13:20:25.791998 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:26.291982211 +0000 UTC m=+240.534302868 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.792167 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34293634-5315-4dac-94b9-258b99c8a9c1-catalog-content\") pod \"redhat-marketplace-j4slc\" (UID: \"34293634-5315-4dac-94b9-258b99c8a9c1\") " pod="openshift-marketplace/redhat-marketplace-j4slc" Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.792211 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzbzh\" (UniqueName: \"kubernetes.io/projected/34293634-5315-4dac-94b9-258b99c8a9c1-kube-api-access-dzbzh\") pod \"redhat-marketplace-j4slc\" (UID: \"34293634-5315-4dac-94b9-258b99c8a9c1\") " pod="openshift-marketplace/redhat-marketplace-j4slc" Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.792234 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34293634-5315-4dac-94b9-258b99c8a9c1-utilities\") pod \"redhat-marketplace-j4slc\" (UID: \"34293634-5315-4dac-94b9-258b99c8a9c1\") " pod="openshift-marketplace/redhat-marketplace-j4slc" Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.792295 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:25 crc kubenswrapper[4897]: E0228 13:20:25.792556 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:26.29254398 +0000 UTC m=+240.534864637 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.881291 4897 patch_prober.go:28] interesting pod/router-default-5444994796-5fwp4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 13:20:25 crc kubenswrapper[4897]: [-]has-synced failed: reason withheld Feb 28 13:20:25 crc kubenswrapper[4897]: [+]process-running ok Feb 28 13:20:25 crc kubenswrapper[4897]: healthz check failed Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.881542 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5fwp4" podUID="342631a0-9c4d-4e4f-9743-4d13ea740a55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.892959 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:25 crc kubenswrapper[4897]: E0228 13:20:25.893019 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 13:20:26.393007321 +0000 UTC m=+240.635327978 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.893323 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzbzh\" (UniqueName: \"kubernetes.io/projected/34293634-5315-4dac-94b9-258b99c8a9c1-kube-api-access-dzbzh\") pod \"redhat-marketplace-j4slc\" (UID: \"34293634-5315-4dac-94b9-258b99c8a9c1\") " pod="openshift-marketplace/redhat-marketplace-j4slc" Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.893352 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34293634-5315-4dac-94b9-258b99c8a9c1-utilities\") pod \"redhat-marketplace-j4slc\" (UID: \"34293634-5315-4dac-94b9-258b99c8a9c1\") " pod="openshift-marketplace/redhat-marketplace-j4slc" Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.893402 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.893428 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34293634-5315-4dac-94b9-258b99c8a9c1-catalog-content\") pod \"redhat-marketplace-j4slc\" (UID: \"34293634-5315-4dac-94b9-258b99c8a9c1\") " pod="openshift-marketplace/redhat-marketplace-j4slc" Feb 28 13:20:25 crc kubenswrapper[4897]: E0228 13:20:25.893897 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 13:20:26.39388941 +0000 UTC m=+240.636210067 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k72ms" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.893899 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34293634-5315-4dac-94b9-258b99c8a9c1-utilities\") pod \"redhat-marketplace-j4slc\" (UID: \"34293634-5315-4dac-94b9-258b99c8a9c1\") " pod="openshift-marketplace/redhat-marketplace-j4slc" Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.894193 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34293634-5315-4dac-94b9-258b99c8a9c1-catalog-content\") pod \"redhat-marketplace-j4slc\" (UID: \"34293634-5315-4dac-94b9-258b99c8a9c1\") " pod="openshift-marketplace/redhat-marketplace-j4slc" Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.895076 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-95h9j" Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.914141 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzbzh\" (UniqueName: \"kubernetes.io/projected/34293634-5315-4dac-94b9-258b99c8a9c1-kube-api-access-dzbzh\") pod \"redhat-marketplace-j4slc\" (UID: \"34293634-5315-4dac-94b9-258b99c8a9c1\") " pod="openshift-marketplace/redhat-marketplace-j4slc" Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.972635 4897 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-28T13:20:25.362886494Z","Handler":null,"Name":""} Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.975624 4897 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.975664 4897 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.994936 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.994982 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61f10600-21dd-4043-af69-aa0fdfd246f7-proxy-ca-bundles\") pod \"61f10600-21dd-4043-af69-aa0fdfd246f7\" (UID: 
\"61f10600-21dd-4043-af69-aa0fdfd246f7\") " Feb 28 13:20:25 crc kubenswrapper[4897]: I0228 13:20:25.995807 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61f10600-21dd-4043-af69-aa0fdfd246f7-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "61f10600-21dd-4043-af69-aa0fdfd246f7" (UID: "61f10600-21dd-4043-af69-aa0fdfd246f7"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.003440 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.096391 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61f10600-21dd-4043-af69-aa0fdfd246f7-client-ca\") pod \"61f10600-21dd-4043-af69-aa0fdfd246f7\" (UID: \"61f10600-21dd-4043-af69-aa0fdfd246f7\") " Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.096497 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sldxp\" (UniqueName: \"kubernetes.io/projected/61f10600-21dd-4043-af69-aa0fdfd246f7-kube-api-access-sldxp\") pod \"61f10600-21dd-4043-af69-aa0fdfd246f7\" (UID: \"61f10600-21dd-4043-af69-aa0fdfd246f7\") " Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.096530 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61f10600-21dd-4043-af69-aa0fdfd246f7-config\") pod \"61f10600-21dd-4043-af69-aa0fdfd246f7\" 
(UID: \"61f10600-21dd-4043-af69-aa0fdfd246f7\") " Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.096587 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61f10600-21dd-4043-af69-aa0fdfd246f7-serving-cert\") pod \"61f10600-21dd-4043-af69-aa0fdfd246f7\" (UID: \"61f10600-21dd-4043-af69-aa0fdfd246f7\") " Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.096863 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.096920 4897 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61f10600-21dd-4043-af69-aa0fdfd246f7-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.096949 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61f10600-21dd-4043-af69-aa0fdfd246f7-client-ca" (OuterVolumeSpecName: "client-ca") pod "61f10600-21dd-4043-af69-aa0fdfd246f7" (UID: "61f10600-21dd-4043-af69-aa0fdfd246f7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.097492 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61f10600-21dd-4043-af69-aa0fdfd246f7-config" (OuterVolumeSpecName: "config") pod "61f10600-21dd-4043-af69-aa0fdfd246f7" (UID: "61f10600-21dd-4043-af69-aa0fdfd246f7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.102249 4897 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.102285 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.106648 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61f10600-21dd-4043-af69-aa0fdfd246f7-kube-api-access-sldxp" (OuterVolumeSpecName: "kube-api-access-sldxp") pod "61f10600-21dd-4043-af69-aa0fdfd246f7" (UID: "61f10600-21dd-4043-af69-aa0fdfd246f7"). InnerVolumeSpecName "kube-api-access-sldxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.110668 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61f10600-21dd-4043-af69-aa0fdfd246f7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "61f10600-21dd-4043-af69-aa0fdfd246f7" (UID: "61f10600-21dd-4043-af69-aa0fdfd246f7"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.138215 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k72ms\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.176482 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j4slc" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.178409 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-78cjq"] Feb 28 13:20:26 crc kubenswrapper[4897]: E0228 13:20:26.178728 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61f10600-21dd-4043-af69-aa0fdfd246f7" containerName="controller-manager" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.178751 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="61f10600-21dd-4043-af69-aa0fdfd246f7" containerName="controller-manager" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.178868 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="61f10600-21dd-4043-af69-aa0fdfd246f7" containerName="controller-manager" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.179910 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-78cjq" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.194928 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-78cjq"] Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.197828 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-q9cdm" event={"ID":"42271fb7-85dd-4e09-b2b9-f78ab6cfcdcf","Type":"ContainerStarted","Data":"fbe61fbe04b67578483794318d699ee60594f024e6aae15fd4553212a6b48478"} Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.199599 4897 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61f10600-21dd-4043-af69-aa0fdfd246f7-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.199636 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sldxp\" (UniqueName: \"kubernetes.io/projected/61f10600-21dd-4043-af69-aa0fdfd246f7-kube-api-access-sldxp\") on node \"crc\" DevicePath \"\"" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.199648 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61f10600-21dd-4043-af69-aa0fdfd246f7-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.199658 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61f10600-21dd-4043-af69-aa0fdfd246f7-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.226029 4897 generic.go:334] "Generic (PLEG): container finished" podID="fa5ec60f-f348-43b8-8ef2-9caafd08cb0d" containerID="dc4cd0f9d5fe8614315e6d04a3abc018f5ab5d418088ee99b0895ab2e93b2f08" exitCode=0 Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.226095 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-whbtd" event={"ID":"fa5ec60f-f348-43b8-8ef2-9caafd08cb0d","Type":"ContainerDied","Data":"dc4cd0f9d5fe8614315e6d04a3abc018f5ab5d418088ee99b0895ab2e93b2f08"} Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.226122 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-whbtd" event={"ID":"fa5ec60f-f348-43b8-8ef2-9caafd08cb0d","Type":"ContainerStarted","Data":"925fd28ec78fc1e178167cff0f0fe566e81a460fc690648155f45905fa143b7b"} Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.248484 4897 generic.go:334] "Generic (PLEG): container finished" podID="c752ba9a-f6f8-4530-91a9-c06ff609e9d8" containerID="397ca9b24fac78e06d28aab8816987851cdbbbd8bcd94bd6a4d47eda114f87fa" exitCode=0 Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.248668 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bfpj4" event={"ID":"c752ba9a-f6f8-4530-91a9-c06ff609e9d8","Type":"ContainerDied","Data":"397ca9b24fac78e06d28aab8816987851cdbbbd8bcd94bd6a4d47eda114f87fa"} Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.248721 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bfpj4" event={"ID":"c752ba9a-f6f8-4530-91a9-c06ff609e9d8","Type":"ContainerStarted","Data":"a6e627d3c5553a6c72a551dab57427969f6d6bb056fba61f1414020ee2a972be"} Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.263576 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mfx26" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.264251 4897 generic.go:334] "Generic (PLEG): container finished" podID="657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5" containerID="edb89e8b7a19fbedcfd8a1ba8ccf4cff8d5b11db8f33a0abbff954a46c31e17f" exitCode=0 Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.264414 4897 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-q9d2n" event={"ID":"657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5","Type":"ContainerDied","Data":"edb89e8b7a19fbedcfd8a1ba8ccf4cff8d5b11db8f33a0abbff954a46c31e17f"} Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.264442 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q9d2n" event={"ID":"657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5","Type":"ContainerStarted","Data":"99dffa22991c224f4f7f8f25447344f24fa08ff7dffd1e5b80d8352af2ce25ae"} Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.266428 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-q9cdm" podStartSLOduration=15.266409944 podStartE2EDuration="15.266409944s" podCreationTimestamp="2026-02-28 13:20:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:26.240380001 +0000 UTC m=+240.482700668" watchObservedRunningTime="2026-02-28 13:20:26.266409944 +0000 UTC m=+240.508730601" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.267156 4897 generic.go:334] "Generic (PLEG): container finished" podID="c8e82c23-54f4-43a4-904b-4f90348580ac" containerID="36640f3ae8151a492ade0fe822ab1701188b3f336300cbaa3d7c76efa95fc78c" exitCode=0 Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.267228 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sv5dr" event={"ID":"c8e82c23-54f4-43a4-904b-4f90348580ac","Type":"ContainerDied","Data":"36640f3ae8151a492ade0fe822ab1701188b3f336300cbaa3d7c76efa95fc78c"} Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.267247 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sv5dr" 
event={"ID":"c8e82c23-54f4-43a4-904b-4f90348580ac","Type":"ContainerStarted","Data":"d656bd125eb0d4bbf50098d41dc9aee50fb27e6402c781ca71c6616742bdc399"} Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.270000 4897 generic.go:334] "Generic (PLEG): container finished" podID="61f10600-21dd-4043-af69-aa0fdfd246f7" containerID="db1122182a56cae33ce37ab1ea5d443e22edae836c247c416ae5a23cfaf9bdfc" exitCode=0 Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.270058 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-95h9j" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.270172 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-95h9j" event={"ID":"61f10600-21dd-4043-af69-aa0fdfd246f7","Type":"ContainerDied","Data":"db1122182a56cae33ce37ab1ea5d443e22edae836c247c416ae5a23cfaf9bdfc"} Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.270215 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-95h9j" event={"ID":"61f10600-21dd-4043-af69-aa0fdfd246f7","Type":"ContainerDied","Data":"17e607d6ad4bf69e0eae2417202bf44971b0e68842c9b95a5b01f8b90d0c98d3"} Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.270234 4897 scope.go:117] "RemoveContainer" containerID="db1122182a56cae33ce37ab1ea5d443e22edae836c247c416ae5a23cfaf9bdfc" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.272222 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"555fd921-4e06-4a2b-b800-744d83d5caf1","Type":"ContainerStarted","Data":"3639bb91afec5432e07458f951a75adcf9eb4e0fc88f8506200500dcb8bfabbb"} Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.272393 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"555fd921-4e06-4a2b-b800-744d83d5caf1","Type":"ContainerStarted","Data":"43fb8d2fd4e22e466a06d3316636bd75abccfdb78ea3da1dea19707c46f90b92"} Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.279843 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"02ffda6c-19d5-465a-8db7-d094fb1590b8","Type":"ContainerStarted","Data":"abdb484db7e33ba4860907e3c29d6faac882ce62e597f97339c310e001b9b1dd"} Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.280011 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"02ffda6c-19d5-465a-8db7-d094fb1590b8","Type":"ContainerStarted","Data":"21b02eba369ff60862106cac5e653b95a141f48643d7041f29b024f04b1355b3"} Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.280718 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jwnlh" podUID="ae9a4771-065d-4f75-8d15-0ea8525cbaf4" containerName="route-controller-manager" containerID="cri-o://0832ab0238e1850b169083be778186663f3906313ee5b91d98e4d96f75c4b08c" gracePeriod=30 Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.298243 4897 scope.go:117] "RemoveContainer" containerID="db1122182a56cae33ce37ab1ea5d443e22edae836c247c416ae5a23cfaf9bdfc" Feb 28 13:20:26 crc kubenswrapper[4897]: E0228 13:20:26.299356 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db1122182a56cae33ce37ab1ea5d443e22edae836c247c416ae5a23cfaf9bdfc\": container with ID starting with db1122182a56cae33ce37ab1ea5d443e22edae836c247c416ae5a23cfaf9bdfc not found: ID does not exist" containerID="db1122182a56cae33ce37ab1ea5d443e22edae836c247c416ae5a23cfaf9bdfc" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.299397 4897 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"db1122182a56cae33ce37ab1ea5d443e22edae836c247c416ae5a23cfaf9bdfc"} err="failed to get container status \"db1122182a56cae33ce37ab1ea5d443e22edae836c247c416ae5a23cfaf9bdfc\": rpc error: code = NotFound desc = could not find container \"db1122182a56cae33ce37ab1ea5d443e22edae836c247c416ae5a23cfaf9bdfc\": container with ID starting with db1122182a56cae33ce37ab1ea5d443e22edae836c247c416ae5a23cfaf9bdfc not found: ID does not exist" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.300781 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/488e35b2-95c6-4499-be2f-5a2d15cdf5d4-utilities\") pod \"redhat-marketplace-78cjq\" (UID: \"488e35b2-95c6-4499-be2f-5a2d15cdf5d4\") " pod="openshift-marketplace/redhat-marketplace-78cjq" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.300842 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/488e35b2-95c6-4499-be2f-5a2d15cdf5d4-catalog-content\") pod \"redhat-marketplace-78cjq\" (UID: \"488e35b2-95c6-4499-be2f-5a2d15cdf5d4\") " pod="openshift-marketplace/redhat-marketplace-78cjq" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.300867 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7cv6\" (UniqueName: \"kubernetes.io/projected/488e35b2-95c6-4499-be2f-5a2d15cdf5d4-kube-api-access-l7cv6\") pod \"redhat-marketplace-78cjq\" (UID: \"488e35b2-95c6-4499-be2f-5a2d15cdf5d4\") " pod="openshift-marketplace/redhat-marketplace-78cjq" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.339103 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.339083235 podStartE2EDuration="3.339083235s" 
podCreationTimestamp="2026-02-28 13:20:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:26.33893773 +0000 UTC m=+240.581258387" watchObservedRunningTime="2026-02-28 13:20:26.339083235 +0000 UTC m=+240.581403892" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.377217 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.377197933 podStartE2EDuration="2.377197933s" podCreationTimestamp="2026-02-28 13:20:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:26.375288951 +0000 UTC m=+240.617609608" watchObservedRunningTime="2026-02-28 13:20:26.377197933 +0000 UTC m=+240.619518590" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.399982 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.402779 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/488e35b2-95c6-4499-be2f-5a2d15cdf5d4-utilities\") pod \"redhat-marketplace-78cjq\" (UID: \"488e35b2-95c6-4499-be2f-5a2d15cdf5d4\") " pod="openshift-marketplace/redhat-marketplace-78cjq" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.402824 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7cv6\" (UniqueName: \"kubernetes.io/projected/488e35b2-95c6-4499-be2f-5a2d15cdf5d4-kube-api-access-l7cv6\") pod \"redhat-marketplace-78cjq\" (UID: \"488e35b2-95c6-4499-be2f-5a2d15cdf5d4\") " pod="openshift-marketplace/redhat-marketplace-78cjq" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.402849 4897 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/488e35b2-95c6-4499-be2f-5a2d15cdf5d4-catalog-content\") pod \"redhat-marketplace-78cjq\" (UID: \"488e35b2-95c6-4499-be2f-5a2d15cdf5d4\") " pod="openshift-marketplace/redhat-marketplace-78cjq" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.408325 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-95h9j"] Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.408671 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/488e35b2-95c6-4499-be2f-5a2d15cdf5d4-utilities\") pod \"redhat-marketplace-78cjq\" (UID: \"488e35b2-95c6-4499-be2f-5a2d15cdf5d4\") " pod="openshift-marketplace/redhat-marketplace-78cjq" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.408902 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/488e35b2-95c6-4499-be2f-5a2d15cdf5d4-catalog-content\") pod \"redhat-marketplace-78cjq\" (UID: \"488e35b2-95c6-4499-be2f-5a2d15cdf5d4\") " pod="openshift-marketplace/redhat-marketplace-78cjq" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.410339 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-95h9j"] Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.410749 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.438886 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7cv6\" (UniqueName: \"kubernetes.io/projected/488e35b2-95c6-4499-be2f-5a2d15cdf5d4-kube-api-access-l7cv6\") pod \"redhat-marketplace-78cjq\" (UID: \"488e35b2-95c6-4499-be2f-5a2d15cdf5d4\") " pod="openshift-marketplace/redhat-marketplace-78cjq" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.494494 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61f10600-21dd-4043-af69-aa0fdfd246f7" path="/var/lib/kubelet/pods/61f10600-21dd-4043-af69-aa0fdfd246f7/volumes" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.495512 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.496263 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4slc"] Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.518877 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-78cjq" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.654989 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jwnlh" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.675267 4897 ???:1] "http: TLS handshake error from 192.168.126.11:41182: no serving certificate available for the kubelet" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.809835 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ae9a4771-065d-4f75-8d15-0ea8525cbaf4-client-ca\") pod \"ae9a4771-065d-4f75-8d15-0ea8525cbaf4\" (UID: \"ae9a4771-065d-4f75-8d15-0ea8525cbaf4\") " Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.810520 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae9a4771-065d-4f75-8d15-0ea8525cbaf4-client-ca" (OuterVolumeSpecName: "client-ca") pod "ae9a4771-065d-4f75-8d15-0ea8525cbaf4" (UID: "ae9a4771-065d-4f75-8d15-0ea8525cbaf4"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.809958 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-78cjq"] Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.811035 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6l6f\" (UniqueName: \"kubernetes.io/projected/ae9a4771-065d-4f75-8d15-0ea8525cbaf4-kube-api-access-h6l6f\") pod \"ae9a4771-065d-4f75-8d15-0ea8525cbaf4\" (UID: \"ae9a4771-065d-4f75-8d15-0ea8525cbaf4\") " Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.811067 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae9a4771-065d-4f75-8d15-0ea8525cbaf4-serving-cert\") pod \"ae9a4771-065d-4f75-8d15-0ea8525cbaf4\" (UID: \"ae9a4771-065d-4f75-8d15-0ea8525cbaf4\") " Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.811289 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae9a4771-065d-4f75-8d15-0ea8525cbaf4-config\") pod \"ae9a4771-065d-4f75-8d15-0ea8525cbaf4\" (UID: \"ae9a4771-065d-4f75-8d15-0ea8525cbaf4\") " Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.811876 4897 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ae9a4771-065d-4f75-8d15-0ea8525cbaf4-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.812820 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae9a4771-065d-4f75-8d15-0ea8525cbaf4-config" (OuterVolumeSpecName: "config") pod "ae9a4771-065d-4f75-8d15-0ea8525cbaf4" (UID: "ae9a4771-065d-4f75-8d15-0ea8525cbaf4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.817858 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae9a4771-065d-4f75-8d15-0ea8525cbaf4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ae9a4771-065d-4f75-8d15-0ea8525cbaf4" (UID: "ae9a4771-065d-4f75-8d15-0ea8525cbaf4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.818933 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae9a4771-065d-4f75-8d15-0ea8525cbaf4-kube-api-access-h6l6f" (OuterVolumeSpecName: "kube-api-access-h6l6f") pod "ae9a4771-065d-4f75-8d15-0ea8525cbaf4" (UID: "ae9a4771-065d-4f75-8d15-0ea8525cbaf4"). InnerVolumeSpecName "kube-api-access-h6l6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:20:26 crc kubenswrapper[4897]: E0228 13:20:26.834022 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=c649bff3a95aba662bcce2caae28c3708550f8f13ed49f0891408168e71420e7/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 28 13:20:26 crc kubenswrapper[4897]: E0228 13:20:26.835324 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bcgzv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-q9d2n_openshift-marketplace(657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=c649bff3a95aba662bcce2caae28c3708550f8f13ed49f0891408168e71420e7/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 13:20:26 crc kubenswrapper[4897]: E0228 13:20:26.836543 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=c649bff3a95aba662bcce2caae28c3708550f8f13ed49f0891408168e71420e7/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/certified-operators-q9d2n" podUID="657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5" Feb 28 13:20:26 crc kubenswrapper[4897]: E0228 13:20:26.852681 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=c649bff3a95aba662bcce2caae28c3708550f8f13ed49f0891408168e71420e7/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 28 13:20:26 crc kubenswrapper[4897]: E0228 13:20:26.852847 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9vmpb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-whbtd_openshift-marketplace(fa5ec60f-f348-43b8-8ef2-9caafd08cb0d): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=c649bff3a95aba662bcce2caae28c3708550f8f13ed49f0891408168e71420e7/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 13:20:26 crc kubenswrapper[4897]: E0228 13:20:26.854369 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=c649bff3a95aba662bcce2caae28c3708550f8f13ed49f0891408168e71420e7/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/certified-operators-whbtd" podUID="fa5ec60f-f348-43b8-8ef2-9caafd08cb0d" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.864943 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-k72ms"] Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.883213 4897 patch_prober.go:28] interesting pod/router-default-5444994796-5fwp4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 13:20:26 crc kubenswrapper[4897]: [-]has-synced failed: reason withheld Feb 28 13:20:26 crc kubenswrapper[4897]: [+]process-running ok Feb 28 13:20:26 crc kubenswrapper[4897]: healthz check failed Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.883256 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5fwp4" podUID="342631a0-9c4d-4e4f-9743-4d13ea740a55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.913881 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6l6f\" (UniqueName: \"kubernetes.io/projected/ae9a4771-065d-4f75-8d15-0ea8525cbaf4-kube-api-access-h6l6f\") on node \"crc\" DevicePath \"\"" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.913919 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae9a4771-065d-4f75-8d15-0ea8525cbaf4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 13:20:26 crc kubenswrapper[4897]: I0228 13:20:26.913931 4897 reconciler_common.go:293] "Volume detached for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae9a4771-065d-4f75-8d15-0ea8525cbaf4-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.181497 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wj92z"] Feb 28 13:20:27 crc kubenswrapper[4897]: E0228 13:20:27.181705 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae9a4771-065d-4f75-8d15-0ea8525cbaf4" containerName="route-controller-manager" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.181717 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae9a4771-065d-4f75-8d15-0ea8525cbaf4" containerName="route-controller-manager" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.181821 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae9a4771-065d-4f75-8d15-0ea8525cbaf4" containerName="route-controller-manager" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.182479 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wj92z" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.185082 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.191597 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wj92z"] Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.221393 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1acb2f9f-f650-4f19-965e-48ba5a1ddac2-utilities\") pod \"redhat-operators-wj92z\" (UID: \"1acb2f9f-f650-4f19-965e-48ba5a1ddac2\") " pod="openshift-marketplace/redhat-operators-wj92z" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.221462 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz4l9\" (UniqueName: \"kubernetes.io/projected/1acb2f9f-f650-4f19-965e-48ba5a1ddac2-kube-api-access-gz4l9\") pod \"redhat-operators-wj92z\" (UID: \"1acb2f9f-f650-4f19-965e-48ba5a1ddac2\") " pod="openshift-marketplace/redhat-operators-wj92z" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.221501 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1acb2f9f-f650-4f19-965e-48ba5a1ddac2-catalog-content\") pod \"redhat-operators-wj92z\" (UID: \"1acb2f9f-f650-4f19-965e-48ba5a1ddac2\") " pod="openshift-marketplace/redhat-operators-wj92z" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.241484 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-549fb47d5d-22c8f"] Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.242076 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-549fb47d5d-22c8f" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.246091 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.246927 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.246945 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.247098 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.247192 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.247254 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.249980 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5969cf9b49-8bk2x"] Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.250944 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5969cf9b49-8bk2x" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.262227 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.264904 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5969cf9b49-8bk2x"] Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.270474 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-549fb47d5d-22c8f"] Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.301004 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" event={"ID":"5a017c06-8f6f-4638-ae70-2715eb539d7c","Type":"ContainerStarted","Data":"b27bf6c1cececaecd48cd1b5cc3c3ec40cfbca32b0f0eae0d9a95944ccbadee8"} Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.301058 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" event={"ID":"5a017c06-8f6f-4638-ae70-2715eb539d7c","Type":"ContainerStarted","Data":"6a3aeb07bfe9f8d9907db70a3428fcda8f0d4ea8de442aa93865bd22c176d8d0"} Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.301107 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.309133 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"555fd921-4e06-4a2b-b800-744d83d5caf1","Type":"ContainerDied","Data":"3639bb91afec5432e07458f951a75adcf9eb4e0fc88f8506200500dcb8bfabbb"} Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.309291 4897 generic.go:334] "Generic (PLEG): container finished" 
podID="555fd921-4e06-4a2b-b800-744d83d5caf1" containerID="3639bb91afec5432e07458f951a75adcf9eb4e0fc88f8506200500dcb8bfabbb" exitCode=0 Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.311043 4897 generic.go:334] "Generic (PLEG): container finished" podID="488e35b2-95c6-4499-be2f-5a2d15cdf5d4" containerID="70606a1da6074a82386ff1f801b04303b4ac060b99e46385ab8a1af82f9e1156" exitCode=0 Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.311100 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-78cjq" event={"ID":"488e35b2-95c6-4499-be2f-5a2d15cdf5d4","Type":"ContainerDied","Data":"70606a1da6074a82386ff1f801b04303b4ac060b99e46385ab8a1af82f9e1156"} Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.311124 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-78cjq" event={"ID":"488e35b2-95c6-4499-be2f-5a2d15cdf5d4","Type":"ContainerStarted","Data":"52784002cf7dc5ad99990469b2235a950f38ac3917c63b25d3600f515726bdc3"} Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.315896 4897 generic.go:334] "Generic (PLEG): container finished" podID="ae9a4771-065d-4f75-8d15-0ea8525cbaf4" containerID="0832ab0238e1850b169083be778186663f3906313ee5b91d98e4d96f75c4b08c" exitCode=0 Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.316004 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jwnlh" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.318630 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jwnlh" event={"ID":"ae9a4771-065d-4f75-8d15-0ea8525cbaf4","Type":"ContainerDied","Data":"0832ab0238e1850b169083be778186663f3906313ee5b91d98e4d96f75c4b08c"} Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.318677 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jwnlh" event={"ID":"ae9a4771-065d-4f75-8d15-0ea8525cbaf4","Type":"ContainerDied","Data":"ab34c38b156bfa5378119d719fe5bbf7996a0263d82cae5a6dae2099fb8cb15d"} Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.318697 4897 scope.go:117] "RemoveContainer" containerID="0832ab0238e1850b169083be778186663f3906313ee5b91d98e4d96f75c4b08c" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.322641 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1acb2f9f-f650-4f19-965e-48ba5a1ddac2-catalog-content\") pod \"redhat-operators-wj92z\" (UID: \"1acb2f9f-f650-4f19-965e-48ba5a1ddac2\") " pod="openshift-marketplace/redhat-operators-wj92z" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.322718 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1acb2f9f-f650-4f19-965e-48ba5a1ddac2-utilities\") pod \"redhat-operators-wj92z\" (UID: \"1acb2f9f-f650-4f19-965e-48ba5a1ddac2\") " pod="openshift-marketplace/redhat-operators-wj92z" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.322749 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz4l9\" (UniqueName: 
\"kubernetes.io/projected/1acb2f9f-f650-4f19-965e-48ba5a1ddac2-kube-api-access-gz4l9\") pod \"redhat-operators-wj92z\" (UID: \"1acb2f9f-f650-4f19-965e-48ba5a1ddac2\") " pod="openshift-marketplace/redhat-operators-wj92z" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.323341 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1acb2f9f-f650-4f19-965e-48ba5a1ddac2-catalog-content\") pod \"redhat-operators-wj92z\" (UID: \"1acb2f9f-f650-4f19-965e-48ba5a1ddac2\") " pod="openshift-marketplace/redhat-operators-wj92z" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.323751 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" podStartSLOduration=185.323738182 podStartE2EDuration="3m5.323738182s" podCreationTimestamp="2026-02-28 13:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:27.3212173 +0000 UTC m=+241.563537957" watchObservedRunningTime="2026-02-28 13:20:27.323738182 +0000 UTC m=+241.566058839" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.323918 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1acb2f9f-f650-4f19-965e-48ba5a1ddac2-utilities\") pod \"redhat-operators-wj92z\" (UID: \"1acb2f9f-f650-4f19-965e-48ba5a1ddac2\") " pod="openshift-marketplace/redhat-operators-wj92z" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.329475 4897 generic.go:334] "Generic (PLEG): container finished" podID="34293634-5315-4dac-94b9-258b99c8a9c1" containerID="06f84c36443935ec3e67baf28833ecd925caf77ef75595dcde049cc0a869d4c1" exitCode=0 Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.329518 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4slc" 
event={"ID":"34293634-5315-4dac-94b9-258b99c8a9c1","Type":"ContainerDied","Data":"06f84c36443935ec3e67baf28833ecd925caf77ef75595dcde049cc0a869d4c1"} Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.329539 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4slc" event={"ID":"34293634-5315-4dac-94b9-258b99c8a9c1","Type":"ContainerStarted","Data":"b121a8136e77ff642b674473b8a4601a6b70cb3d60c62cde801c44823a9e16b9"} Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.334367 4897 generic.go:334] "Generic (PLEG): container finished" podID="02ffda6c-19d5-465a-8db7-d094fb1590b8" containerID="abdb484db7e33ba4860907e3c29d6faac882ce62e597f97339c310e001b9b1dd" exitCode=0 Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.334617 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"02ffda6c-19d5-465a-8db7-d094fb1590b8","Type":"ContainerDied","Data":"abdb484db7e33ba4860907e3c29d6faac882ce62e597f97339c310e001b9b1dd"} Feb 28 13:20:27 crc kubenswrapper[4897]: E0228 13:20:27.335638 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-whbtd" podUID="fa5ec60f-f348-43b8-8ef2-9caafd08cb0d" Feb 28 13:20:27 crc kubenswrapper[4897]: E0228 13:20:27.335893 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-q9d2n" podUID="657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.352621 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz4l9\" 
(UniqueName: \"kubernetes.io/projected/1acb2f9f-f650-4f19-965e-48ba5a1ddac2-kube-api-access-gz4l9\") pod \"redhat-operators-wj92z\" (UID: \"1acb2f9f-f650-4f19-965e-48ba5a1ddac2\") " pod="openshift-marketplace/redhat-operators-wj92z" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.367957 4897 scope.go:117] "RemoveContainer" containerID="0832ab0238e1850b169083be778186663f3906313ee5b91d98e4d96f75c4b08c" Feb 28 13:20:27 crc kubenswrapper[4897]: E0228 13:20:27.368655 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0832ab0238e1850b169083be778186663f3906313ee5b91d98e4d96f75c4b08c\": container with ID starting with 0832ab0238e1850b169083be778186663f3906313ee5b91d98e4d96f75c4b08c not found: ID does not exist" containerID="0832ab0238e1850b169083be778186663f3906313ee5b91d98e4d96f75c4b08c" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.368693 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0832ab0238e1850b169083be778186663f3906313ee5b91d98e4d96f75c4b08c"} err="failed to get container status \"0832ab0238e1850b169083be778186663f3906313ee5b91d98e4d96f75c4b08c\": rpc error: code = NotFound desc = could not find container \"0832ab0238e1850b169083be778186663f3906313ee5b91d98e4d96f75c4b08c\": container with ID starting with 0832ab0238e1850b169083be778186663f3906313ee5b91d98e4d96f75c4b08c not found: ID does not exist" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.401896 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jwnlh"] Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.404270 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jwnlh"] Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.424603 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24aab8b8-211b-4f6d-8cba-81fd27a8f890-serving-cert\") pod \"controller-manager-549fb47d5d-22c8f\" (UID: \"24aab8b8-211b-4f6d-8cba-81fd27a8f890\") " pod="openshift-controller-manager/controller-manager-549fb47d5d-22c8f" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.424669 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09c1992d-7a3a-4d66-9a74-d20f4e6b2136-serving-cert\") pod \"route-controller-manager-5969cf9b49-8bk2x\" (UID: \"09c1992d-7a3a-4d66-9a74-d20f4e6b2136\") " pod="openshift-route-controller-manager/route-controller-manager-5969cf9b49-8bk2x" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.424693 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09c1992d-7a3a-4d66-9a74-d20f4e6b2136-config\") pod \"route-controller-manager-5969cf9b49-8bk2x\" (UID: \"09c1992d-7a3a-4d66-9a74-d20f4e6b2136\") " pod="openshift-route-controller-manager/route-controller-manager-5969cf9b49-8bk2x" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.424717 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcpzr\" (UniqueName: \"kubernetes.io/projected/09c1992d-7a3a-4d66-9a74-d20f4e6b2136-kube-api-access-zcpzr\") pod \"route-controller-manager-5969cf9b49-8bk2x\" (UID: \"09c1992d-7a3a-4d66-9a74-d20f4e6b2136\") " pod="openshift-route-controller-manager/route-controller-manager-5969cf9b49-8bk2x" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.424750 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09c1992d-7a3a-4d66-9a74-d20f4e6b2136-client-ca\") pod 
\"route-controller-manager-5969cf9b49-8bk2x\" (UID: \"09c1992d-7a3a-4d66-9a74-d20f4e6b2136\") " pod="openshift-route-controller-manager/route-controller-manager-5969cf9b49-8bk2x" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.424826 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzxfp\" (UniqueName: \"kubernetes.io/projected/24aab8b8-211b-4f6d-8cba-81fd27a8f890-kube-api-access-wzxfp\") pod \"controller-manager-549fb47d5d-22c8f\" (UID: \"24aab8b8-211b-4f6d-8cba-81fd27a8f890\") " pod="openshift-controller-manager/controller-manager-549fb47d5d-22c8f" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.424846 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24aab8b8-211b-4f6d-8cba-81fd27a8f890-config\") pod \"controller-manager-549fb47d5d-22c8f\" (UID: \"24aab8b8-211b-4f6d-8cba-81fd27a8f890\") " pod="openshift-controller-manager/controller-manager-549fb47d5d-22c8f" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.424998 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24aab8b8-211b-4f6d-8cba-81fd27a8f890-client-ca\") pod \"controller-manager-549fb47d5d-22c8f\" (UID: \"24aab8b8-211b-4f6d-8cba-81fd27a8f890\") " pod="openshift-controller-manager/controller-manager-549fb47d5d-22c8f" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.425135 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24aab8b8-211b-4f6d-8cba-81fd27a8f890-proxy-ca-bundles\") pod \"controller-manager-549fb47d5d-22c8f\" (UID: \"24aab8b8-211b-4f6d-8cba-81fd27a8f890\") " pod="openshift-controller-manager/controller-manager-549fb47d5d-22c8f" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.515165 
4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wj92z" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.526388 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09c1992d-7a3a-4d66-9a74-d20f4e6b2136-client-ca\") pod \"route-controller-manager-5969cf9b49-8bk2x\" (UID: \"09c1992d-7a3a-4d66-9a74-d20f4e6b2136\") " pod="openshift-route-controller-manager/route-controller-manager-5969cf9b49-8bk2x" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.526430 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzxfp\" (UniqueName: \"kubernetes.io/projected/24aab8b8-211b-4f6d-8cba-81fd27a8f890-kube-api-access-wzxfp\") pod \"controller-manager-549fb47d5d-22c8f\" (UID: \"24aab8b8-211b-4f6d-8cba-81fd27a8f890\") " pod="openshift-controller-manager/controller-manager-549fb47d5d-22c8f" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.526450 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24aab8b8-211b-4f6d-8cba-81fd27a8f890-config\") pod \"controller-manager-549fb47d5d-22c8f\" (UID: \"24aab8b8-211b-4f6d-8cba-81fd27a8f890\") " pod="openshift-controller-manager/controller-manager-549fb47d5d-22c8f" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.526502 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24aab8b8-211b-4f6d-8cba-81fd27a8f890-client-ca\") pod \"controller-manager-549fb47d5d-22c8f\" (UID: \"24aab8b8-211b-4f6d-8cba-81fd27a8f890\") " pod="openshift-controller-manager/controller-manager-549fb47d5d-22c8f" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.526520 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/24aab8b8-211b-4f6d-8cba-81fd27a8f890-proxy-ca-bundles\") pod \"controller-manager-549fb47d5d-22c8f\" (UID: \"24aab8b8-211b-4f6d-8cba-81fd27a8f890\") " pod="openshift-controller-manager/controller-manager-549fb47d5d-22c8f" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.526541 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24aab8b8-211b-4f6d-8cba-81fd27a8f890-serving-cert\") pod \"controller-manager-549fb47d5d-22c8f\" (UID: \"24aab8b8-211b-4f6d-8cba-81fd27a8f890\") " pod="openshift-controller-manager/controller-manager-549fb47d5d-22c8f" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.526563 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09c1992d-7a3a-4d66-9a74-d20f4e6b2136-serving-cert\") pod \"route-controller-manager-5969cf9b49-8bk2x\" (UID: \"09c1992d-7a3a-4d66-9a74-d20f4e6b2136\") " pod="openshift-route-controller-manager/route-controller-manager-5969cf9b49-8bk2x" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.526584 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09c1992d-7a3a-4d66-9a74-d20f4e6b2136-config\") pod \"route-controller-manager-5969cf9b49-8bk2x\" (UID: \"09c1992d-7a3a-4d66-9a74-d20f4e6b2136\") " pod="openshift-route-controller-manager/route-controller-manager-5969cf9b49-8bk2x" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.526608 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcpzr\" (UniqueName: \"kubernetes.io/projected/09c1992d-7a3a-4d66-9a74-d20f4e6b2136-kube-api-access-zcpzr\") pod \"route-controller-manager-5969cf9b49-8bk2x\" (UID: \"09c1992d-7a3a-4d66-9a74-d20f4e6b2136\") " pod="openshift-route-controller-manager/route-controller-manager-5969cf9b49-8bk2x" Feb 28 13:20:27 crc 
kubenswrapper[4897]: I0228 13:20:27.527617 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24aab8b8-211b-4f6d-8cba-81fd27a8f890-client-ca\") pod \"controller-manager-549fb47d5d-22c8f\" (UID: \"24aab8b8-211b-4f6d-8cba-81fd27a8f890\") " pod="openshift-controller-manager/controller-manager-549fb47d5d-22c8f" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.528522 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24aab8b8-211b-4f6d-8cba-81fd27a8f890-proxy-ca-bundles\") pod \"controller-manager-549fb47d5d-22c8f\" (UID: \"24aab8b8-211b-4f6d-8cba-81fd27a8f890\") " pod="openshift-controller-manager/controller-manager-549fb47d5d-22c8f" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.528964 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24aab8b8-211b-4f6d-8cba-81fd27a8f890-config\") pod \"controller-manager-549fb47d5d-22c8f\" (UID: \"24aab8b8-211b-4f6d-8cba-81fd27a8f890\") " pod="openshift-controller-manager/controller-manager-549fb47d5d-22c8f" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.530380 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09c1992d-7a3a-4d66-9a74-d20f4e6b2136-client-ca\") pod \"route-controller-manager-5969cf9b49-8bk2x\" (UID: \"09c1992d-7a3a-4d66-9a74-d20f4e6b2136\") " pod="openshift-route-controller-manager/route-controller-manager-5969cf9b49-8bk2x" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.530920 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09c1992d-7a3a-4d66-9a74-d20f4e6b2136-config\") pod \"route-controller-manager-5969cf9b49-8bk2x\" (UID: \"09c1992d-7a3a-4d66-9a74-d20f4e6b2136\") " 
pod="openshift-route-controller-manager/route-controller-manager-5969cf9b49-8bk2x" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.532206 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09c1992d-7a3a-4d66-9a74-d20f4e6b2136-serving-cert\") pod \"route-controller-manager-5969cf9b49-8bk2x\" (UID: \"09c1992d-7a3a-4d66-9a74-d20f4e6b2136\") " pod="openshift-route-controller-manager/route-controller-manager-5969cf9b49-8bk2x" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.543091 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcpzr\" (UniqueName: \"kubernetes.io/projected/09c1992d-7a3a-4d66-9a74-d20f4e6b2136-kube-api-access-zcpzr\") pod \"route-controller-manager-5969cf9b49-8bk2x\" (UID: \"09c1992d-7a3a-4d66-9a74-d20f4e6b2136\") " pod="openshift-route-controller-manager/route-controller-manager-5969cf9b49-8bk2x" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.543388 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24aab8b8-211b-4f6d-8cba-81fd27a8f890-serving-cert\") pod \"controller-manager-549fb47d5d-22c8f\" (UID: \"24aab8b8-211b-4f6d-8cba-81fd27a8f890\") " pod="openshift-controller-manager/controller-manager-549fb47d5d-22c8f" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.547125 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzxfp\" (UniqueName: \"kubernetes.io/projected/24aab8b8-211b-4f6d-8cba-81fd27a8f890-kube-api-access-wzxfp\") pod \"controller-manager-549fb47d5d-22c8f\" (UID: \"24aab8b8-211b-4f6d-8cba-81fd27a8f890\") " pod="openshift-controller-manager/controller-manager-549fb47d5d-22c8f" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.572482 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qxsqd"] Feb 28 13:20:27 crc kubenswrapper[4897]: 
I0228 13:20:27.573538 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qxsqd" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.579489 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-549fb47d5d-22c8f" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.581333 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qxsqd"] Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.596155 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5969cf9b49-8bk2x" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.729361 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0865f08-bed5-4fbb-ab37-582862fb0616-utilities\") pod \"redhat-operators-qxsqd\" (UID: \"a0865f08-bed5-4fbb-ab37-582862fb0616\") " pod="openshift-marketplace/redhat-operators-qxsqd" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.729438 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pt6h\" (UniqueName: \"kubernetes.io/projected/a0865f08-bed5-4fbb-ab37-582862fb0616-kube-api-access-4pt6h\") pod \"redhat-operators-qxsqd\" (UID: \"a0865f08-bed5-4fbb-ab37-582862fb0616\") " pod="openshift-marketplace/redhat-operators-qxsqd" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.729481 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0865f08-bed5-4fbb-ab37-582862fb0616-catalog-content\") pod \"redhat-operators-qxsqd\" (UID: \"a0865f08-bed5-4fbb-ab37-582862fb0616\") " pod="openshift-marketplace/redhat-operators-qxsqd" Feb 28 13:20:27 
crc kubenswrapper[4897]: I0228 13:20:27.790134 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wj92z"] Feb 28 13:20:27 crc kubenswrapper[4897]: W0228 13:20:27.805718 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1acb2f9f_f650_4f19_965e_48ba5a1ddac2.slice/crio-1b47d95db96a46929c8fdf1921bccd0d9804289caa51c990f3ef54460a7a7bbe WatchSource:0}: Error finding container 1b47d95db96a46929c8fdf1921bccd0d9804289caa51c990f3ef54460a7a7bbe: Status 404 returned error can't find the container with id 1b47d95db96a46929c8fdf1921bccd0d9804289caa51c990f3ef54460a7a7bbe Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.831195 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0865f08-bed5-4fbb-ab37-582862fb0616-utilities\") pod \"redhat-operators-qxsqd\" (UID: \"a0865f08-bed5-4fbb-ab37-582862fb0616\") " pod="openshift-marketplace/redhat-operators-qxsqd" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.831549 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pt6h\" (UniqueName: \"kubernetes.io/projected/a0865f08-bed5-4fbb-ab37-582862fb0616-kube-api-access-4pt6h\") pod \"redhat-operators-qxsqd\" (UID: \"a0865f08-bed5-4fbb-ab37-582862fb0616\") " pod="openshift-marketplace/redhat-operators-qxsqd" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.831580 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0865f08-bed5-4fbb-ab37-582862fb0616-catalog-content\") pod \"redhat-operators-qxsqd\" (UID: \"a0865f08-bed5-4fbb-ab37-582862fb0616\") " pod="openshift-marketplace/redhat-operators-qxsqd" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.831994 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0865f08-bed5-4fbb-ab37-582862fb0616-catalog-content\") pod \"redhat-operators-qxsqd\" (UID: \"a0865f08-bed5-4fbb-ab37-582862fb0616\") " pod="openshift-marketplace/redhat-operators-qxsqd" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.832423 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0865f08-bed5-4fbb-ab37-582862fb0616-utilities\") pod \"redhat-operators-qxsqd\" (UID: \"a0865f08-bed5-4fbb-ab37-582862fb0616\") " pod="openshift-marketplace/redhat-operators-qxsqd" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.838496 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-549fb47d5d-22c8f"] Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.852634 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pt6h\" (UniqueName: \"kubernetes.io/projected/a0865f08-bed5-4fbb-ab37-582862fb0616-kube-api-access-4pt6h\") pod \"redhat-operators-qxsqd\" (UID: \"a0865f08-bed5-4fbb-ab37-582862fb0616\") " pod="openshift-marketplace/redhat-operators-qxsqd" Feb 28 13:20:27 crc kubenswrapper[4897]: W0228 13:20:27.864165 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24aab8b8_211b_4f6d_8cba_81fd27a8f890.slice/crio-58c33ec559049af021665c6c9d36bdc89623f94ab8e360c47978a806979ca280 WatchSource:0}: Error finding container 58c33ec559049af021665c6c9d36bdc89623f94ab8e360c47978a806979ca280: Status 404 returned error can't find the container with id 58c33ec559049af021665c6c9d36bdc89623f94ab8e360c47978a806979ca280 Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.885386 4897 patch_prober.go:28] interesting pod/router-default-5444994796-5fwp4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 13:20:27 crc kubenswrapper[4897]: [-]has-synced failed: reason withheld Feb 28 13:20:27 crc kubenswrapper[4897]: [+]process-running ok Feb 28 13:20:27 crc kubenswrapper[4897]: healthz check failed Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.885467 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5fwp4" podUID="342631a0-9c4d-4e4f-9743-4d13ea740a55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 13:20:27 crc kubenswrapper[4897]: I0228 13:20:27.932752 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qxsqd" Feb 28 13:20:28 crc kubenswrapper[4897]: I0228 13:20:28.057811 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5969cf9b49-8bk2x"] Feb 28 13:20:28 crc kubenswrapper[4897]: W0228 13:20:28.090645 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09c1992d_7a3a_4d66_9a74_d20f4e6b2136.slice/crio-da9c1a417d789ef69ef71cf3084aa93479bde413a52389bf2fb6cf6dd174bd14 WatchSource:0}: Error finding container da9c1a417d789ef69ef71cf3084aa93479bde413a52389bf2fb6cf6dd174bd14: Status 404 returned error can't find the container with id da9c1a417d789ef69ef71cf3084aa93479bde413a52389bf2fb6cf6dd174bd14 Feb 28 13:20:28 crc kubenswrapper[4897]: I0228 13:20:28.185868 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qxsqd"] Feb 28 13:20:28 crc kubenswrapper[4897]: W0228 13:20:28.215436 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0865f08_bed5_4fbb_ab37_582862fb0616.slice/crio-38eea4249cb1b95915ee141b34ba9ab3088117c538a8d32d694fb368666550ae 
WatchSource:0}: Error finding container 38eea4249cb1b95915ee141b34ba9ab3088117c538a8d32d694fb368666550ae: Status 404 returned error can't find the container with id 38eea4249cb1b95915ee141b34ba9ab3088117c538a8d32d694fb368666550ae Feb 28 13:20:28 crc kubenswrapper[4897]: I0228 13:20:28.345849 4897 generic.go:334] "Generic (PLEG): container finished" podID="1acb2f9f-f650-4f19-965e-48ba5a1ddac2" containerID="62ac1d9deefbf9a5c18eb6d04406472f51295d31ddd9ded0535797c83ca081f6" exitCode=0 Feb 28 13:20:28 crc kubenswrapper[4897]: I0228 13:20:28.345905 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wj92z" event={"ID":"1acb2f9f-f650-4f19-965e-48ba5a1ddac2","Type":"ContainerDied","Data":"62ac1d9deefbf9a5c18eb6d04406472f51295d31ddd9ded0535797c83ca081f6"} Feb 28 13:20:28 crc kubenswrapper[4897]: I0228 13:20:28.345930 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wj92z" event={"ID":"1acb2f9f-f650-4f19-965e-48ba5a1ddac2","Type":"ContainerStarted","Data":"1b47d95db96a46929c8fdf1921bccd0d9804289caa51c990f3ef54460a7a7bbe"} Feb 28 13:20:28 crc kubenswrapper[4897]: I0228 13:20:28.353355 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5969cf9b49-8bk2x" event={"ID":"09c1992d-7a3a-4d66-9a74-d20f4e6b2136","Type":"ContainerStarted","Data":"da9c1a417d789ef69ef71cf3084aa93479bde413a52389bf2fb6cf6dd174bd14"} Feb 28 13:20:28 crc kubenswrapper[4897]: I0228 13:20:28.358429 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-549fb47d5d-22c8f" event={"ID":"24aab8b8-211b-4f6d-8cba-81fd27a8f890","Type":"ContainerStarted","Data":"fa52e6ac4bfc09e1d3be6b19c754026afcd479445a240c9f4eedb5bb2eef55f7"} Feb 28 13:20:28 crc kubenswrapper[4897]: I0228 13:20:28.358472 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-549fb47d5d-22c8f" event={"ID":"24aab8b8-211b-4f6d-8cba-81fd27a8f890","Type":"ContainerStarted","Data":"58c33ec559049af021665c6c9d36bdc89623f94ab8e360c47978a806979ca280"} Feb 28 13:20:28 crc kubenswrapper[4897]: I0228 13:20:28.359514 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-549fb47d5d-22c8f" Feb 28 13:20:28 crc kubenswrapper[4897]: I0228 13:20:28.370027 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qxsqd" event={"ID":"a0865f08-bed5-4fbb-ab37-582862fb0616","Type":"ContainerStarted","Data":"38eea4249cb1b95915ee141b34ba9ab3088117c538a8d32d694fb368666550ae"} Feb 28 13:20:28 crc kubenswrapper[4897]: I0228 13:20:28.373499 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-549fb47d5d-22c8f" Feb 28 13:20:28 crc kubenswrapper[4897]: I0228 13:20:28.415583 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-549fb47d5d-22c8f" podStartSLOduration=3.41556933 podStartE2EDuration="3.41556933s" podCreationTimestamp="2026-02-28 13:20:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:28.393098794 +0000 UTC m=+242.635419461" watchObservedRunningTime="2026-02-28 13:20:28.41556933 +0000 UTC m=+242.657889987" Feb 28 13:20:28 crc kubenswrapper[4897]: I0228 13:20:28.465716 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae9a4771-065d-4f75-8d15-0ea8525cbaf4" path="/var/lib/kubelet/pods/ae9a4771-065d-4f75-8d15-0ea8525cbaf4/volumes" Feb 28 13:20:28 crc kubenswrapper[4897]: I0228 13:20:28.883144 4897 patch_prober.go:28] interesting pod/router-default-5444994796-5fwp4 container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 13:20:28 crc kubenswrapper[4897]: [-]has-synced failed: reason withheld Feb 28 13:20:28 crc kubenswrapper[4897]: [+]process-running ok Feb 28 13:20:28 crc kubenswrapper[4897]: healthz check failed Feb 28 13:20:28 crc kubenswrapper[4897]: I0228 13:20:28.883203 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5fwp4" podUID="342631a0-9c4d-4e4f-9743-4d13ea740a55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 13:20:29 crc kubenswrapper[4897]: I0228 13:20:29.245157 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-t87x8" Feb 28 13:20:29 crc kubenswrapper[4897]: I0228 13:20:29.253080 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-t87x8" Feb 28 13:20:29 crc kubenswrapper[4897]: E0228 13:20:29.360832 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 28 13:20:29 crc kubenswrapper[4897]: E0228 13:20:29.360974 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 13:20:29 crc kubenswrapper[4897]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 28 13:20:29 crc kubenswrapper[4897]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n9xph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29538080-qcrrw_openshift-infra(a52c7385-4178-4038-93b0-5cd758958e80): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 28 13:20:29 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 13:20:29 crc kubenswrapper[4897]: E0228 13:20:29.362653 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29538080-qcrrw" podUID="a52c7385-4178-4038-93b0-5cd758958e80" Feb 28 13:20:29 crc kubenswrapper[4897]: I0228 13:20:29.409697 4897 generic.go:334] "Generic (PLEG): container finished" podID="b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1" containerID="4b6e793d221218556bd5e1f277096807ef26420eb39f14ae322206c1413b84c5" exitCode=0 Feb 28 13:20:29 
crc kubenswrapper[4897]: I0228 13:20:29.409771 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29538075-ts824" event={"ID":"b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1","Type":"ContainerDied","Data":"4b6e793d221218556bd5e1f277096807ef26420eb39f14ae322206c1413b84c5"} Feb 28 13:20:29 crc kubenswrapper[4897]: I0228 13:20:29.414995 4897 generic.go:334] "Generic (PLEG): container finished" podID="a0865f08-bed5-4fbb-ab37-582862fb0616" containerID="55729215e7744bd24ed0f0fb9f35e8a428629eca1ab6ce299a83e1f8a3b60d67" exitCode=0 Feb 28 13:20:29 crc kubenswrapper[4897]: I0228 13:20:29.415095 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qxsqd" event={"ID":"a0865f08-bed5-4fbb-ab37-582862fb0616","Type":"ContainerDied","Data":"55729215e7744bd24ed0f0fb9f35e8a428629eca1ab6ce299a83e1f8a3b60d67"} Feb 28 13:20:29 crc kubenswrapper[4897]: I0228 13:20:29.424930 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5969cf9b49-8bk2x" event={"ID":"09c1992d-7a3a-4d66-9a74-d20f4e6b2136","Type":"ContainerStarted","Data":"b74bf0b158da5a51a5767f8fcc0b0db8362255a2e168e48cc585a06853a72cb4"} Feb 28 13:20:29 crc kubenswrapper[4897]: I0228 13:20:29.425011 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5969cf9b49-8bk2x" Feb 28 13:20:29 crc kubenswrapper[4897]: I0228 13:20:29.457456 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5969cf9b49-8bk2x" Feb 28 13:20:29 crc kubenswrapper[4897]: I0228 13:20:29.474376 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5969cf9b49-8bk2x" podStartSLOduration=4.474351466 podStartE2EDuration="4.474351466s" 
podCreationTimestamp="2026-02-28 13:20:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:20:29.464503873 +0000 UTC m=+243.706824530" watchObservedRunningTime="2026-02-28 13:20:29.474351466 +0000 UTC m=+243.716672143" Feb 28 13:20:29 crc kubenswrapper[4897]: I0228 13:20:29.710887 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-6l67l" Feb 28 13:20:29 crc kubenswrapper[4897]: I0228 13:20:29.880623 4897 patch_prober.go:28] interesting pod/router-default-5444994796-5fwp4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 13:20:29 crc kubenswrapper[4897]: [-]has-synced failed: reason withheld Feb 28 13:20:29 crc kubenswrapper[4897]: [+]process-running ok Feb 28 13:20:29 crc kubenswrapper[4897]: healthz check failed Feb 28 13:20:29 crc kubenswrapper[4897]: I0228 13:20:29.880697 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5fwp4" podUID="342631a0-9c4d-4e4f-9743-4d13ea740a55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 13:20:30 crc kubenswrapper[4897]: I0228 13:20:30.633843 4897 ???:1] "http: TLS handshake error from 192.168.126.11:41190: no serving certificate available for the kubelet" Feb 28 13:20:30 crc kubenswrapper[4897]: I0228 13:20:30.880673 4897 patch_prober.go:28] interesting pod/router-default-5444994796-5fwp4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 13:20:30 crc kubenswrapper[4897]: [-]has-synced failed: reason withheld Feb 28 13:20:30 crc kubenswrapper[4897]: [+]process-running ok Feb 28 13:20:30 crc kubenswrapper[4897]: 
healthz check failed Feb 28 13:20:30 crc kubenswrapper[4897]: I0228 13:20:30.880735 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5fwp4" podUID="342631a0-9c4d-4e4f-9743-4d13ea740a55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 13:20:31 crc kubenswrapper[4897]: I0228 13:20:31.816884 4897 ???:1] "http: TLS handshake error from 192.168.126.11:52164: no serving certificate available for the kubelet" Feb 28 13:20:31 crc kubenswrapper[4897]: I0228 13:20:31.879968 4897 patch_prober.go:28] interesting pod/router-default-5444994796-5fwp4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 13:20:31 crc kubenswrapper[4897]: [-]has-synced failed: reason withheld Feb 28 13:20:31 crc kubenswrapper[4897]: [+]process-running ok Feb 28 13:20:31 crc kubenswrapper[4897]: healthz check failed Feb 28 13:20:31 crc kubenswrapper[4897]: I0228 13:20:31.880034 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5fwp4" podUID="342631a0-9c4d-4e4f-9743-4d13ea740a55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 13:20:32 crc kubenswrapper[4897]: I0228 13:20:32.880057 4897 patch_prober.go:28] interesting pod/router-default-5444994796-5fwp4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 13:20:32 crc kubenswrapper[4897]: [-]has-synced failed: reason withheld Feb 28 13:20:32 crc kubenswrapper[4897]: [+]process-running ok Feb 28 13:20:32 crc kubenswrapper[4897]: healthz check failed Feb 28 13:20:32 crc kubenswrapper[4897]: I0228 13:20:32.880106 4897 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-5fwp4" podUID="342631a0-9c4d-4e4f-9743-4d13ea740a55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 13:20:33 crc kubenswrapper[4897]: I0228 13:20:33.371056 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 13:20:33 crc kubenswrapper[4897]: I0228 13:20:33.371345 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 13:20:33 crc kubenswrapper[4897]: I0228 13:20:33.436353 4897 patch_prober.go:28] interesting pod/console-f9d7485db-rd9tl container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.26:8443/health\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Feb 28 13:20:33 crc kubenswrapper[4897]: I0228 13:20:33.436408 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-rd9tl" podUID="3423cf07-c57b-41f3-82da-f497649699db" containerName="console" probeResult="failure" output="Get \"https://10.217.0.26:8443/health\": dial tcp 10.217.0.26:8443: connect: connection refused" Feb 28 13:20:33 crc kubenswrapper[4897]: I0228 13:20:33.582288 4897 patch_prober.go:28] interesting pod/downloads-7954f5f757-fv2rz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 28 13:20:33 crc kubenswrapper[4897]: 
I0228 13:20:33.582354 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fv2rz" podUID="4f2510cb-e89f-49a0-b5cd-aca1a5c51178" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 28 13:20:33 crc kubenswrapper[4897]: I0228 13:20:33.582369 4897 patch_prober.go:28] interesting pod/downloads-7954f5f757-fv2rz container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 28 13:20:33 crc kubenswrapper[4897]: I0228 13:20:33.582433 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fv2rz" podUID="4f2510cb-e89f-49a0-b5cd-aca1a5c51178" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 28 13:20:33 crc kubenswrapper[4897]: I0228 13:20:33.762469 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 28 13:20:33 crc kubenswrapper[4897]: I0228 13:20:33.763061 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 28 13:20:33 crc kubenswrapper[4897]: I0228 13:20:33.767880 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29538075-ts824" Feb 28 13:20:33 crc kubenswrapper[4897]: I0228 13:20:33.881354 4897 patch_prober.go:28] interesting pod/router-default-5444994796-5fwp4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 13:20:33 crc kubenswrapper[4897]: [-]has-synced failed: reason withheld Feb 28 13:20:33 crc kubenswrapper[4897]: [+]process-running ok Feb 28 13:20:33 crc kubenswrapper[4897]: healthz check failed Feb 28 13:20:33 crc kubenswrapper[4897]: I0228 13:20:33.881402 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5fwp4" podUID="342631a0-9c4d-4e4f-9743-4d13ea740a55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 13:20:33 crc kubenswrapper[4897]: I0228 13:20:33.963038 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1-config-volume\") pod \"b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1\" (UID: \"b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1\") " Feb 28 13:20:33 crc kubenswrapper[4897]: I0228 13:20:33.963099 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/02ffda6c-19d5-465a-8db7-d094fb1590b8-kubelet-dir\") pod \"02ffda6c-19d5-465a-8db7-d094fb1590b8\" (UID: \"02ffda6c-19d5-465a-8db7-d094fb1590b8\") " Feb 28 13:20:33 crc kubenswrapper[4897]: I0228 13:20:33.963127 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/555fd921-4e06-4a2b-b800-744d83d5caf1-kube-api-access\") pod \"555fd921-4e06-4a2b-b800-744d83d5caf1\" (UID: \"555fd921-4e06-4a2b-b800-744d83d5caf1\") " Feb 28 
13:20:33 crc kubenswrapper[4897]: I0228 13:20:33.963196 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1-secret-volume\") pod \"b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1\" (UID: \"b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1\") " Feb 28 13:20:33 crc kubenswrapper[4897]: I0228 13:20:33.963217 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02ffda6c-19d5-465a-8db7-d094fb1590b8-kube-api-access\") pod \"02ffda6c-19d5-465a-8db7-d094fb1590b8\" (UID: \"02ffda6c-19d5-465a-8db7-d094fb1590b8\") " Feb 28 13:20:33 crc kubenswrapper[4897]: I0228 13:20:33.963235 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02ffda6c-19d5-465a-8db7-d094fb1590b8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "02ffda6c-19d5-465a-8db7-d094fb1590b8" (UID: "02ffda6c-19d5-465a-8db7-d094fb1590b8"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 13:20:33 crc kubenswrapper[4897]: I0228 13:20:33.963257 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5w28d\" (UniqueName: \"kubernetes.io/projected/b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1-kube-api-access-5w28d\") pod \"b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1\" (UID: \"b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1\") " Feb 28 13:20:33 crc kubenswrapper[4897]: I0228 13:20:33.963395 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/555fd921-4e06-4a2b-b800-744d83d5caf1-kubelet-dir\") pod \"555fd921-4e06-4a2b-b800-744d83d5caf1\" (UID: \"555fd921-4e06-4a2b-b800-744d83d5caf1\") " Feb 28 13:20:33 crc kubenswrapper[4897]: I0228 13:20:33.963732 4897 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/02ffda6c-19d5-465a-8db7-d094fb1590b8-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 28 13:20:33 crc kubenswrapper[4897]: I0228 13:20:33.963775 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/555fd921-4e06-4a2b-b800-744d83d5caf1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "555fd921-4e06-4a2b-b800-744d83d5caf1" (UID: "555fd921-4e06-4a2b-b800-744d83d5caf1"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 13:20:33 crc kubenswrapper[4897]: I0228 13:20:33.963837 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1-config-volume" (OuterVolumeSpecName: "config-volume") pod "b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1" (UID: "b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:20:33 crc kubenswrapper[4897]: I0228 13:20:33.969044 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1" (UID: "b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:20:33 crc kubenswrapper[4897]: I0228 13:20:33.969124 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1-kube-api-access-5w28d" (OuterVolumeSpecName: "kube-api-access-5w28d") pod "b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1" (UID: "b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1"). InnerVolumeSpecName "kube-api-access-5w28d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:20:33 crc kubenswrapper[4897]: I0228 13:20:33.969433 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02ffda6c-19d5-465a-8db7-d094fb1590b8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "02ffda6c-19d5-465a-8db7-d094fb1590b8" (UID: "02ffda6c-19d5-465a-8db7-d094fb1590b8"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:20:33 crc kubenswrapper[4897]: I0228 13:20:33.971952 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/555fd921-4e06-4a2b-b800-744d83d5caf1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "555fd921-4e06-4a2b-b800-744d83d5caf1" (UID: "555fd921-4e06-4a2b-b800-744d83d5caf1"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:20:34 crc kubenswrapper[4897]: I0228 13:20:34.064211 4897 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1-config-volume\") on node \"crc\" DevicePath \"\"" Feb 28 13:20:34 crc kubenswrapper[4897]: I0228 13:20:34.064248 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/555fd921-4e06-4a2b-b800-744d83d5caf1-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 28 13:20:34 crc kubenswrapper[4897]: I0228 13:20:34.064259 4897 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 28 13:20:34 crc kubenswrapper[4897]: I0228 13:20:34.064268 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02ffda6c-19d5-465a-8db7-d094fb1590b8-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 28 13:20:34 crc kubenswrapper[4897]: I0228 13:20:34.064277 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5w28d\" (UniqueName: \"kubernetes.io/projected/b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1-kube-api-access-5w28d\") on node \"crc\" DevicePath \"\"" Feb 28 13:20:34 crc kubenswrapper[4897]: I0228 13:20:34.064288 4897 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/555fd921-4e06-4a2b-b800-744d83d5caf1-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 28 13:20:34 crc kubenswrapper[4897]: I0228 13:20:34.455832 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 28 13:20:34 crc kubenswrapper[4897]: I0228 13:20:34.457080 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29538075-ts824" Feb 28 13:20:34 crc kubenswrapper[4897]: I0228 13:20:34.458516 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 28 13:20:34 crc kubenswrapper[4897]: I0228 13:20:34.469892 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"02ffda6c-19d5-465a-8db7-d094fb1590b8","Type":"ContainerDied","Data":"21b02eba369ff60862106cac5e653b95a141f48643d7041f29b024f04b1355b3"} Feb 28 13:20:34 crc kubenswrapper[4897]: I0228 13:20:34.469931 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21b02eba369ff60862106cac5e653b95a141f48643d7041f29b024f04b1355b3" Feb 28 13:20:34 crc kubenswrapper[4897]: I0228 13:20:34.469942 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29538075-ts824" event={"ID":"b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1","Type":"ContainerDied","Data":"a49cee6f63e1480b2ca684d6771170e899ad128127082bca41eafabb6c247d64"} Feb 28 13:20:34 crc kubenswrapper[4897]: I0228 13:20:34.469951 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a49cee6f63e1480b2ca684d6771170e899ad128127082bca41eafabb6c247d64" Feb 28 13:20:34 crc kubenswrapper[4897]: I0228 13:20:34.469960 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"555fd921-4e06-4a2b-b800-744d83d5caf1","Type":"ContainerDied","Data":"43fb8d2fd4e22e466a06d3316636bd75abccfdb78ea3da1dea19707c46f90b92"} Feb 28 13:20:34 crc kubenswrapper[4897]: I0228 13:20:34.469968 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43fb8d2fd4e22e466a06d3316636bd75abccfdb78ea3da1dea19707c46f90b92" Feb 28 13:20:34 crc kubenswrapper[4897]: I0228 
13:20:34.880105 4897 patch_prober.go:28] interesting pod/router-default-5444994796-5fwp4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 13:20:34 crc kubenswrapper[4897]: [-]has-synced failed: reason withheld Feb 28 13:20:34 crc kubenswrapper[4897]: [+]process-running ok Feb 28 13:20:34 crc kubenswrapper[4897]: healthz check failed Feb 28 13:20:34 crc kubenswrapper[4897]: I0228 13:20:34.880154 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5fwp4" podUID="342631a0-9c4d-4e4f-9743-4d13ea740a55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 13:20:35 crc kubenswrapper[4897]: I0228 13:20:35.880699 4897 patch_prober.go:28] interesting pod/router-default-5444994796-5fwp4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 13:20:35 crc kubenswrapper[4897]: [-]has-synced failed: reason withheld Feb 28 13:20:35 crc kubenswrapper[4897]: [+]process-running ok Feb 28 13:20:35 crc kubenswrapper[4897]: healthz check failed Feb 28 13:20:35 crc kubenswrapper[4897]: I0228 13:20:35.880798 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5fwp4" podUID="342631a0-9c4d-4e4f-9743-4d13ea740a55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 13:20:36 crc kubenswrapper[4897]: I0228 13:20:36.880003 4897 patch_prober.go:28] interesting pod/router-default-5444994796-5fwp4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 13:20:36 crc kubenswrapper[4897]: [-]has-synced failed: reason 
withheld Feb 28 13:20:36 crc kubenswrapper[4897]: [+]process-running ok Feb 28 13:20:36 crc kubenswrapper[4897]: healthz check failed Feb 28 13:20:36 crc kubenswrapper[4897]: I0228 13:20:36.880156 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5fwp4" podUID="342631a0-9c4d-4e4f-9743-4d13ea740a55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 13:20:37 crc kubenswrapper[4897]: I0228 13:20:37.881946 4897 patch_prober.go:28] interesting pod/router-default-5444994796-5fwp4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 13:20:37 crc kubenswrapper[4897]: [-]has-synced failed: reason withheld Feb 28 13:20:37 crc kubenswrapper[4897]: [+]process-running ok Feb 28 13:20:37 crc kubenswrapper[4897]: healthz check failed Feb 28 13:20:37 crc kubenswrapper[4897]: I0228 13:20:37.882075 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5fwp4" podUID="342631a0-9c4d-4e4f-9743-4d13ea740a55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 13:20:38 crc kubenswrapper[4897]: I0228 13:20:38.879905 4897 patch_prober.go:28] interesting pod/router-default-5444994796-5fwp4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 13:20:38 crc kubenswrapper[4897]: [-]has-synced failed: reason withheld Feb 28 13:20:38 crc kubenswrapper[4897]: [+]process-running ok Feb 28 13:20:38 crc kubenswrapper[4897]: healthz check failed Feb 28 13:20:38 crc kubenswrapper[4897]: I0228 13:20:38.880229 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5fwp4" 
podUID="342631a0-9c4d-4e4f-9743-4d13ea740a55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 13:20:39 crc kubenswrapper[4897]: E0228 13:20:39.456831 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538080-qcrrw" podUID="a52c7385-4178-4038-93b0-5cd758958e80" Feb 28 13:20:39 crc kubenswrapper[4897]: I0228 13:20:39.879998 4897 patch_prober.go:28] interesting pod/router-default-5444994796-5fwp4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 13:20:39 crc kubenswrapper[4897]: [-]has-synced failed: reason withheld Feb 28 13:20:39 crc kubenswrapper[4897]: [+]process-running ok Feb 28 13:20:39 crc kubenswrapper[4897]: healthz check failed Feb 28 13:20:39 crc kubenswrapper[4897]: I0228 13:20:39.880055 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5fwp4" podUID="342631a0-9c4d-4e4f-9743-4d13ea740a55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 13:20:40 crc kubenswrapper[4897]: I0228 13:20:40.880717 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-5fwp4" Feb 28 13:20:40 crc kubenswrapper[4897]: I0228 13:20:40.882858 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-5fwp4" Feb 28 13:20:41 crc kubenswrapper[4897]: I0228 13:20:41.218013 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 13:20:43 crc kubenswrapper[4897]: I0228 13:20:43.436056 4897 
patch_prober.go:28] interesting pod/console-f9d7485db-rd9tl container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.26:8443/health\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Feb 28 13:20:43 crc kubenswrapper[4897]: I0228 13:20:43.436580 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-rd9tl" podUID="3423cf07-c57b-41f3-82da-f497649699db" containerName="console" probeResult="failure" output="Get \"https://10.217.0.26:8443/health\": dial tcp 10.217.0.26:8443: connect: connection refused" Feb 28 13:20:43 crc kubenswrapper[4897]: I0228 13:20:43.595825 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-fv2rz" Feb 28 13:20:44 crc kubenswrapper[4897]: I0228 13:20:44.482843 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-549fb47d5d-22c8f"] Feb 28 13:20:44 crc kubenswrapper[4897]: I0228 13:20:44.486177 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-549fb47d5d-22c8f" podUID="24aab8b8-211b-4f6d-8cba-81fd27a8f890" containerName="controller-manager" containerID="cri-o://fa52e6ac4bfc09e1d3be6b19c754026afcd479445a240c9f4eedb5bb2eef55f7" gracePeriod=30 Feb 28 13:20:44 crc kubenswrapper[4897]: I0228 13:20:44.491780 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5969cf9b49-8bk2x"] Feb 28 13:20:44 crc kubenswrapper[4897]: I0228 13:20:44.492015 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5969cf9b49-8bk2x" podUID="09c1992d-7a3a-4d66-9a74-d20f4e6b2136" containerName="route-controller-manager" containerID="cri-o://b74bf0b158da5a51a5767f8fcc0b0db8362255a2e168e48cc585a06853a72cb4" 
gracePeriod=30 Feb 28 13:20:44 crc kubenswrapper[4897]: E0228 13:20:44.819459 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=c649bff3a95aba662bcce2caae28c3708550f8f13ed49f0891408168e71420e7/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 28 13:20:44 crc kubenswrapper[4897]: E0228 13:20:44.819600 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bcgzv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY
:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-q9d2n_openshift-marketplace(657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=c649bff3a95aba662bcce2caae28c3708550f8f13ed49f0891408168e71420e7/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 13:20:44 crc kubenswrapper[4897]: E0228 13:20:44.820900 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=c649bff3a95aba662bcce2caae28c3708550f8f13ed49f0891408168e71420e7/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/certified-operators-q9d2n" podUID="657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5" Feb 28 13:20:44 crc kubenswrapper[4897]: E0228 13:20:44.927269 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=c649bff3a95aba662bcce2caae28c3708550f8f13ed49f0891408168e71420e7/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 28 13:20:44 crc kubenswrapper[4897]: E0228 13:20:44.941491 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs 
--catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9vmpb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-whbtd_openshift-marketplace(fa5ec60f-f348-43b8-8ef2-9caafd08cb0d): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=c649bff3a95aba662bcce2caae28c3708550f8f13ed49f0891408168e71420e7/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 13:20:44 crc kubenswrapper[4897]: E0228 13:20:44.943690 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: 
\"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=c649bff3a95aba662bcce2caae28c3708550f8f13ed49f0891408168e71420e7/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/certified-operators-whbtd" podUID="fa5ec60f-f348-43b8-8ef2-9caafd08cb0d" Feb 28 13:20:45 crc kubenswrapper[4897]: I0228 13:20:45.533998 4897 generic.go:334] "Generic (PLEG): container finished" podID="09c1992d-7a3a-4d66-9a74-d20f4e6b2136" containerID="b74bf0b158da5a51a5767f8fcc0b0db8362255a2e168e48cc585a06853a72cb4" exitCode=0 Feb 28 13:20:45 crc kubenswrapper[4897]: I0228 13:20:45.534114 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5969cf9b49-8bk2x" event={"ID":"09c1992d-7a3a-4d66-9a74-d20f4e6b2136","Type":"ContainerDied","Data":"b74bf0b158da5a51a5767f8fcc0b0db8362255a2e168e48cc585a06853a72cb4"} Feb 28 13:20:45 crc kubenswrapper[4897]: I0228 13:20:45.537386 4897 generic.go:334] "Generic (PLEG): container finished" podID="24aab8b8-211b-4f6d-8cba-81fd27a8f890" containerID="fa52e6ac4bfc09e1d3be6b19c754026afcd479445a240c9f4eedb5bb2eef55f7" exitCode=0 Feb 28 13:20:45 crc kubenswrapper[4897]: I0228 13:20:45.537438 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-549fb47d5d-22c8f" event={"ID":"24aab8b8-211b-4f6d-8cba-81fd27a8f890","Type":"ContainerDied","Data":"fa52e6ac4bfc09e1d3be6b19c754026afcd479445a240c9f4eedb5bb2eef55f7"} Feb 28 13:20:46 crc kubenswrapper[4897]: E0228 13:20:46.123445 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 28 13:20:46 crc kubenswrapper[4897]: E0228 13:20:46.123588 4897 kuberuntime_manager.go:1274] "Unhandled Error" 
err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-94tpx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-bfpj4_openshift-marketplace(c752ba9a-f6f8-4530-91a9-c06ff609e9d8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 28 13:20:46 crc kubenswrapper[4897]: E0228 13:20:46.124850 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-bfpj4" podUID="c752ba9a-f6f8-4530-91a9-c06ff609e9d8" Feb 28 13:20:46 crc kubenswrapper[4897]: I0228 13:20:46.417630 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:20:47 crc kubenswrapper[4897]: E0228 13:20:47.577752 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-bfpj4" podUID="c752ba9a-f6f8-4530-91a9-c06ff609e9d8" Feb 28 13:20:47 crc kubenswrapper[4897]: I0228 13:20:47.580128 4897 patch_prober.go:28] interesting pod/controller-manager-549fb47d5d-22c8f container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body= Feb 28 13:20:47 crc kubenswrapper[4897]: I0228 13:20:47.580197 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-549fb47d5d-22c8f" podUID="24aab8b8-211b-4f6d-8cba-81fd27a8f890" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" Feb 28 13:20:47 crc kubenswrapper[4897]: I0228 13:20:47.597951 4897 patch_prober.go:28] interesting pod/route-controller-manager-5969cf9b49-8bk2x container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" start-of-body= Feb 28 13:20:47 crc 
kubenswrapper[4897]: I0228 13:20:47.598017 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5969cf9b49-8bk2x" podUID="09c1992d-7a3a-4d66-9a74-d20f4e6b2136" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" Feb 28 13:20:49 crc kubenswrapper[4897]: E0228 13:20:49.422849 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 28 13:20:49 crc kubenswrapper[4897]: E0228 13:20:49.423289 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 13:20:49 crc kubenswrapper[4897]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 28 13:20:49 crc kubenswrapper[4897]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ntptc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29538078-hj8mj_openshift-infra(79743a51-c0b2-45b2-99d3-385e0b2f2c6f): ErrImagePull: rpc error: code = 
Canceled desc = copying system image from manifest list: copying config: context canceled Feb 28 13:20:49 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 13:20:49 crc kubenswrapper[4897]: E0228 13:20:49.424527 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-infra/auto-csr-approver-29538078-hj8mj" podUID="79743a51-c0b2-45b2-99d3-385e0b2f2c6f" Feb 28 13:20:49 crc kubenswrapper[4897]: E0228 13:20:49.561936 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538078-hj8mj" podUID="79743a51-c0b2-45b2-99d3-385e0b2f2c6f" Feb 28 13:20:49 crc kubenswrapper[4897]: E0228 13:20:49.768159 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 28 13:20:49 crc kubenswrapper[4897]: E0228 13:20:49.768361 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xbsfd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-sv5dr_openshift-marketplace(c8e82c23-54f4-43a4-904b-4f90348580ac): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 28 13:20:49 crc kubenswrapper[4897]: E0228 13:20:49.769524 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-sv5dr" podUID="c8e82c23-54f4-43a4-904b-4f90348580ac" Feb 28 13:20:51 crc 
kubenswrapper[4897]: E0228 13:20:51.608415 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-sv5dr" podUID="c8e82c23-54f4-43a4-904b-4f90348580ac" Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.655609 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5969cf9b49-8bk2x" Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.660608 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-549fb47d5d-22c8f" Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.701379 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-547f5899ff-gjxf9"] Feb 28 13:20:51 crc kubenswrapper[4897]: E0228 13:20:51.702559 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09c1992d-7a3a-4d66-9a74-d20f4e6b2136" containerName="route-controller-manager" Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.702615 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="09c1992d-7a3a-4d66-9a74-d20f4e6b2136" containerName="route-controller-manager" Feb 28 13:20:51 crc kubenswrapper[4897]: E0228 13:20:51.702650 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02ffda6c-19d5-465a-8db7-d094fb1590b8" containerName="pruner" Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.702667 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="02ffda6c-19d5-465a-8db7-d094fb1590b8" containerName="pruner" Feb 28 13:20:51 crc kubenswrapper[4897]: E0228 13:20:51.702699 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1" 
containerName="collect-profiles" Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.702716 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1" containerName="collect-profiles" Feb 28 13:20:51 crc kubenswrapper[4897]: E0228 13:20:51.702747 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="555fd921-4e06-4a2b-b800-744d83d5caf1" containerName="pruner" Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.702764 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="555fd921-4e06-4a2b-b800-744d83d5caf1" containerName="pruner" Feb 28 13:20:51 crc kubenswrapper[4897]: E0228 13:20:51.702792 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24aab8b8-211b-4f6d-8cba-81fd27a8f890" containerName="controller-manager" Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.702810 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="24aab8b8-211b-4f6d-8cba-81fd27a8f890" containerName="controller-manager" Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.703064 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1" containerName="collect-profiles" Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.703094 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="02ffda6c-19d5-465a-8db7-d094fb1590b8" containerName="pruner" Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.703115 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="24aab8b8-211b-4f6d-8cba-81fd27a8f890" containerName="controller-manager" Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.703141 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="555fd921-4e06-4a2b-b800-744d83d5caf1" containerName="pruner" Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.703162 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="09c1992d-7a3a-4d66-9a74-d20f4e6b2136" 
containerName="route-controller-manager" Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.704044 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-547f5899ff-gjxf9" Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.713594 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-547f5899ff-gjxf9"] Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.787041 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24aab8b8-211b-4f6d-8cba-81fd27a8f890-config\") pod \"24aab8b8-211b-4f6d-8cba-81fd27a8f890\" (UID: \"24aab8b8-211b-4f6d-8cba-81fd27a8f890\") " Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.787112 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09c1992d-7a3a-4d66-9a74-d20f4e6b2136-config\") pod \"09c1992d-7a3a-4d66-9a74-d20f4e6b2136\" (UID: \"09c1992d-7a3a-4d66-9a74-d20f4e6b2136\") " Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.787152 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09c1992d-7a3a-4d66-9a74-d20f4e6b2136-serving-cert\") pod \"09c1992d-7a3a-4d66-9a74-d20f4e6b2136\" (UID: \"09c1992d-7a3a-4d66-9a74-d20f4e6b2136\") " Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.787244 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzxfp\" (UniqueName: \"kubernetes.io/projected/24aab8b8-211b-4f6d-8cba-81fd27a8f890-kube-api-access-wzxfp\") pod \"24aab8b8-211b-4f6d-8cba-81fd27a8f890\" (UID: \"24aab8b8-211b-4f6d-8cba-81fd27a8f890\") " Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.787298 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24aab8b8-211b-4f6d-8cba-81fd27a8f890-client-ca\") pod \"24aab8b8-211b-4f6d-8cba-81fd27a8f890\" (UID: \"24aab8b8-211b-4f6d-8cba-81fd27a8f890\") " Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.787390 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09c1992d-7a3a-4d66-9a74-d20f4e6b2136-client-ca\") pod \"09c1992d-7a3a-4d66-9a74-d20f4e6b2136\" (UID: \"09c1992d-7a3a-4d66-9a74-d20f4e6b2136\") " Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.787432 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcpzr\" (UniqueName: \"kubernetes.io/projected/09c1992d-7a3a-4d66-9a74-d20f4e6b2136-kube-api-access-zcpzr\") pod \"09c1992d-7a3a-4d66-9a74-d20f4e6b2136\" (UID: \"09c1992d-7a3a-4d66-9a74-d20f4e6b2136\") " Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.787485 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24aab8b8-211b-4f6d-8cba-81fd27a8f890-proxy-ca-bundles\") pod \"24aab8b8-211b-4f6d-8cba-81fd27a8f890\" (UID: \"24aab8b8-211b-4f6d-8cba-81fd27a8f890\") " Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.787524 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24aab8b8-211b-4f6d-8cba-81fd27a8f890-serving-cert\") pod \"24aab8b8-211b-4f6d-8cba-81fd27a8f890\" (UID: \"24aab8b8-211b-4f6d-8cba-81fd27a8f890\") " Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.788063 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09c1992d-7a3a-4d66-9a74-d20f4e6b2136-config" (OuterVolumeSpecName: "config") pod "09c1992d-7a3a-4d66-9a74-d20f4e6b2136" (UID: "09c1992d-7a3a-4d66-9a74-d20f4e6b2136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.788113 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24aab8b8-211b-4f6d-8cba-81fd27a8f890-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "24aab8b8-211b-4f6d-8cba-81fd27a8f890" (UID: "24aab8b8-211b-4f6d-8cba-81fd27a8f890"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.788195 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24aab8b8-211b-4f6d-8cba-81fd27a8f890-client-ca" (OuterVolumeSpecName: "client-ca") pod "24aab8b8-211b-4f6d-8cba-81fd27a8f890" (UID: "24aab8b8-211b-4f6d-8cba-81fd27a8f890"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.788246 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09c1992d-7a3a-4d66-9a74-d20f4e6b2136-client-ca" (OuterVolumeSpecName: "client-ca") pod "09c1992d-7a3a-4d66-9a74-d20f4e6b2136" (UID: "09c1992d-7a3a-4d66-9a74-d20f4e6b2136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.788621 4897 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24aab8b8-211b-4f6d-8cba-81fd27a8f890-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.788654 4897 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09c1992d-7a3a-4d66-9a74-d20f4e6b2136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.788670 4897 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24aab8b8-211b-4f6d-8cba-81fd27a8f890-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.788687 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09c1992d-7a3a-4d66-9a74-d20f4e6b2136-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.789243 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24aab8b8-211b-4f6d-8cba-81fd27a8f890-config" (OuterVolumeSpecName: "config") pod "24aab8b8-211b-4f6d-8cba-81fd27a8f890" (UID: "24aab8b8-211b-4f6d-8cba-81fd27a8f890"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.794152 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24aab8b8-211b-4f6d-8cba-81fd27a8f890-kube-api-access-wzxfp" (OuterVolumeSpecName: "kube-api-access-wzxfp") pod "24aab8b8-211b-4f6d-8cba-81fd27a8f890" (UID: "24aab8b8-211b-4f6d-8cba-81fd27a8f890"). InnerVolumeSpecName "kube-api-access-wzxfp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.794180 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09c1992d-7a3a-4d66-9a74-d20f4e6b2136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09c1992d-7a3a-4d66-9a74-d20f4e6b2136" (UID: "09c1992d-7a3a-4d66-9a74-d20f4e6b2136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.794521 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24aab8b8-211b-4f6d-8cba-81fd27a8f890-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "24aab8b8-211b-4f6d-8cba-81fd27a8f890" (UID: "24aab8b8-211b-4f6d-8cba-81fd27a8f890"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.802009 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09c1992d-7a3a-4d66-9a74-d20f4e6b2136-kube-api-access-zcpzr" (OuterVolumeSpecName: "kube-api-access-zcpzr") pod "09c1992d-7a3a-4d66-9a74-d20f4e6b2136" (UID: "09c1992d-7a3a-4d66-9a74-d20f4e6b2136"). InnerVolumeSpecName "kube-api-access-zcpzr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.889793 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b1f2de5-5f60-475e-b75d-597c33c23110-client-ca\") pod \"route-controller-manager-547f5899ff-gjxf9\" (UID: \"2b1f2de5-5f60-475e-b75d-597c33c23110\") " pod="openshift-route-controller-manager/route-controller-manager-547f5899ff-gjxf9" Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.889864 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tpmw\" (UniqueName: \"kubernetes.io/projected/2b1f2de5-5f60-475e-b75d-597c33c23110-kube-api-access-2tpmw\") pod \"route-controller-manager-547f5899ff-gjxf9\" (UID: \"2b1f2de5-5f60-475e-b75d-597c33c23110\") " pod="openshift-route-controller-manager/route-controller-manager-547f5899ff-gjxf9" Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.889905 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b1f2de5-5f60-475e-b75d-597c33c23110-serving-cert\") pod \"route-controller-manager-547f5899ff-gjxf9\" (UID: \"2b1f2de5-5f60-475e-b75d-597c33c23110\") " pod="openshift-route-controller-manager/route-controller-manager-547f5899ff-gjxf9" Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.889951 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b1f2de5-5f60-475e-b75d-597c33c23110-config\") pod \"route-controller-manager-547f5899ff-gjxf9\" (UID: \"2b1f2de5-5f60-475e-b75d-597c33c23110\") " pod="openshift-route-controller-manager/route-controller-manager-547f5899ff-gjxf9" Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.890012 4897 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/24aab8b8-211b-4f6d-8cba-81fd27a8f890-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.890026 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09c1992d-7a3a-4d66-9a74-d20f4e6b2136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.890038 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzxfp\" (UniqueName: \"kubernetes.io/projected/24aab8b8-211b-4f6d-8cba-81fd27a8f890-kube-api-access-wzxfp\") on node \"crc\" DevicePath \"\"" Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.890052 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcpzr\" (UniqueName: \"kubernetes.io/projected/09c1992d-7a3a-4d66-9a74-d20f4e6b2136-kube-api-access-zcpzr\") on node \"crc\" DevicePath \"\"" Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.890064 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24aab8b8-211b-4f6d-8cba-81fd27a8f890-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.991556 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b1f2de5-5f60-475e-b75d-597c33c23110-config\") pod \"route-controller-manager-547f5899ff-gjxf9\" (UID: \"2b1f2de5-5f60-475e-b75d-597c33c23110\") " pod="openshift-route-controller-manager/route-controller-manager-547f5899ff-gjxf9" Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.991698 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b1f2de5-5f60-475e-b75d-597c33c23110-client-ca\") pod \"route-controller-manager-547f5899ff-gjxf9\" (UID: \"2b1f2de5-5f60-475e-b75d-597c33c23110\") " 
pod="openshift-route-controller-manager/route-controller-manager-547f5899ff-gjxf9" Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.991786 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tpmw\" (UniqueName: \"kubernetes.io/projected/2b1f2de5-5f60-475e-b75d-597c33c23110-kube-api-access-2tpmw\") pod \"route-controller-manager-547f5899ff-gjxf9\" (UID: \"2b1f2de5-5f60-475e-b75d-597c33c23110\") " pod="openshift-route-controller-manager/route-controller-manager-547f5899ff-gjxf9" Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.991844 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b1f2de5-5f60-475e-b75d-597c33c23110-serving-cert\") pod \"route-controller-manager-547f5899ff-gjxf9\" (UID: \"2b1f2de5-5f60-475e-b75d-597c33c23110\") " pod="openshift-route-controller-manager/route-controller-manager-547f5899ff-gjxf9" Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.993563 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b1f2de5-5f60-475e-b75d-597c33c23110-client-ca\") pod \"route-controller-manager-547f5899ff-gjxf9\" (UID: \"2b1f2de5-5f60-475e-b75d-597c33c23110\") " pod="openshift-route-controller-manager/route-controller-manager-547f5899ff-gjxf9" Feb 28 13:20:51 crc kubenswrapper[4897]: I0228 13:20:51.994803 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b1f2de5-5f60-475e-b75d-597c33c23110-config\") pod \"route-controller-manager-547f5899ff-gjxf9\" (UID: \"2b1f2de5-5f60-475e-b75d-597c33c23110\") " pod="openshift-route-controller-manager/route-controller-manager-547f5899ff-gjxf9" Feb 28 13:20:52 crc kubenswrapper[4897]: I0228 13:20:52.002799 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2b1f2de5-5f60-475e-b75d-597c33c23110-serving-cert\") pod \"route-controller-manager-547f5899ff-gjxf9\" (UID: \"2b1f2de5-5f60-475e-b75d-597c33c23110\") " pod="openshift-route-controller-manager/route-controller-manager-547f5899ff-gjxf9" Feb 28 13:20:52 crc kubenswrapper[4897]: I0228 13:20:52.019735 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tpmw\" (UniqueName: \"kubernetes.io/projected/2b1f2de5-5f60-475e-b75d-597c33c23110-kube-api-access-2tpmw\") pod \"route-controller-manager-547f5899ff-gjxf9\" (UID: \"2b1f2de5-5f60-475e-b75d-597c33c23110\") " pod="openshift-route-controller-manager/route-controller-manager-547f5899ff-gjxf9" Feb 28 13:20:52 crc kubenswrapper[4897]: I0228 13:20:52.027070 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-547f5899ff-gjxf9" Feb 28 13:20:52 crc kubenswrapper[4897]: I0228 13:20:52.332043 4897 ???:1] "http: TLS handshake error from 192.168.126.11:54374: no serving certificate available for the kubelet" Feb 28 13:20:52 crc kubenswrapper[4897]: I0228 13:20:52.587453 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-549fb47d5d-22c8f" Feb 28 13:20:52 crc kubenswrapper[4897]: I0228 13:20:52.587439 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-549fb47d5d-22c8f" event={"ID":"24aab8b8-211b-4f6d-8cba-81fd27a8f890","Type":"ContainerDied","Data":"58c33ec559049af021665c6c9d36bdc89623f94ab8e360c47978a806979ca280"} Feb 28 13:20:52 crc kubenswrapper[4897]: I0228 13:20:52.587646 4897 scope.go:117] "RemoveContainer" containerID="fa52e6ac4bfc09e1d3be6b19c754026afcd479445a240c9f4eedb5bb2eef55f7" Feb 28 13:20:52 crc kubenswrapper[4897]: I0228 13:20:52.591578 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5969cf9b49-8bk2x" event={"ID":"09c1992d-7a3a-4d66-9a74-d20f4e6b2136","Type":"ContainerDied","Data":"da9c1a417d789ef69ef71cf3084aa93479bde413a52389bf2fb6cf6dd174bd14"} Feb 28 13:20:52 crc kubenswrapper[4897]: I0228 13:20:52.591720 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5969cf9b49-8bk2x" Feb 28 13:20:52 crc kubenswrapper[4897]: I0228 13:20:52.619766 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-549fb47d5d-22c8f"] Feb 28 13:20:52 crc kubenswrapper[4897]: I0228 13:20:52.624026 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-549fb47d5d-22c8f"] Feb 28 13:20:52 crc kubenswrapper[4897]: I0228 13:20:52.630465 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5969cf9b49-8bk2x"] Feb 28 13:20:52 crc kubenswrapper[4897]: I0228 13:20:52.633575 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5969cf9b49-8bk2x"] Feb 28 13:20:53 crc kubenswrapper[4897]: I0228 13:20:53.442456 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-rd9tl" Feb 28 13:20:53 crc kubenswrapper[4897]: I0228 13:20:53.448224 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-rd9tl" Feb 28 13:20:53 crc kubenswrapper[4897]: I0228 13:20:53.983596 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vk9dl" Feb 28 13:20:54 crc kubenswrapper[4897]: I0228 13:20:54.267574 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5c5c958b59-g78sr"] Feb 28 13:20:54 crc kubenswrapper[4897]: I0228 13:20:54.268880 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5c5c958b59-g78sr" Feb 28 13:20:54 crc kubenswrapper[4897]: I0228 13:20:54.271213 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 28 13:20:54 crc kubenswrapper[4897]: I0228 13:20:54.271629 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 28 13:20:54 crc kubenswrapper[4897]: I0228 13:20:54.271802 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 28 13:20:54 crc kubenswrapper[4897]: I0228 13:20:54.271946 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 28 13:20:54 crc kubenswrapper[4897]: I0228 13:20:54.272421 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 28 13:20:54 crc kubenswrapper[4897]: I0228 13:20:54.273208 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 28 13:20:54 crc kubenswrapper[4897]: I0228 13:20:54.277365 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c5c958b59-g78sr"] Feb 28 13:20:54 crc kubenswrapper[4897]: I0228 13:20:54.277587 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 28 13:20:54 crc kubenswrapper[4897]: I0228 13:20:54.381892 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922-serving-cert\") pod \"controller-manager-5c5c958b59-g78sr\" (UID: \"e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922\") " 
pod="openshift-controller-manager/controller-manager-5c5c958b59-g78sr" Feb 28 13:20:54 crc kubenswrapper[4897]: I0228 13:20:54.381941 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922-client-ca\") pod \"controller-manager-5c5c958b59-g78sr\" (UID: \"e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922\") " pod="openshift-controller-manager/controller-manager-5c5c958b59-g78sr" Feb 28 13:20:54 crc kubenswrapper[4897]: I0228 13:20:54.382002 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922-kube-api-access-n6sqt\") pod \"controller-manager-5c5c958b59-g78sr\" (UID: \"e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922\") " pod="openshift-controller-manager/controller-manager-5c5c958b59-g78sr" Feb 28 13:20:54 crc kubenswrapper[4897]: I0228 13:20:54.382069 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922-config\") pod \"controller-manager-5c5c958b59-g78sr\" (UID: \"e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922\") " pod="openshift-controller-manager/controller-manager-5c5c958b59-g78sr" Feb 28 13:20:54 crc kubenswrapper[4897]: I0228 13:20:54.382110 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922-proxy-ca-bundles\") pod \"controller-manager-5c5c958b59-g78sr\" (UID: \"e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922\") " pod="openshift-controller-manager/controller-manager-5c5c958b59-g78sr" Feb 28 13:20:54 crc kubenswrapper[4897]: I0228 13:20:54.464341 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="09c1992d-7a3a-4d66-9a74-d20f4e6b2136" path="/var/lib/kubelet/pods/09c1992d-7a3a-4d66-9a74-d20f4e6b2136/volumes" Feb 28 13:20:54 crc kubenswrapper[4897]: I0228 13:20:54.465212 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24aab8b8-211b-4f6d-8cba-81fd27a8f890" path="/var/lib/kubelet/pods/24aab8b8-211b-4f6d-8cba-81fd27a8f890/volumes" Feb 28 13:20:54 crc kubenswrapper[4897]: I0228 13:20:54.483679 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922-proxy-ca-bundles\") pod \"controller-manager-5c5c958b59-g78sr\" (UID: \"e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922\") " pod="openshift-controller-manager/controller-manager-5c5c958b59-g78sr" Feb 28 13:20:54 crc kubenswrapper[4897]: I0228 13:20:54.483746 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922-serving-cert\") pod \"controller-manager-5c5c958b59-g78sr\" (UID: \"e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922\") " pod="openshift-controller-manager/controller-manager-5c5c958b59-g78sr" Feb 28 13:20:54 crc kubenswrapper[4897]: I0228 13:20:54.483772 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922-client-ca\") pod \"controller-manager-5c5c958b59-g78sr\" (UID: \"e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922\") " pod="openshift-controller-manager/controller-manager-5c5c958b59-g78sr" Feb 28 13:20:54 crc kubenswrapper[4897]: I0228 13:20:54.483823 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922-kube-api-access-n6sqt\") pod \"controller-manager-5c5c958b59-g78sr\" (UID: \"e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922\") " 
pod="openshift-controller-manager/controller-manager-5c5c958b59-g78sr" Feb 28 13:20:54 crc kubenswrapper[4897]: I0228 13:20:54.483863 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922-config\") pod \"controller-manager-5c5c958b59-g78sr\" (UID: \"e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922\") " pod="openshift-controller-manager/controller-manager-5c5c958b59-g78sr" Feb 28 13:20:54 crc kubenswrapper[4897]: I0228 13:20:54.484921 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922-client-ca\") pod \"controller-manager-5c5c958b59-g78sr\" (UID: \"e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922\") " pod="openshift-controller-manager/controller-manager-5c5c958b59-g78sr" Feb 28 13:20:54 crc kubenswrapper[4897]: I0228 13:20:54.485639 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922-proxy-ca-bundles\") pod \"controller-manager-5c5c958b59-g78sr\" (UID: \"e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922\") " pod="openshift-controller-manager/controller-manager-5c5c958b59-g78sr" Feb 28 13:20:54 crc kubenswrapper[4897]: I0228 13:20:54.486643 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922-config\") pod \"controller-manager-5c5c958b59-g78sr\" (UID: \"e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922\") " pod="openshift-controller-manager/controller-manager-5c5c958b59-g78sr" Feb 28 13:20:54 crc kubenswrapper[4897]: I0228 13:20:54.489809 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922-serving-cert\") pod \"controller-manager-5c5c958b59-g78sr\" (UID: 
\"e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922\") " pod="openshift-controller-manager/controller-manager-5c5c958b59-g78sr" Feb 28 13:20:54 crc kubenswrapper[4897]: I0228 13:20:54.502247 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922-kube-api-access-n6sqt\") pod \"controller-manager-5c5c958b59-g78sr\" (UID: \"e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922\") " pod="openshift-controller-manager/controller-manager-5c5c958b59-g78sr" Feb 28 13:20:54 crc kubenswrapper[4897]: I0228 13:20:54.604452 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c5c958b59-g78sr" Feb 28 13:20:59 crc kubenswrapper[4897]: I0228 13:20:59.336569 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 28 13:20:59 crc kubenswrapper[4897]: I0228 13:20:59.337916 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 28 13:20:59 crc kubenswrapper[4897]: I0228 13:20:59.340515 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 28 13:20:59 crc kubenswrapper[4897]: I0228 13:20:59.340647 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 28 13:20:59 crc kubenswrapper[4897]: I0228 13:20:59.361605 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 28 13:20:59 crc kubenswrapper[4897]: I0228 13:20:59.469770 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8877a37d-8255-48c2-ae08-424110bce430-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8877a37d-8255-48c2-ae08-424110bce430\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 28 13:20:59 crc kubenswrapper[4897]: I0228 13:20:59.469851 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8877a37d-8255-48c2-ae08-424110bce430-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8877a37d-8255-48c2-ae08-424110bce430\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 28 13:20:59 crc kubenswrapper[4897]: I0228 13:20:59.571151 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8877a37d-8255-48c2-ae08-424110bce430-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8877a37d-8255-48c2-ae08-424110bce430\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 28 13:20:59 crc kubenswrapper[4897]: I0228 13:20:59.571244 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/8877a37d-8255-48c2-ae08-424110bce430-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8877a37d-8255-48c2-ae08-424110bce430\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 28 13:20:59 crc kubenswrapper[4897]: I0228 13:20:59.571928 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8877a37d-8255-48c2-ae08-424110bce430-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8877a37d-8255-48c2-ae08-424110bce430\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 28 13:20:59 crc kubenswrapper[4897]: I0228 13:20:59.604857 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8877a37d-8255-48c2-ae08-424110bce430-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8877a37d-8255-48c2-ae08-424110bce430\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 28 13:20:59 crc kubenswrapper[4897]: I0228 13:20:59.665654 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 28 13:21:02 crc kubenswrapper[4897]: E0228 13:21:02.519967 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-whbtd" podUID="fa5ec60f-f348-43b8-8ef2-9caafd08cb0d" Feb 28 13:21:02 crc kubenswrapper[4897]: E0228 13:21:02.520029 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-q9d2n" podUID="657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5" Feb 28 13:21:02 crc kubenswrapper[4897]: E0228 13:21:02.814960 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 28 13:21:02 crc kubenswrapper[4897]: E0228 13:21:02.835746 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gz4l9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-wj92z_openshift-marketplace(1acb2f9f-f650-4f19-965e-48ba5a1ddac2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 28 13:21:02 crc kubenswrapper[4897]: E0228 13:21:02.837495 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-wj92z" podUID="1acb2f9f-f650-4f19-965e-48ba5a1ddac2" Feb 28 13:21:02 crc 
kubenswrapper[4897]: I0228 13:21:02.863771 4897 scope.go:117] "RemoveContainer" containerID="b74bf0b158da5a51a5767f8fcc0b0db8362255a2e168e48cc585a06853a72cb4" Feb 28 13:21:03 crc kubenswrapper[4897]: I0228 13:21:03.327194 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 28 13:21:03 crc kubenswrapper[4897]: I0228 13:21:03.371255 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 13:21:03 crc kubenswrapper[4897]: I0228 13:21:03.371298 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 13:21:03 crc kubenswrapper[4897]: I0228 13:21:03.412069 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-547f5899ff-gjxf9"] Feb 28 13:21:03 crc kubenswrapper[4897]: I0228 13:21:03.430764 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c5c958b59-g78sr"] Feb 28 13:21:03 crc kubenswrapper[4897]: W0228 13:21:03.443194 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5c2b9a6_accb_47c5_8d99_7c5d6fd0e922.slice/crio-dbd891aaf9e346c71efa281ab602b8871bbb2442b44ba2602dce4eb678229e9c WatchSource:0}: Error finding container dbd891aaf9e346c71efa281ab602b8871bbb2442b44ba2602dce4eb678229e9c: Status 404 returned error can't find the container with id 
dbd891aaf9e346c71efa281ab602b8871bbb2442b44ba2602dce4eb678229e9c Feb 28 13:21:03 crc kubenswrapper[4897]: W0228 13:21:03.443839 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod8877a37d_8255_48c2_ae08_424110bce430.slice/crio-40b235b6bef96705934f35eecf31841de909ac1f5054ba573149c18baf6486cb WatchSource:0}: Error finding container 40b235b6bef96705934f35eecf31841de909ac1f5054ba573149c18baf6486cb: Status 404 returned error can't find the container with id 40b235b6bef96705934f35eecf31841de909ac1f5054ba573149c18baf6486cb Feb 28 13:21:03 crc kubenswrapper[4897]: W0228 13:21:03.445006 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b1f2de5_5f60_475e_b75d_597c33c23110.slice/crio-28c1a25f8f155d2565400cf2549f5fae95e94f9de56f974111a4dacaf92acc5a WatchSource:0}: Error finding container 28c1a25f8f155d2565400cf2549f5fae95e94f9de56f974111a4dacaf92acc5a: Status 404 returned error can't find the container with id 28c1a25f8f155d2565400cf2549f5fae95e94f9de56f974111a4dacaf92acc5a Feb 28 13:21:03 crc kubenswrapper[4897]: I0228 13:21:03.670371 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-547f5899ff-gjxf9" event={"ID":"2b1f2de5-5f60-475e-b75d-597c33c23110","Type":"ContainerStarted","Data":"45a7923d869ae4749796fbca58366bbce11b37bd534687d9edabc79c17cb5392"} Feb 28 13:21:03 crc kubenswrapper[4897]: I0228 13:21:03.670766 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-547f5899ff-gjxf9" event={"ID":"2b1f2de5-5f60-475e-b75d-597c33c23110","Type":"ContainerStarted","Data":"28c1a25f8f155d2565400cf2549f5fae95e94f9de56f974111a4dacaf92acc5a"} Feb 28 13:21:03 crc kubenswrapper[4897]: I0228 13:21:03.671679 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-547f5899ff-gjxf9" Feb 28 13:21:03 crc kubenswrapper[4897]: I0228 13:21:03.678759 4897 generic.go:334] "Generic (PLEG): container finished" podID="488e35b2-95c6-4499-be2f-5a2d15cdf5d4" containerID="8ef8f3a76e0ad39edf9489b3ef6e4f307bcfc81847722d50808fc3684a9310c7" exitCode=0 Feb 28 13:21:03 crc kubenswrapper[4897]: I0228 13:21:03.678833 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-78cjq" event={"ID":"488e35b2-95c6-4499-be2f-5a2d15cdf5d4","Type":"ContainerDied","Data":"8ef8f3a76e0ad39edf9489b3ef6e4f307bcfc81847722d50808fc3684a9310c7"} Feb 28 13:21:03 crc kubenswrapper[4897]: I0228 13:21:03.680870 4897 generic.go:334] "Generic (PLEG): container finished" podID="34293634-5315-4dac-94b9-258b99c8a9c1" containerID="f3de6218310becb4e4ff8696eb60aa03364152b5e6c0cf43d9b7c7fde154684e" exitCode=0 Feb 28 13:21:03 crc kubenswrapper[4897]: I0228 13:21:03.680973 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4slc" event={"ID":"34293634-5315-4dac-94b9-258b99c8a9c1","Type":"ContainerDied","Data":"f3de6218310becb4e4ff8696eb60aa03364152b5e6c0cf43d9b7c7fde154684e"} Feb 28 13:21:03 crc kubenswrapper[4897]: I0228 13:21:03.682873 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8877a37d-8255-48c2-ae08-424110bce430","Type":"ContainerStarted","Data":"40b235b6bef96705934f35eecf31841de909ac1f5054ba573149c18baf6486cb"} Feb 28 13:21:03 crc kubenswrapper[4897]: I0228 13:21:03.686385 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qxsqd" event={"ID":"a0865f08-bed5-4fbb-ab37-582862fb0616","Type":"ContainerStarted","Data":"39c690dd4eaa0f2b66fe44ca9ed86bcc6d3a57f2a117be0314e7ac60f7d7bc28"} Feb 28 13:21:03 crc kubenswrapper[4897]: I0228 13:21:03.688508 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-5c5c958b59-g78sr" event={"ID":"e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922","Type":"ContainerStarted","Data":"9dc717f2a53f14b966c2befda7641f13f71e6dd5feaf8df937424c547e7f9a64"} Feb 28 13:21:03 crc kubenswrapper[4897]: I0228 13:21:03.688561 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c5c958b59-g78sr" event={"ID":"e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922","Type":"ContainerStarted","Data":"dbd891aaf9e346c71efa281ab602b8871bbb2442b44ba2602dce4eb678229e9c"} Feb 28 13:21:03 crc kubenswrapper[4897]: I0228 13:21:03.688675 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5c5c958b59-g78sr" Feb 28 13:21:03 crc kubenswrapper[4897]: I0228 13:21:03.689944 4897 patch_prober.go:28] interesting pod/controller-manager-5c5c958b59-g78sr container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body= Feb 28 13:21:03 crc kubenswrapper[4897]: I0228 13:21:03.689987 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5c5c958b59-g78sr" podUID="e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" Feb 28 13:21:03 crc kubenswrapper[4897]: I0228 13:21:03.690220 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538080-qcrrw" event={"ID":"a52c7385-4178-4038-93b0-5cd758958e80","Type":"ContainerStarted","Data":"2f2df56771d11b40b5190befed758d1b56777f1de09a0260ea556b12913cb5ba"} Feb 28 13:21:03 crc kubenswrapper[4897]: E0228 13:21:03.692666 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-wj92z" podUID="1acb2f9f-f650-4f19-965e-48ba5a1ddac2" Feb 28 13:21:03 crc kubenswrapper[4897]: I0228 13:21:03.693750 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-547f5899ff-gjxf9" podStartSLOduration=19.693733819 podStartE2EDuration="19.693733819s" podCreationTimestamp="2026-02-28 13:20:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:21:03.692365594 +0000 UTC m=+277.934686281" watchObservedRunningTime="2026-02-28 13:21:03.693733819 +0000 UTC m=+277.936054476" Feb 28 13:21:03 crc kubenswrapper[4897]: I0228 13:21:03.719268 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5c5c958b59-g78sr" podStartSLOduration=19.719250631 podStartE2EDuration="19.719250631s" podCreationTimestamp="2026-02-28 13:20:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:21:03.71603965 +0000 UTC m=+277.958360307" watchObservedRunningTime="2026-02-28 13:21:03.719250631 +0000 UTC m=+277.961571288" Feb 28 13:21:03 crc kubenswrapper[4897]: I0228 13:21:03.727256 4897 csr.go:261] certificate signing request csr-kd8qw is approved, waiting to be issued Feb 28 13:21:03 crc kubenswrapper[4897]: I0228 13:21:03.734593 4897 csr.go:257] certificate signing request csr-kd8qw is issued Feb 28 13:21:03 crc kubenswrapper[4897]: I0228 13:21:03.760628 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29538080-qcrrw" podStartSLOduration=15.1087416 podStartE2EDuration="1m3.760609721s" 
podCreationTimestamp="2026-02-28 13:20:00 +0000 UTC" firstStartedPulling="2026-02-28 13:20:14.383523521 +0000 UTC m=+228.625844178" lastFinishedPulling="2026-02-28 13:21:03.035391642 +0000 UTC m=+277.277712299" observedRunningTime="2026-02-28 13:21:03.758749435 +0000 UTC m=+278.001070092" watchObservedRunningTime="2026-02-28 13:21:03.760609721 +0000 UTC m=+278.002930378" Feb 28 13:21:03 crc kubenswrapper[4897]: I0228 13:21:03.977722 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-547f5899ff-gjxf9" Feb 28 13:21:04 crc kubenswrapper[4897]: I0228 13:21:04.483500 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5c5c958b59-g78sr"] Feb 28 13:21:04 crc kubenswrapper[4897]: I0228 13:21:04.537006 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 28 13:21:04 crc kubenswrapper[4897]: I0228 13:21:04.537803 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 28 13:21:04 crc kubenswrapper[4897]: I0228 13:21:04.546979 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 28 13:21:04 crc kubenswrapper[4897]: I0228 13:21:04.560459 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dce17ac4-0687-4628-beb9-963332095590-kubelet-dir\") pod \"installer-9-crc\" (UID: \"dce17ac4-0687-4628-beb9-963332095590\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 28 13:21:04 crc kubenswrapper[4897]: I0228 13:21:04.560536 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dce17ac4-0687-4628-beb9-963332095590-kube-api-access\") pod \"installer-9-crc\" (UID: \"dce17ac4-0687-4628-beb9-963332095590\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 28 13:21:04 crc kubenswrapper[4897]: I0228 13:21:04.560597 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dce17ac4-0687-4628-beb9-963332095590-var-lock\") pod \"installer-9-crc\" (UID: \"dce17ac4-0687-4628-beb9-963332095590\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 28 13:21:04 crc kubenswrapper[4897]: I0228 13:21:04.593841 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-547f5899ff-gjxf9"] Feb 28 13:21:04 crc kubenswrapper[4897]: I0228 13:21:04.611109 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5c5c958b59-g78sr" Feb 28 13:21:04 crc kubenswrapper[4897]: I0228 13:21:04.661737 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/dce17ac4-0687-4628-beb9-963332095590-kubelet-dir\") pod \"installer-9-crc\" (UID: \"dce17ac4-0687-4628-beb9-963332095590\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 28 13:21:04 crc kubenswrapper[4897]: I0228 13:21:04.661824 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dce17ac4-0687-4628-beb9-963332095590-kube-api-access\") pod \"installer-9-crc\" (UID: \"dce17ac4-0687-4628-beb9-963332095590\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 28 13:21:04 crc kubenswrapper[4897]: I0228 13:21:04.661874 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dce17ac4-0687-4628-beb9-963332095590-var-lock\") pod \"installer-9-crc\" (UID: \"dce17ac4-0687-4628-beb9-963332095590\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 28 13:21:04 crc kubenswrapper[4897]: I0228 13:21:04.661939 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dce17ac4-0687-4628-beb9-963332095590-kubelet-dir\") pod \"installer-9-crc\" (UID: \"dce17ac4-0687-4628-beb9-963332095590\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 28 13:21:04 crc kubenswrapper[4897]: I0228 13:21:04.662002 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dce17ac4-0687-4628-beb9-963332095590-var-lock\") pod \"installer-9-crc\" (UID: \"dce17ac4-0687-4628-beb9-963332095590\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 28 13:21:04 crc kubenswrapper[4897]: I0228 13:21:04.690978 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dce17ac4-0687-4628-beb9-963332095590-kube-api-access\") pod \"installer-9-crc\" (UID: \"dce17ac4-0687-4628-beb9-963332095590\") " 
pod="openshift-kube-apiserver/installer-9-crc" Feb 28 13:21:04 crc kubenswrapper[4897]: I0228 13:21:04.697310 4897 generic.go:334] "Generic (PLEG): container finished" podID="a0865f08-bed5-4fbb-ab37-582862fb0616" containerID="39c690dd4eaa0f2b66fe44ca9ed86bcc6d3a57f2a117be0314e7ac60f7d7bc28" exitCode=0 Feb 28 13:21:04 crc kubenswrapper[4897]: I0228 13:21:04.697421 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qxsqd" event={"ID":"a0865f08-bed5-4fbb-ab37-582862fb0616","Type":"ContainerDied","Data":"39c690dd4eaa0f2b66fe44ca9ed86bcc6d3a57f2a117be0314e7ac60f7d7bc28"} Feb 28 13:21:04 crc kubenswrapper[4897]: I0228 13:21:04.713187 4897 generic.go:334] "Generic (PLEG): container finished" podID="c752ba9a-f6f8-4530-91a9-c06ff609e9d8" containerID="6d7e1bb4c48da64e1d0dfd324933f6e569728f0db044d3e66dd1e06b0ad661db" exitCode=0 Feb 28 13:21:04 crc kubenswrapper[4897]: I0228 13:21:04.713272 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bfpj4" event={"ID":"c752ba9a-f6f8-4530-91a9-c06ff609e9d8","Type":"ContainerDied","Data":"6d7e1bb4c48da64e1d0dfd324933f6e569728f0db044d3e66dd1e06b0ad661db"} Feb 28 13:21:04 crc kubenswrapper[4897]: I0228 13:21:04.719048 4897 generic.go:334] "Generic (PLEG): container finished" podID="a52c7385-4178-4038-93b0-5cd758958e80" containerID="2f2df56771d11b40b5190befed758d1b56777f1de09a0260ea556b12913cb5ba" exitCode=0 Feb 28 13:21:04 crc kubenswrapper[4897]: I0228 13:21:04.719124 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538080-qcrrw" event={"ID":"a52c7385-4178-4038-93b0-5cd758958e80","Type":"ContainerDied","Data":"2f2df56771d11b40b5190befed758d1b56777f1de09a0260ea556b12913cb5ba"} Feb 28 13:21:04 crc kubenswrapper[4897]: I0228 13:21:04.724711 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-78cjq" 
event={"ID":"488e35b2-95c6-4499-be2f-5a2d15cdf5d4","Type":"ContainerStarted","Data":"ccdc6d7f258364deceeac741fe44bd4ab957785aa67728fbee3350dec13779ce"} Feb 28 13:21:04 crc kubenswrapper[4897]: I0228 13:21:04.727996 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4slc" event={"ID":"34293634-5315-4dac-94b9-258b99c8a9c1","Type":"ContainerStarted","Data":"5755f85ed2d7001e68d6c24610ce541ef32a2cb42accd6d50993cddf43d4b1b8"} Feb 28 13:21:04 crc kubenswrapper[4897]: I0228 13:21:04.729554 4897 generic.go:334] "Generic (PLEG): container finished" podID="8877a37d-8255-48c2-ae08-424110bce430" containerID="ae99f218478dd3ecc0e86684c599c8b60daf5310d74c2f32b8e93a3e8fc63ef6" exitCode=0 Feb 28 13:21:04 crc kubenswrapper[4897]: I0228 13:21:04.729586 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8877a37d-8255-48c2-ae08-424110bce430","Type":"ContainerDied","Data":"ae99f218478dd3ecc0e86684c599c8b60daf5310d74c2f32b8e93a3e8fc63ef6"} Feb 28 13:21:04 crc kubenswrapper[4897]: I0228 13:21:04.735336 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-11 01:38:32.873140539 +0000 UTC Feb 28 13:21:04 crc kubenswrapper[4897]: I0228 13:21:04.735360 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6132h17m28.137782499s for next certificate rotation Feb 28 13:21:04 crc kubenswrapper[4897]: I0228 13:21:04.765785 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-78cjq" podStartSLOduration=1.963109521 podStartE2EDuration="38.765769525s" podCreationTimestamp="2026-02-28 13:20:26 +0000 UTC" firstStartedPulling="2026-02-28 13:20:27.313045422 +0000 UTC m=+241.555366079" lastFinishedPulling="2026-02-28 13:21:04.115705426 +0000 UTC m=+278.358026083" observedRunningTime="2026-02-28 
13:21:04.761764534 +0000 UTC m=+279.004085211" watchObservedRunningTime="2026-02-28 13:21:04.765769525 +0000 UTC m=+279.008090182" Feb 28 13:21:04 crc kubenswrapper[4897]: I0228 13:21:04.793441 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j4slc" podStartSLOduration=3.061267205 podStartE2EDuration="39.793418461s" podCreationTimestamp="2026-02-28 13:20:25 +0000 UTC" firstStartedPulling="2026-02-28 13:20:27.330669609 +0000 UTC m=+241.572990266" lastFinishedPulling="2026-02-28 13:21:04.062820865 +0000 UTC m=+278.305141522" observedRunningTime="2026-02-28 13:21:04.788014305 +0000 UTC m=+279.030334982" watchObservedRunningTime="2026-02-28 13:21:04.793418461 +0000 UTC m=+279.035739128" Feb 28 13:21:04 crc kubenswrapper[4897]: I0228 13:21:04.863304 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 28 13:21:05 crc kubenswrapper[4897]: I0228 13:21:05.327092 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 28 13:21:05 crc kubenswrapper[4897]: I0228 13:21:05.735763 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bfpj4" event={"ID":"c752ba9a-f6f8-4530-91a9-c06ff609e9d8","Type":"ContainerStarted","Data":"cd39bb3d297b5b66d8aee57ad0322868fc625c121670b42703174b5ad292d675"} Feb 28 13:21:05 crc kubenswrapper[4897]: I0228 13:21:05.737215 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"dce17ac4-0687-4628-beb9-963332095590","Type":"ContainerStarted","Data":"39a50ad07e02aab779a23a17dec69b7882fd93c0d2a181638fefc722681a2084"} Feb 28 13:21:05 crc kubenswrapper[4897]: I0228 13:21:05.737249 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"dce17ac4-0687-4628-beb9-963332095590","Type":"ContainerStarted","Data":"2c0f407a5bb10fac140d0a45cfa257b01172d63d71054e2eb629c4ef2f5e8462"} Feb 28 13:21:05 crc kubenswrapper[4897]: I0228 13:21:05.739231 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qxsqd" event={"ID":"a0865f08-bed5-4fbb-ab37-582862fb0616","Type":"ContainerStarted","Data":"908b942f242e9e943991a7f708eeef977013caecb7cf540ab42ca24fe731469b"} Feb 28 13:21:05 crc kubenswrapper[4897]: I0228 13:21:05.740115 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5c5c958b59-g78sr" podUID="e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922" containerName="controller-manager" containerID="cri-o://9dc717f2a53f14b966c2befda7641f13f71e6dd5feaf8df937424c547e7f9a64" gracePeriod=30 Feb 28 13:21:05 crc kubenswrapper[4897]: I0228 13:21:05.740160 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-547f5899ff-gjxf9" podUID="2b1f2de5-5f60-475e-b75d-597c33c23110" containerName="route-controller-manager" containerID="cri-o://45a7923d869ae4749796fbca58366bbce11b37bd534687d9edabc79c17cb5392" gracePeriod=30 Feb 28 13:21:05 crc kubenswrapper[4897]: I0228 13:21:05.781087 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bfpj4" podStartSLOduration=3.6252109519999998 podStartE2EDuration="42.781070625s" podCreationTimestamp="2026-02-28 13:20:23 +0000 UTC" firstStartedPulling="2026-02-28 13:20:26.251751444 +0000 UTC m=+240.494072101" lastFinishedPulling="2026-02-28 13:21:05.407611097 +0000 UTC m=+279.649931774" observedRunningTime="2026-02-28 13:21:05.778573922 +0000 UTC m=+280.020894579" watchObservedRunningTime="2026-02-28 13:21:05.781070625 +0000 UTC m=+280.023391282" Feb 28 13:21:05 crc kubenswrapper[4897]: I0228 13:21:05.804781 4897 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qxsqd" podStartSLOduration=7.014291126 podStartE2EDuration="38.804766391s" podCreationTimestamp="2026-02-28 13:20:27 +0000 UTC" firstStartedPulling="2026-02-28 13:20:33.68264735 +0000 UTC m=+247.924968037" lastFinishedPulling="2026-02-28 13:21:05.473122645 +0000 UTC m=+279.715443302" observedRunningTime="2026-02-28 13:21:05.802958965 +0000 UTC m=+280.045279632" watchObservedRunningTime="2026-02-28 13:21:05.804766391 +0000 UTC m=+280.047087048" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.150854 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.165683 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538080-qcrrw" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.177903 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j4slc" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.177944 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j4slc" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.186941 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8877a37d-8255-48c2-ae08-424110bce430-kube-api-access\") pod \"8877a37d-8255-48c2-ae08-424110bce430\" (UID: \"8877a37d-8255-48c2-ae08-424110bce430\") " Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.187044 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9xph\" (UniqueName: \"kubernetes.io/projected/a52c7385-4178-4038-93b0-5cd758958e80-kube-api-access-n9xph\") pod 
\"a52c7385-4178-4038-93b0-5cd758958e80\" (UID: \"a52c7385-4178-4038-93b0-5cd758958e80\") " Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.187132 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8877a37d-8255-48c2-ae08-424110bce430-kubelet-dir\") pod \"8877a37d-8255-48c2-ae08-424110bce430\" (UID: \"8877a37d-8255-48c2-ae08-424110bce430\") " Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.187485 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8877a37d-8255-48c2-ae08-424110bce430-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8877a37d-8255-48c2-ae08-424110bce430" (UID: "8877a37d-8255-48c2-ae08-424110bce430"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.208030 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8877a37d-8255-48c2-ae08-424110bce430-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8877a37d-8255-48c2-ae08-424110bce430" (UID: "8877a37d-8255-48c2-ae08-424110bce430"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.208105 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52c7385-4178-4038-93b0-5cd758958e80-kube-api-access-n9xph" (OuterVolumeSpecName: "kube-api-access-n9xph") pod "a52c7385-4178-4038-93b0-5cd758958e80" (UID: "a52c7385-4178-4038-93b0-5cd758958e80"). InnerVolumeSpecName "kube-api-access-n9xph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.288210 4897 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8877a37d-8255-48c2-ae08-424110bce430-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.288241 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8877a37d-8255-48c2-ae08-424110bce430-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.288255 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9xph\" (UniqueName: \"kubernetes.io/projected/a52c7385-4178-4038-93b0-5cd758958e80-kube-api-access-n9xph\") on node \"crc\" DevicePath \"\"" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.290054 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c5c958b59-g78sr" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.309115 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-547f5899ff-gjxf9" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.389794 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922-serving-cert\") pod \"e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922\" (UID: \"e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922\") " Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.389852 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922-proxy-ca-bundles\") pod \"e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922\" (UID: \"e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922\") " Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.389922 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922-config\") pod \"e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922\" (UID: \"e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922\") " Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.389951 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b1f2de5-5f60-475e-b75d-597c33c23110-client-ca\") pod \"2b1f2de5-5f60-475e-b75d-597c33c23110\" (UID: \"2b1f2de5-5f60-475e-b75d-597c33c23110\") " Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.389977 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922-kube-api-access-n6sqt\") pod \"e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922\" (UID: \"e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922\") " Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.390001 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-2tpmw\" (UniqueName: \"kubernetes.io/projected/2b1f2de5-5f60-475e-b75d-597c33c23110-kube-api-access-2tpmw\") pod \"2b1f2de5-5f60-475e-b75d-597c33c23110\" (UID: \"2b1f2de5-5f60-475e-b75d-597c33c23110\") " Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.390061 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b1f2de5-5f60-475e-b75d-597c33c23110-config\") pod \"2b1f2de5-5f60-475e-b75d-597c33c23110\" (UID: \"2b1f2de5-5f60-475e-b75d-597c33c23110\") " Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.390088 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922-client-ca\") pod \"e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922\" (UID: \"e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922\") " Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.390122 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b1f2de5-5f60-475e-b75d-597c33c23110-serving-cert\") pod \"2b1f2de5-5f60-475e-b75d-597c33c23110\" (UID: \"2b1f2de5-5f60-475e-b75d-597c33c23110\") " Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.391129 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b1f2de5-5f60-475e-b75d-597c33c23110-config" (OuterVolumeSpecName: "config") pod "2b1f2de5-5f60-475e-b75d-597c33c23110" (UID: "2b1f2de5-5f60-475e-b75d-597c33c23110"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.391246 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922-client-ca" (OuterVolumeSpecName: "client-ca") pod "e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922" (UID: "e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.391465 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922" (UID: "e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.391455 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922-config" (OuterVolumeSpecName: "config") pod "e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922" (UID: "e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.391695 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b1f2de5-5f60-475e-b75d-597c33c23110-client-ca" (OuterVolumeSpecName: "client-ca") pod "2b1f2de5-5f60-475e-b75d-597c33c23110" (UID: "2b1f2de5-5f60-475e-b75d-597c33c23110"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.394827 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b1f2de5-5f60-475e-b75d-597c33c23110-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2b1f2de5-5f60-475e-b75d-597c33c23110" (UID: "2b1f2de5-5f60-475e-b75d-597c33c23110"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.394897 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922" (UID: "e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.400787 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922-kube-api-access-n6sqt" (OuterVolumeSpecName: "kube-api-access-n6sqt") pod "e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922" (UID: "e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922"). InnerVolumeSpecName "kube-api-access-n6sqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.404517 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b1f2de5-5f60-475e-b75d-597c33c23110-kube-api-access-2tpmw" (OuterVolumeSpecName: "kube-api-access-2tpmw") pod "2b1f2de5-5f60-475e-b75d-597c33c23110" (UID: "2b1f2de5-5f60-475e-b75d-597c33c23110"). InnerVolumeSpecName "kube-api-access-2tpmw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:21:06 crc kubenswrapper[4897]: E0228 13:21:06.473062 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 28 13:21:06 crc kubenswrapper[4897]: E0228 13:21:06.473196 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xbsfd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProf
ile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-sv5dr_openshift-marketplace(c8e82c23-54f4-43a4-904b-4f90348580ac): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 13:21:06 crc kubenswrapper[4897]: E0228 13:21:06.475291 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-sv5dr" podUID="c8e82c23-54f4-43a4-904b-4f90348580ac" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.491494 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b1f2de5-5f60-475e-b75d-597c33c23110-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.491526 4897 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.491536 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b1f2de5-5f60-475e-b75d-597c33c23110-serving-cert\") on node \"crc\" 
DevicePath \"\"" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.491544 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.491552 4897 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.491562 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.491570 4897 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b1f2de5-5f60-475e-b75d-597c33c23110-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.491579 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922-kube-api-access-n6sqt\") on node \"crc\" DevicePath \"\"" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.491589 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tpmw\" (UniqueName: \"kubernetes.io/projected/2b1f2de5-5f60-475e-b75d-597c33c23110-kube-api-access-2tpmw\") on node \"crc\" DevicePath \"\"" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.519184 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-78cjq" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.519234 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-78cjq" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.745667 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8877a37d-8255-48c2-ae08-424110bce430","Type":"ContainerDied","Data":"40b235b6bef96705934f35eecf31841de909ac1f5054ba573149c18baf6486cb"} Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.745993 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40b235b6bef96705934f35eecf31841de909ac1f5054ba573149c18baf6486cb" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.745690 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.747216 4897 generic.go:334] "Generic (PLEG): container finished" podID="79743a51-c0b2-45b2-99d3-385e0b2f2c6f" containerID="81f88a37fe90da7973932a2d58c459ef49b3e4d51447e7d2ceb262c276716b5a" exitCode=0 Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.747274 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538078-hj8mj" event={"ID":"79743a51-c0b2-45b2-99d3-385e0b2f2c6f","Type":"ContainerDied","Data":"81f88a37fe90da7973932a2d58c459ef49b3e4d51447e7d2ceb262c276716b5a"} Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.749275 4897 generic.go:334] "Generic (PLEG): container finished" podID="2b1f2de5-5f60-475e-b75d-597c33c23110" containerID="45a7923d869ae4749796fbca58366bbce11b37bd534687d9edabc79c17cb5392" exitCode=0 Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.749358 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-547f5899ff-gjxf9" event={"ID":"2b1f2de5-5f60-475e-b75d-597c33c23110","Type":"ContainerDied","Data":"45a7923d869ae4749796fbca58366bbce11b37bd534687d9edabc79c17cb5392"} Feb 28 13:21:06 crc 
kubenswrapper[4897]: I0228 13:21:06.749379 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-547f5899ff-gjxf9" event={"ID":"2b1f2de5-5f60-475e-b75d-597c33c23110","Type":"ContainerDied","Data":"28c1a25f8f155d2565400cf2549f5fae95e94f9de56f974111a4dacaf92acc5a"} Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.749408 4897 scope.go:117] "RemoveContainer" containerID="45a7923d869ae4749796fbca58366bbce11b37bd534687d9edabc79c17cb5392" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.749523 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-547f5899ff-gjxf9" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.753194 4897 generic.go:334] "Generic (PLEG): container finished" podID="e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922" containerID="9dc717f2a53f14b966c2befda7641f13f71e6dd5feaf8df937424c547e7f9a64" exitCode=0 Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.753285 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5c5c958b59-g78sr" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.753450 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c5c958b59-g78sr" event={"ID":"e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922","Type":"ContainerDied","Data":"9dc717f2a53f14b966c2befda7641f13f71e6dd5feaf8df937424c547e7f9a64"} Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.753472 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c5c958b59-g78sr" event={"ID":"e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922","Type":"ContainerDied","Data":"dbd891aaf9e346c71efa281ab602b8871bbb2442b44ba2602dce4eb678229e9c"} Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.759817 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538080-qcrrw" event={"ID":"a52c7385-4178-4038-93b0-5cd758958e80","Type":"ContainerDied","Data":"b770bf9e791e0fcc0cd4ea675f2af48d6ce5428666c4097a80bb6978fcbb8065"} Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.759872 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b770bf9e791e0fcc0cd4ea675f2af48d6ce5428666c4097a80bb6978fcbb8065" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.759981 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538080-qcrrw" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.766629 4897 scope.go:117] "RemoveContainer" containerID="45a7923d869ae4749796fbca58366bbce11b37bd534687d9edabc79c17cb5392" Feb 28 13:21:06 crc kubenswrapper[4897]: E0228 13:21:06.767645 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45a7923d869ae4749796fbca58366bbce11b37bd534687d9edabc79c17cb5392\": container with ID starting with 45a7923d869ae4749796fbca58366bbce11b37bd534687d9edabc79c17cb5392 not found: ID does not exist" containerID="45a7923d869ae4749796fbca58366bbce11b37bd534687d9edabc79c17cb5392" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.767697 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45a7923d869ae4749796fbca58366bbce11b37bd534687d9edabc79c17cb5392"} err="failed to get container status \"45a7923d869ae4749796fbca58366bbce11b37bd534687d9edabc79c17cb5392\": rpc error: code = NotFound desc = could not find container \"45a7923d869ae4749796fbca58366bbce11b37bd534687d9edabc79c17cb5392\": container with ID starting with 45a7923d869ae4749796fbca58366bbce11b37bd534687d9edabc79c17cb5392 not found: ID does not exist" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.767725 4897 scope.go:117] "RemoveContainer" containerID="9dc717f2a53f14b966c2befda7641f13f71e6dd5feaf8df937424c547e7f9a64" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.784640 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.784621218 podStartE2EDuration="2.784621218s" podCreationTimestamp="2026-02-28 13:21:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:21:06.781988222 +0000 UTC m=+281.024308879" 
watchObservedRunningTime="2026-02-28 13:21:06.784621218 +0000 UTC m=+281.026941875" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.784898 4897 scope.go:117] "RemoveContainer" containerID="9dc717f2a53f14b966c2befda7641f13f71e6dd5feaf8df937424c547e7f9a64" Feb 28 13:21:06 crc kubenswrapper[4897]: E0228 13:21:06.785374 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dc717f2a53f14b966c2befda7641f13f71e6dd5feaf8df937424c547e7f9a64\": container with ID starting with 9dc717f2a53f14b966c2befda7641f13f71e6dd5feaf8df937424c547e7f9a64 not found: ID does not exist" containerID="9dc717f2a53f14b966c2befda7641f13f71e6dd5feaf8df937424c547e7f9a64" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.785414 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dc717f2a53f14b966c2befda7641f13f71e6dd5feaf8df937424c547e7f9a64"} err="failed to get container status \"9dc717f2a53f14b966c2befda7641f13f71e6dd5feaf8df937424c547e7f9a64\": rpc error: code = NotFound desc = could not find container \"9dc717f2a53f14b966c2befda7641f13f71e6dd5feaf8df937424c547e7f9a64\": container with ID starting with 9dc717f2a53f14b966c2befda7641f13f71e6dd5feaf8df937424c547e7f9a64 not found: ID does not exist" Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.806037 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5c5c958b59-g78sr"] Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.811481 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5c5c958b59-g78sr"] Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.822097 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-547f5899ff-gjxf9"] Feb 28 13:21:06 crc kubenswrapper[4897]: I0228 13:21:06.827052 4897 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-547f5899ff-gjxf9"] Feb 28 13:21:07 crc kubenswrapper[4897]: I0228 13:21:07.347358 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-j4slc" podUID="34293634-5315-4dac-94b9-258b99c8a9c1" containerName="registry-server" probeResult="failure" output=< Feb 28 13:21:07 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 28 13:21:07 crc kubenswrapper[4897]: > Feb 28 13:21:07 crc kubenswrapper[4897]: I0228 13:21:07.565140 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-78cjq" podUID="488e35b2-95c6-4499-be2f-5a2d15cdf5d4" containerName="registry-server" probeResult="failure" output=< Feb 28 13:21:07 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 28 13:21:07 crc kubenswrapper[4897]: > Feb 28 13:21:07 crc kubenswrapper[4897]: I0228 13:21:07.933149 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qxsqd" Feb 28 13:21:07 crc kubenswrapper[4897]: I0228 13:21:07.933227 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qxsqd" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.047367 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538078-hj8mj" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.109147 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntptc\" (UniqueName: \"kubernetes.io/projected/79743a51-c0b2-45b2-99d3-385e0b2f2c6f-kube-api-access-ntptc\") pod \"79743a51-c0b2-45b2-99d3-385e0b2f2c6f\" (UID: \"79743a51-c0b2-45b2-99d3-385e0b2f2c6f\") " Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.118451 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79743a51-c0b2-45b2-99d3-385e0b2f2c6f-kube-api-access-ntptc" (OuterVolumeSpecName: "kube-api-access-ntptc") pod "79743a51-c0b2-45b2-99d3-385e0b2f2c6f" (UID: "79743a51-c0b2-45b2-99d3-385e0b2f2c6f"). InnerVolumeSpecName "kube-api-access-ntptc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.210848 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntptc\" (UniqueName: \"kubernetes.io/projected/79743a51-c0b2-45b2-99d3-385e0b2f2c6f-kube-api-access-ntptc\") on node \"crc\" DevicePath \"\"" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.291401 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5c74d587d6-rxmx2"] Feb 28 13:21:08 crc kubenswrapper[4897]: E0228 13:21:08.291866 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b1f2de5-5f60-475e-b75d-597c33c23110" containerName="route-controller-manager" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.291881 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b1f2de5-5f60-475e-b75d-597c33c23110" containerName="route-controller-manager" Feb 28 13:21:08 crc kubenswrapper[4897]: E0228 13:21:08.292016 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8877a37d-8255-48c2-ae08-424110bce430" containerName="pruner" Feb 28 
13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.292023 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="8877a37d-8255-48c2-ae08-424110bce430" containerName="pruner" Feb 28 13:21:08 crc kubenswrapper[4897]: E0228 13:21:08.292044 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a52c7385-4178-4038-93b0-5cd758958e80" containerName="oc" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.292051 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="a52c7385-4178-4038-93b0-5cd758958e80" containerName="oc" Feb 28 13:21:08 crc kubenswrapper[4897]: E0228 13:21:08.292065 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922" containerName="controller-manager" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.292073 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922" containerName="controller-manager" Feb 28 13:21:08 crc kubenswrapper[4897]: E0228 13:21:08.292146 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79743a51-c0b2-45b2-99d3-385e0b2f2c6f" containerName="oc" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.292152 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="79743a51-c0b2-45b2-99d3-385e0b2f2c6f" containerName="oc" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.294867 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="79743a51-c0b2-45b2-99d3-385e0b2f2c6f" containerName="oc" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.294895 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="8877a37d-8255-48c2-ae08-424110bce430" containerName="pruner" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.294907 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b1f2de5-5f60-475e-b75d-597c33c23110" containerName="route-controller-manager" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.294922 4897 
memory_manager.go:354] "RemoveStaleState removing state" podUID="a52c7385-4178-4038-93b0-5cd758958e80" containerName="oc" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.294933 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922" containerName="controller-manager" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.298848 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c74d587d6-rxmx2" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.303204 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f577f78fd-kl8dd"] Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.319392 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.320037 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ef129d7-7447-49a2-91f2-ce21f4195a5e-client-ca\") pod \"controller-manager-5c74d587d6-rxmx2\" (UID: \"3ef129d7-7447-49a2-91f2-ce21f4195a5e\") " pod="openshift-controller-manager/controller-manager-5c74d587d6-rxmx2" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.320117 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44p55\" (UniqueName: \"kubernetes.io/projected/3ef129d7-7447-49a2-91f2-ce21f4195a5e-kube-api-access-44p55\") pod \"controller-manager-5c74d587d6-rxmx2\" (UID: \"3ef129d7-7447-49a2-91f2-ce21f4195a5e\") " pod="openshift-controller-manager/controller-manager-5c74d587d6-rxmx2" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.320152 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3ef129d7-7447-49a2-91f2-ce21f4195a5e-config\") pod \"controller-manager-5c74d587d6-rxmx2\" (UID: \"3ef129d7-7447-49a2-91f2-ce21f4195a5e\") " pod="openshift-controller-manager/controller-manager-5c74d587d6-rxmx2" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.320414 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ef129d7-7447-49a2-91f2-ce21f4195a5e-serving-cert\") pod \"controller-manager-5c74d587d6-rxmx2\" (UID: \"3ef129d7-7447-49a2-91f2-ce21f4195a5e\") " pod="openshift-controller-manager/controller-manager-5c74d587d6-rxmx2" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.320442 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3ef129d7-7447-49a2-91f2-ce21f4195a5e-proxy-ca-bundles\") pod \"controller-manager-5c74d587d6-rxmx2\" (UID: \"3ef129d7-7447-49a2-91f2-ce21f4195a5e\") " pod="openshift-controller-manager/controller-manager-5c74d587d6-rxmx2" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.320856 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.320975 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.320977 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.321252 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.321544 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f577f78fd-kl8dd" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.321736 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.325341 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.325785 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.326195 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.328158 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c74d587d6-rxmx2"] Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.330581 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.330844 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.330947 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.331040 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.348376 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6f577f78fd-kl8dd"] Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.421952 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cddab55-a16b-4a91-a241-354b72f176b9-config\") pod \"route-controller-manager-6f577f78fd-kl8dd\" (UID: \"9cddab55-a16b-4a91-a241-354b72f176b9\") " pod="openshift-route-controller-manager/route-controller-manager-6f577f78fd-kl8dd" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.422032 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9cddab55-a16b-4a91-a241-354b72f176b9-serving-cert\") pod \"route-controller-manager-6f577f78fd-kl8dd\" (UID: \"9cddab55-a16b-4a91-a241-354b72f176b9\") " pod="openshift-route-controller-manager/route-controller-manager-6f577f78fd-kl8dd" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.422369 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9cddab55-a16b-4a91-a241-354b72f176b9-client-ca\") pod \"route-controller-manager-6f577f78fd-kl8dd\" (UID: \"9cddab55-a16b-4a91-a241-354b72f176b9\") " pod="openshift-route-controller-manager/route-controller-manager-6f577f78fd-kl8dd" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.422447 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ef129d7-7447-49a2-91f2-ce21f4195a5e-serving-cert\") pod \"controller-manager-5c74d587d6-rxmx2\" (UID: \"3ef129d7-7447-49a2-91f2-ce21f4195a5e\") " pod="openshift-controller-manager/controller-manager-5c74d587d6-rxmx2" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.422499 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3ef129d7-7447-49a2-91f2-ce21f4195a5e-proxy-ca-bundles\") pod \"controller-manager-5c74d587d6-rxmx2\" (UID: \"3ef129d7-7447-49a2-91f2-ce21f4195a5e\") " pod="openshift-controller-manager/controller-manager-5c74d587d6-rxmx2" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.422575 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wldvm\" (UniqueName: \"kubernetes.io/projected/9cddab55-a16b-4a91-a241-354b72f176b9-kube-api-access-wldvm\") pod \"route-controller-manager-6f577f78fd-kl8dd\" (UID: \"9cddab55-a16b-4a91-a241-354b72f176b9\") " pod="openshift-route-controller-manager/route-controller-manager-6f577f78fd-kl8dd" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.422612 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ef129d7-7447-49a2-91f2-ce21f4195a5e-client-ca\") pod \"controller-manager-5c74d587d6-rxmx2\" (UID: \"3ef129d7-7447-49a2-91f2-ce21f4195a5e\") " pod="openshift-controller-manager/controller-manager-5c74d587d6-rxmx2" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.422643 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44p55\" (UniqueName: \"kubernetes.io/projected/3ef129d7-7447-49a2-91f2-ce21f4195a5e-kube-api-access-44p55\") pod \"controller-manager-5c74d587d6-rxmx2\" (UID: \"3ef129d7-7447-49a2-91f2-ce21f4195a5e\") " pod="openshift-controller-manager/controller-manager-5c74d587d6-rxmx2" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.422691 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ef129d7-7447-49a2-91f2-ce21f4195a5e-config\") pod \"controller-manager-5c74d587d6-rxmx2\" (UID: \"3ef129d7-7447-49a2-91f2-ce21f4195a5e\") " 
pod="openshift-controller-manager/controller-manager-5c74d587d6-rxmx2" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.424340 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ef129d7-7447-49a2-91f2-ce21f4195a5e-client-ca\") pod \"controller-manager-5c74d587d6-rxmx2\" (UID: \"3ef129d7-7447-49a2-91f2-ce21f4195a5e\") " pod="openshift-controller-manager/controller-manager-5c74d587d6-rxmx2" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.424383 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3ef129d7-7447-49a2-91f2-ce21f4195a5e-proxy-ca-bundles\") pod \"controller-manager-5c74d587d6-rxmx2\" (UID: \"3ef129d7-7447-49a2-91f2-ce21f4195a5e\") " pod="openshift-controller-manager/controller-manager-5c74d587d6-rxmx2" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.424701 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ef129d7-7447-49a2-91f2-ce21f4195a5e-config\") pod \"controller-manager-5c74d587d6-rxmx2\" (UID: \"3ef129d7-7447-49a2-91f2-ce21f4195a5e\") " pod="openshift-controller-manager/controller-manager-5c74d587d6-rxmx2" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.432265 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ef129d7-7447-49a2-91f2-ce21f4195a5e-serving-cert\") pod \"controller-manager-5c74d587d6-rxmx2\" (UID: \"3ef129d7-7447-49a2-91f2-ce21f4195a5e\") " pod="openshift-controller-manager/controller-manager-5c74d587d6-rxmx2" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.451999 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44p55\" (UniqueName: \"kubernetes.io/projected/3ef129d7-7447-49a2-91f2-ce21f4195a5e-kube-api-access-44p55\") pod 
\"controller-manager-5c74d587d6-rxmx2\" (UID: \"3ef129d7-7447-49a2-91f2-ce21f4195a5e\") " pod="openshift-controller-manager/controller-manager-5c74d587d6-rxmx2" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.465653 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b1f2de5-5f60-475e-b75d-597c33c23110" path="/var/lib/kubelet/pods/2b1f2de5-5f60-475e-b75d-597c33c23110/volumes" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.466371 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922" path="/var/lib/kubelet/pods/e5c2b9a6-accb-47c5-8d99-7c5d6fd0e922/volumes" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.524198 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wldvm\" (UniqueName: \"kubernetes.io/projected/9cddab55-a16b-4a91-a241-354b72f176b9-kube-api-access-wldvm\") pod \"route-controller-manager-6f577f78fd-kl8dd\" (UID: \"9cddab55-a16b-4a91-a241-354b72f176b9\") " pod="openshift-route-controller-manager/route-controller-manager-6f577f78fd-kl8dd" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.524266 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cddab55-a16b-4a91-a241-354b72f176b9-config\") pod \"route-controller-manager-6f577f78fd-kl8dd\" (UID: \"9cddab55-a16b-4a91-a241-354b72f176b9\") " pod="openshift-route-controller-manager/route-controller-manager-6f577f78fd-kl8dd" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.524291 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9cddab55-a16b-4a91-a241-354b72f176b9-serving-cert\") pod \"route-controller-manager-6f577f78fd-kl8dd\" (UID: \"9cddab55-a16b-4a91-a241-354b72f176b9\") " pod="openshift-route-controller-manager/route-controller-manager-6f577f78fd-kl8dd" Feb 28 13:21:08 crc 
kubenswrapper[4897]: I0228 13:21:08.524328 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9cddab55-a16b-4a91-a241-354b72f176b9-client-ca\") pod \"route-controller-manager-6f577f78fd-kl8dd\" (UID: \"9cddab55-a16b-4a91-a241-354b72f176b9\") " pod="openshift-route-controller-manager/route-controller-manager-6f577f78fd-kl8dd" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.525181 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9cddab55-a16b-4a91-a241-354b72f176b9-client-ca\") pod \"route-controller-manager-6f577f78fd-kl8dd\" (UID: \"9cddab55-a16b-4a91-a241-354b72f176b9\") " pod="openshift-route-controller-manager/route-controller-manager-6f577f78fd-kl8dd" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.525841 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cddab55-a16b-4a91-a241-354b72f176b9-config\") pod \"route-controller-manager-6f577f78fd-kl8dd\" (UID: \"9cddab55-a16b-4a91-a241-354b72f176b9\") " pod="openshift-route-controller-manager/route-controller-manager-6f577f78fd-kl8dd" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.530998 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9cddab55-a16b-4a91-a241-354b72f176b9-serving-cert\") pod \"route-controller-manager-6f577f78fd-kl8dd\" (UID: \"9cddab55-a16b-4a91-a241-354b72f176b9\") " pod="openshift-route-controller-manager/route-controller-manager-6f577f78fd-kl8dd" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.552013 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wldvm\" (UniqueName: \"kubernetes.io/projected/9cddab55-a16b-4a91-a241-354b72f176b9-kube-api-access-wldvm\") pod \"route-controller-manager-6f577f78fd-kl8dd\" (UID: 
\"9cddab55-a16b-4a91-a241-354b72f176b9\") " pod="openshift-route-controller-manager/route-controller-manager-6f577f78fd-kl8dd" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.643567 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c74d587d6-rxmx2" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.646075 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f577f78fd-kl8dd" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.781021 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538078-hj8mj" event={"ID":"79743a51-c0b2-45b2-99d3-385e0b2f2c6f","Type":"ContainerDied","Data":"6e4d4d4cf90394f6789f2122a9371c916a9dfa97bae501d213646b0008c77525"} Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.781114 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e4d4d4cf90394f6789f2122a9371c916a9dfa97bae501d213646b0008c77525" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.781120 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538078-hj8mj" Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.936240 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c74d587d6-rxmx2"] Feb 28 13:21:08 crc kubenswrapper[4897]: W0228 13:21:08.944395 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ef129d7_7447_49a2_91f2_ce21f4195a5e.slice/crio-3d4a2738f03c5c33b8662640b60732ca69b33a88d828e46f2c2ae15afe3a5ac1 WatchSource:0}: Error finding container 3d4a2738f03c5c33b8662640b60732ca69b33a88d828e46f2c2ae15afe3a5ac1: Status 404 returned error can't find the container with id 3d4a2738f03c5c33b8662640b60732ca69b33a88d828e46f2c2ae15afe3a5ac1 Feb 28 13:21:08 crc kubenswrapper[4897]: I0228 13:21:08.996502 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f577f78fd-kl8dd"] Feb 28 13:21:09 crc kubenswrapper[4897]: I0228 13:21:09.001469 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qxsqd" podUID="a0865f08-bed5-4fbb-ab37-582862fb0616" containerName="registry-server" probeResult="failure" output=< Feb 28 13:21:09 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 28 13:21:09 crc kubenswrapper[4897]: > Feb 28 13:21:09 crc kubenswrapper[4897]: W0228 13:21:09.003616 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9cddab55_a16b_4a91_a241_354b72f176b9.slice/crio-37bc6a0128ec6a54d8112022eebde68ea30b32c88356eadc45c383ed58938b4a WatchSource:0}: Error finding container 37bc6a0128ec6a54d8112022eebde68ea30b32c88356eadc45c383ed58938b4a: Status 404 returned error can't find the container with id 37bc6a0128ec6a54d8112022eebde68ea30b32c88356eadc45c383ed58938b4a Feb 28 13:21:09 crc 
kubenswrapper[4897]: I0228 13:21:09.791024 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f577f78fd-kl8dd" event={"ID":"9cddab55-a16b-4a91-a241-354b72f176b9","Type":"ContainerStarted","Data":"edfad73337ce8289428d28234080a0322d95e5ce6d279d0f63eecc9225443b72"} Feb 28 13:21:09 crc kubenswrapper[4897]: I0228 13:21:09.791094 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f577f78fd-kl8dd" event={"ID":"9cddab55-a16b-4a91-a241-354b72f176b9","Type":"ContainerStarted","Data":"37bc6a0128ec6a54d8112022eebde68ea30b32c88356eadc45c383ed58938b4a"} Feb 28 13:21:09 crc kubenswrapper[4897]: I0228 13:21:09.793558 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c74d587d6-rxmx2" event={"ID":"3ef129d7-7447-49a2-91f2-ce21f4195a5e","Type":"ContainerStarted","Data":"3d4a2738f03c5c33b8662640b60732ca69b33a88d828e46f2c2ae15afe3a5ac1"} Feb 28 13:21:10 crc kubenswrapper[4897]: I0228 13:21:10.802233 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c74d587d6-rxmx2" event={"ID":"3ef129d7-7447-49a2-91f2-ce21f4195a5e","Type":"ContainerStarted","Data":"3f814b4e437ffcf509b869e5b317e8895d2acacd3857f525e95d83316c6b6325"} Feb 28 13:21:10 crc kubenswrapper[4897]: I0228 13:21:10.803213 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6f577f78fd-kl8dd" Feb 28 13:21:10 crc kubenswrapper[4897]: I0228 13:21:10.803337 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5c74d587d6-rxmx2" Feb 28 13:21:10 crc kubenswrapper[4897]: I0228 13:21:10.809664 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5c74d587d6-rxmx2" Feb 
28 13:21:10 crc kubenswrapper[4897]: I0228 13:21:10.810141 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6f577f78fd-kl8dd" Feb 28 13:21:10 crc kubenswrapper[4897]: I0228 13:21:10.825783 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5c74d587d6-rxmx2" podStartSLOduration=6.825768141 podStartE2EDuration="6.825768141s" podCreationTimestamp="2026-02-28 13:21:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:21:10.823642058 +0000 UTC m=+285.065962715" watchObservedRunningTime="2026-02-28 13:21:10.825768141 +0000 UTC m=+285.068088798" Feb 28 13:21:10 crc kubenswrapper[4897]: I0228 13:21:10.849597 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6f577f78fd-kl8dd" podStartSLOduration=6.84957447 podStartE2EDuration="6.84957447s" podCreationTimestamp="2026-02-28 13:21:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:21:10.838642525 +0000 UTC m=+285.080963192" watchObservedRunningTime="2026-02-28 13:21:10.84957447 +0000 UTC m=+285.091895127" Feb 28 13:21:14 crc kubenswrapper[4897]: I0228 13:21:14.207399 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bfpj4" Feb 28 13:21:14 crc kubenswrapper[4897]: I0228 13:21:14.209089 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bfpj4" Feb 28 13:21:14 crc kubenswrapper[4897]: I0228 13:21:14.264800 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bfpj4" Feb 28 13:21:14 
crc kubenswrapper[4897]: I0228 13:21:14.883237 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bfpj4" Feb 28 13:21:16 crc kubenswrapper[4897]: I0228 13:21:16.240600 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j4slc" Feb 28 13:21:16 crc kubenswrapper[4897]: I0228 13:21:16.282806 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j4slc" Feb 28 13:21:16 crc kubenswrapper[4897]: I0228 13:21:16.588391 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-78cjq" Feb 28 13:21:16 crc kubenswrapper[4897]: I0228 13:21:16.647263 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-78cjq" Feb 28 13:21:17 crc kubenswrapper[4897]: I0228 13:21:17.977973 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qxsqd" Feb 28 13:21:18 crc kubenswrapper[4897]: I0228 13:21:18.027150 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qxsqd" Feb 28 13:21:18 crc kubenswrapper[4897]: I0228 13:21:18.895048 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-78cjq"] Feb 28 13:21:18 crc kubenswrapper[4897]: I0228 13:21:18.895324 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-78cjq" podUID="488e35b2-95c6-4499-be2f-5a2d15cdf5d4" containerName="registry-server" containerID="cri-o://ccdc6d7f258364deceeac741fe44bd4ab957785aa67728fbee3350dec13779ce" gracePeriod=2 Feb 28 13:21:19 crc kubenswrapper[4897]: I0228 13:21:19.867661 4897 generic.go:334] "Generic (PLEG): container finished" 
podID="488e35b2-95c6-4499-be2f-5a2d15cdf5d4" containerID="ccdc6d7f258364deceeac741fe44bd4ab957785aa67728fbee3350dec13779ce" exitCode=0 Feb 28 13:21:19 crc kubenswrapper[4897]: I0228 13:21:19.867869 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-78cjq" event={"ID":"488e35b2-95c6-4499-be2f-5a2d15cdf5d4","Type":"ContainerDied","Data":"ccdc6d7f258364deceeac741fe44bd4ab957785aa67728fbee3350dec13779ce"} Feb 28 13:21:20 crc kubenswrapper[4897]: E0228 13:21:20.075151 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-sv5dr" podUID="c8e82c23-54f4-43a4-904b-4f90348580ac" Feb 28 13:21:20 crc kubenswrapper[4897]: I0228 13:21:20.698176 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qxsqd"] Feb 28 13:21:20 crc kubenswrapper[4897]: I0228 13:21:20.699206 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qxsqd" podUID="a0865f08-bed5-4fbb-ab37-582862fb0616" containerName="registry-server" containerID="cri-o://908b942f242e9e943991a7f708eeef977013caecb7cf540ab42ca24fe731469b" gracePeriod=2 Feb 28 13:21:20 crc kubenswrapper[4897]: I0228 13:21:20.706995 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-78cjq" Feb 28 13:21:20 crc kubenswrapper[4897]: I0228 13:21:20.837184 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/488e35b2-95c6-4499-be2f-5a2d15cdf5d4-catalog-content\") pod \"488e35b2-95c6-4499-be2f-5a2d15cdf5d4\" (UID: \"488e35b2-95c6-4499-be2f-5a2d15cdf5d4\") " Feb 28 13:21:20 crc kubenswrapper[4897]: I0228 13:21:20.837268 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/488e35b2-95c6-4499-be2f-5a2d15cdf5d4-utilities\") pod \"488e35b2-95c6-4499-be2f-5a2d15cdf5d4\" (UID: \"488e35b2-95c6-4499-be2f-5a2d15cdf5d4\") " Feb 28 13:21:20 crc kubenswrapper[4897]: I0228 13:21:20.837375 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7cv6\" (UniqueName: \"kubernetes.io/projected/488e35b2-95c6-4499-be2f-5a2d15cdf5d4-kube-api-access-l7cv6\") pod \"488e35b2-95c6-4499-be2f-5a2d15cdf5d4\" (UID: \"488e35b2-95c6-4499-be2f-5a2d15cdf5d4\") " Feb 28 13:21:20 crc kubenswrapper[4897]: I0228 13:21:20.838229 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/488e35b2-95c6-4499-be2f-5a2d15cdf5d4-utilities" (OuterVolumeSpecName: "utilities") pod "488e35b2-95c6-4499-be2f-5a2d15cdf5d4" (UID: "488e35b2-95c6-4499-be2f-5a2d15cdf5d4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:21:20 crc kubenswrapper[4897]: I0228 13:21:20.844020 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/488e35b2-95c6-4499-be2f-5a2d15cdf5d4-kube-api-access-l7cv6" (OuterVolumeSpecName: "kube-api-access-l7cv6") pod "488e35b2-95c6-4499-be2f-5a2d15cdf5d4" (UID: "488e35b2-95c6-4499-be2f-5a2d15cdf5d4"). InnerVolumeSpecName "kube-api-access-l7cv6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:21:20 crc kubenswrapper[4897]: I0228 13:21:20.868891 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/488e35b2-95c6-4499-be2f-5a2d15cdf5d4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "488e35b2-95c6-4499-be2f-5a2d15cdf5d4" (UID: "488e35b2-95c6-4499-be2f-5a2d15cdf5d4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:21:20 crc kubenswrapper[4897]: I0228 13:21:20.876772 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-78cjq" event={"ID":"488e35b2-95c6-4499-be2f-5a2d15cdf5d4","Type":"ContainerDied","Data":"52784002cf7dc5ad99990469b2235a950f38ac3917c63b25d3600f515726bdc3"} Feb 28 13:21:20 crc kubenswrapper[4897]: I0228 13:21:20.876820 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-78cjq" Feb 28 13:21:20 crc kubenswrapper[4897]: I0228 13:21:20.876870 4897 scope.go:117] "RemoveContainer" containerID="ccdc6d7f258364deceeac741fe44bd4ab957785aa67728fbee3350dec13779ce" Feb 28 13:21:20 crc kubenswrapper[4897]: I0228 13:21:20.900293 4897 scope.go:117] "RemoveContainer" containerID="8ef8f3a76e0ad39edf9489b3ef6e4f307bcfc81847722d50808fc3684a9310c7" Feb 28 13:21:20 crc kubenswrapper[4897]: I0228 13:21:20.910821 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-78cjq"] Feb 28 13:21:20 crc kubenswrapper[4897]: I0228 13:21:20.918998 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-78cjq"] Feb 28 13:21:20 crc kubenswrapper[4897]: I0228 13:21:20.928508 4897 scope.go:117] "RemoveContainer" containerID="70606a1da6074a82386ff1f801b04303b4ac060b99e46385ab8a1af82f9e1156" Feb 28 13:21:20 crc kubenswrapper[4897]: I0228 13:21:20.939539 4897 reconciler_common.go:293] 
"Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/488e35b2-95c6-4499-be2f-5a2d15cdf5d4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 13:21:20 crc kubenswrapper[4897]: I0228 13:21:20.939581 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/488e35b2-95c6-4499-be2f-5a2d15cdf5d4-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 13:21:20 crc kubenswrapper[4897]: I0228 13:21:20.939602 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7cv6\" (UniqueName: \"kubernetes.io/projected/488e35b2-95c6-4499-be2f-5a2d15cdf5d4-kube-api-access-l7cv6\") on node \"crc\" DevicePath \"\"" Feb 28 13:21:21 crc kubenswrapper[4897]: I0228 13:21:21.886230 4897 generic.go:334] "Generic (PLEG): container finished" podID="a0865f08-bed5-4fbb-ab37-582862fb0616" containerID="908b942f242e9e943991a7f708eeef977013caecb7cf540ab42ca24fe731469b" exitCode=0 Feb 28 13:21:21 crc kubenswrapper[4897]: I0228 13:21:21.886383 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qxsqd" event={"ID":"a0865f08-bed5-4fbb-ab37-582862fb0616","Type":"ContainerDied","Data":"908b942f242e9e943991a7f708eeef977013caecb7cf540ab42ca24fe731469b"} Feb 28 13:21:21 crc kubenswrapper[4897]: I0228 13:21:21.889304 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-whbtd" event={"ID":"fa5ec60f-f348-43b8-8ef2-9caafd08cb0d","Type":"ContainerStarted","Data":"2ee52cddfadec3fc25b4374df01dec742c50310d01c8a68e2b2317a624b76f93"} Feb 28 13:21:21 crc kubenswrapper[4897]: I0228 13:21:21.891083 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q9d2n" event={"ID":"657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5","Type":"ContainerStarted","Data":"36bee71be4b1a87f45a58177cecb959e13c93e33604e5adc5308fed3e67f5415"} Feb 28 13:21:22 crc kubenswrapper[4897]: I0228 
13:21:22.236944 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qxsqd" Feb 28 13:21:22 crc kubenswrapper[4897]: I0228 13:21:22.256155 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0865f08-bed5-4fbb-ab37-582862fb0616-utilities\") pod \"a0865f08-bed5-4fbb-ab37-582862fb0616\" (UID: \"a0865f08-bed5-4fbb-ab37-582862fb0616\") " Feb 28 13:21:22 crc kubenswrapper[4897]: I0228 13:21:22.256227 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0865f08-bed5-4fbb-ab37-582862fb0616-catalog-content\") pod \"a0865f08-bed5-4fbb-ab37-582862fb0616\" (UID: \"a0865f08-bed5-4fbb-ab37-582862fb0616\") " Feb 28 13:21:22 crc kubenswrapper[4897]: I0228 13:21:22.256270 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pt6h\" (UniqueName: \"kubernetes.io/projected/a0865f08-bed5-4fbb-ab37-582862fb0616-kube-api-access-4pt6h\") pod \"a0865f08-bed5-4fbb-ab37-582862fb0616\" (UID: \"a0865f08-bed5-4fbb-ab37-582862fb0616\") " Feb 28 13:21:22 crc kubenswrapper[4897]: I0228 13:21:22.257185 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0865f08-bed5-4fbb-ab37-582862fb0616-utilities" (OuterVolumeSpecName: "utilities") pod "a0865f08-bed5-4fbb-ab37-582862fb0616" (UID: "a0865f08-bed5-4fbb-ab37-582862fb0616"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:21:22 crc kubenswrapper[4897]: I0228 13:21:22.267657 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0865f08-bed5-4fbb-ab37-582862fb0616-kube-api-access-4pt6h" (OuterVolumeSpecName: "kube-api-access-4pt6h") pod "a0865f08-bed5-4fbb-ab37-582862fb0616" (UID: "a0865f08-bed5-4fbb-ab37-582862fb0616"). InnerVolumeSpecName "kube-api-access-4pt6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:21:22 crc kubenswrapper[4897]: I0228 13:21:22.358064 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0865f08-bed5-4fbb-ab37-582862fb0616-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 13:21:22 crc kubenswrapper[4897]: I0228 13:21:22.358166 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pt6h\" (UniqueName: \"kubernetes.io/projected/a0865f08-bed5-4fbb-ab37-582862fb0616-kube-api-access-4pt6h\") on node \"crc\" DevicePath \"\"" Feb 28 13:21:22 crc kubenswrapper[4897]: I0228 13:21:22.393594 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0865f08-bed5-4fbb-ab37-582862fb0616-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0865f08-bed5-4fbb-ab37-582862fb0616" (UID: "a0865f08-bed5-4fbb-ab37-582862fb0616"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:21:22 crc kubenswrapper[4897]: I0228 13:21:22.460264 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0865f08-bed5-4fbb-ab37-582862fb0616-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 13:21:22 crc kubenswrapper[4897]: I0228 13:21:22.463483 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="488e35b2-95c6-4499-be2f-5a2d15cdf5d4" path="/var/lib/kubelet/pods/488e35b2-95c6-4499-be2f-5a2d15cdf5d4/volumes" Feb 28 13:21:22 crc kubenswrapper[4897]: I0228 13:21:22.899812 4897 generic.go:334] "Generic (PLEG): container finished" podID="657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5" containerID="36bee71be4b1a87f45a58177cecb959e13c93e33604e5adc5308fed3e67f5415" exitCode=0 Feb 28 13:21:22 crc kubenswrapper[4897]: I0228 13:21:22.899978 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q9d2n" event={"ID":"657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5","Type":"ContainerDied","Data":"36bee71be4b1a87f45a58177cecb959e13c93e33604e5adc5308fed3e67f5415"} Feb 28 13:21:22 crc kubenswrapper[4897]: I0228 13:21:22.903668 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qxsqd" event={"ID":"a0865f08-bed5-4fbb-ab37-582862fb0616","Type":"ContainerDied","Data":"38eea4249cb1b95915ee141b34ba9ab3088117c538a8d32d694fb368666550ae"} Feb 28 13:21:22 crc kubenswrapper[4897]: I0228 13:21:22.903717 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qxsqd" Feb 28 13:21:22 crc kubenswrapper[4897]: I0228 13:21:22.903743 4897 scope.go:117] "RemoveContainer" containerID="908b942f242e9e943991a7f708eeef977013caecb7cf540ab42ca24fe731469b" Feb 28 13:21:22 crc kubenswrapper[4897]: I0228 13:21:22.910160 4897 generic.go:334] "Generic (PLEG): container finished" podID="fa5ec60f-f348-43b8-8ef2-9caafd08cb0d" containerID="2ee52cddfadec3fc25b4374df01dec742c50310d01c8a68e2b2317a624b76f93" exitCode=0 Feb 28 13:21:22 crc kubenswrapper[4897]: I0228 13:21:22.910189 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-whbtd" event={"ID":"fa5ec60f-f348-43b8-8ef2-9caafd08cb0d","Type":"ContainerDied","Data":"2ee52cddfadec3fc25b4374df01dec742c50310d01c8a68e2b2317a624b76f93"} Feb 28 13:21:22 crc kubenswrapper[4897]: I0228 13:21:22.941056 4897 scope.go:117] "RemoveContainer" containerID="39c690dd4eaa0f2b66fe44ca9ed86bcc6d3a57f2a117be0314e7ac60f7d7bc28" Feb 28 13:21:22 crc kubenswrapper[4897]: I0228 13:21:22.948078 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qxsqd"] Feb 28 13:21:22 crc kubenswrapper[4897]: I0228 13:21:22.948135 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qxsqd"] Feb 28 13:21:22 crc kubenswrapper[4897]: I0228 13:21:22.985996 4897 scope.go:117] "RemoveContainer" containerID="55729215e7744bd24ed0f0fb9f35e8a428629eca1ab6ce299a83e1f8a3b60d67" Feb 28 13:21:23 crc kubenswrapper[4897]: I0228 13:21:23.575165 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-84hkx"] Feb 28 13:21:23 crc kubenswrapper[4897]: I0228 13:21:23.918552 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-whbtd" 
event={"ID":"fa5ec60f-f348-43b8-8ef2-9caafd08cb0d","Type":"ContainerStarted","Data":"b0d8ce8d3ea5500459ce0969b08e6ce9296d781d7dc06b378b71b0a1d4bd3e45"} Feb 28 13:21:23 crc kubenswrapper[4897]: I0228 13:21:23.920110 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q9d2n" event={"ID":"657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5","Type":"ContainerStarted","Data":"ce4cbc9b3c4faed6042191200756ad2b2cc18b61d8f9e03d09067535b16a9a92"} Feb 28 13:21:23 crc kubenswrapper[4897]: I0228 13:21:23.937686 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-whbtd" podStartSLOduration=2.6021013870000003 podStartE2EDuration="59.937667273s" podCreationTimestamp="2026-02-28 13:20:24 +0000 UTC" firstStartedPulling="2026-02-28 13:20:26.228987828 +0000 UTC m=+240.471308485" lastFinishedPulling="2026-02-28 13:21:23.564553714 +0000 UTC m=+297.806874371" observedRunningTime="2026-02-28 13:21:23.934210626 +0000 UTC m=+298.176531283" watchObservedRunningTime="2026-02-28 13:21:23.937667273 +0000 UTC m=+298.179987930" Feb 28 13:21:23 crc kubenswrapper[4897]: I0228 13:21:23.949350 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-q9d2n" podStartSLOduration=2.67451939 podStartE2EDuration="59.949334647s" podCreationTimestamp="2026-02-28 13:20:24 +0000 UTC" firstStartedPulling="2026-02-28 13:20:26.265941628 +0000 UTC m=+240.508262295" lastFinishedPulling="2026-02-28 13:21:23.540756875 +0000 UTC m=+297.783077552" observedRunningTime="2026-02-28 13:21:23.949139002 +0000 UTC m=+298.191459669" watchObservedRunningTime="2026-02-28 13:21:23.949334647 +0000 UTC m=+298.191655304" Feb 28 13:21:24 crc kubenswrapper[4897]: I0228 13:21:24.463628 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0865f08-bed5-4fbb-ab37-582862fb0616" path="/var/lib/kubelet/pods/a0865f08-bed5-4fbb-ab37-582862fb0616/volumes" Feb 
28 13:21:24 crc kubenswrapper[4897]: I0228 13:21:24.651074 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q9d2n" Feb 28 13:21:24 crc kubenswrapper[4897]: I0228 13:21:24.651125 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-q9d2n" Feb 28 13:21:24 crc kubenswrapper[4897]: I0228 13:21:24.826114 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-whbtd" Feb 28 13:21:24 crc kubenswrapper[4897]: I0228 13:21:24.826185 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-whbtd" Feb 28 13:21:25 crc kubenswrapper[4897]: I0228 13:21:25.700777 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-q9d2n" podUID="657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5" containerName="registry-server" probeResult="failure" output=< Feb 28 13:21:25 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 28 13:21:25 crc kubenswrapper[4897]: > Feb 28 13:21:25 crc kubenswrapper[4897]: I0228 13:21:25.864192 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-whbtd" podUID="fa5ec60f-f348-43b8-8ef2-9caafd08cb0d" containerName="registry-server" probeResult="failure" output=< Feb 28 13:21:25 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 28 13:21:25 crc kubenswrapper[4897]: > Feb 28 13:21:33 crc kubenswrapper[4897]: I0228 13:21:33.370625 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 13:21:33 crc kubenswrapper[4897]: I0228 
13:21:33.371395 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 13:21:33 crc kubenswrapper[4897]: I0228 13:21:33.371461 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brq22" Feb 28 13:21:33 crc kubenswrapper[4897]: I0228 13:21:33.372928 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"da8311a9c42869f106fa19f0c2268e4146c9732ea75e1282d537114b54d8c20d"} pod="openshift-machine-config-operator/machine-config-daemon-brq22" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 13:21:33 crc kubenswrapper[4897]: I0228 13:21:33.372998 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" containerID="cri-o://da8311a9c42869f106fa19f0c2268e4146c9732ea75e1282d537114b54d8c20d" gracePeriod=600 Feb 28 13:21:33 crc kubenswrapper[4897]: I0228 13:21:33.975527 4897 generic.go:334] "Generic (PLEG): container finished" podID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerID="da8311a9c42869f106fa19f0c2268e4146c9732ea75e1282d537114b54d8c20d" exitCode=0 Feb 28 13:21:33 crc kubenswrapper[4897]: I0228 13:21:33.975578 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerDied","Data":"da8311a9c42869f106fa19f0c2268e4146c9732ea75e1282d537114b54d8c20d"} Feb 28 13:21:33 crc 
kubenswrapper[4897]: I0228 13:21:33.975801 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerStarted","Data":"2066137a00e095b0ce2896f3008520e157182f7fcabc5b0857bfc026f772801b"} Feb 28 13:21:33 crc kubenswrapper[4897]: I0228 13:21:33.977360 4897 generic.go:334] "Generic (PLEG): container finished" podID="c8e82c23-54f4-43a4-904b-4f90348580ac" containerID="d1d5d17426d9b6bd37c8e5c70181b7917955da21d4eeffe81d8ab9ed62f04a8f" exitCode=0 Feb 28 13:21:33 crc kubenswrapper[4897]: I0228 13:21:33.977386 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sv5dr" event={"ID":"c8e82c23-54f4-43a4-904b-4f90348580ac","Type":"ContainerDied","Data":"d1d5d17426d9b6bd37c8e5c70181b7917955da21d4eeffe81d8ab9ed62f04a8f"} Feb 28 13:21:34 crc kubenswrapper[4897]: I0228 13:21:34.717645 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-q9d2n" Feb 28 13:21:34 crc kubenswrapper[4897]: I0228 13:21:34.756211 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-q9d2n" Feb 28 13:21:34 crc kubenswrapper[4897]: I0228 13:21:34.883244 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-whbtd" Feb 28 13:21:34 crc kubenswrapper[4897]: I0228 13:21:34.925623 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-whbtd" Feb 28 13:21:34 crc kubenswrapper[4897]: I0228 13:21:34.989675 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sv5dr" event={"ID":"c8e82c23-54f4-43a4-904b-4f90348580ac","Type":"ContainerStarted","Data":"e47307b7d312832ba3be229fcca49d16ab0eed8540702ef986fb2e62b72aff0d"} Feb 28 13:21:35 
crc kubenswrapper[4897]: I0228 13:21:35.009049 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sv5dr" podStartSLOduration=2.505649779 podStartE2EDuration="1m11.009030356s" podCreationTimestamp="2026-02-28 13:20:24 +0000 UTC" firstStartedPulling="2026-02-28 13:20:26.268399759 +0000 UTC m=+240.510720416" lastFinishedPulling="2026-02-28 13:21:34.771780326 +0000 UTC m=+309.014100993" observedRunningTime="2026-02-28 13:21:35.00800781 +0000 UTC m=+309.250328467" watchObservedRunningTime="2026-02-28 13:21:35.009030356 +0000 UTC m=+309.251351013" Feb 28 13:21:37 crc kubenswrapper[4897]: I0228 13:21:37.111837 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-whbtd"] Feb 28 13:21:37 crc kubenswrapper[4897]: I0228 13:21:37.112386 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-whbtd" podUID="fa5ec60f-f348-43b8-8ef2-9caafd08cb0d" containerName="registry-server" containerID="cri-o://b0d8ce8d3ea5500459ce0969b08e6ce9296d781d7dc06b378b71b0a1d4bd3e45" gracePeriod=2 Feb 28 13:21:37 crc kubenswrapper[4897]: I0228 13:21:37.603946 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-whbtd" Feb 28 13:21:37 crc kubenswrapper[4897]: I0228 13:21:37.666226 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vmpb\" (UniqueName: \"kubernetes.io/projected/fa5ec60f-f348-43b8-8ef2-9caafd08cb0d-kube-api-access-9vmpb\") pod \"fa5ec60f-f348-43b8-8ef2-9caafd08cb0d\" (UID: \"fa5ec60f-f348-43b8-8ef2-9caafd08cb0d\") " Feb 28 13:21:37 crc kubenswrapper[4897]: I0228 13:21:37.666337 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa5ec60f-f348-43b8-8ef2-9caafd08cb0d-utilities\") pod \"fa5ec60f-f348-43b8-8ef2-9caafd08cb0d\" (UID: \"fa5ec60f-f348-43b8-8ef2-9caafd08cb0d\") " Feb 28 13:21:37 crc kubenswrapper[4897]: I0228 13:21:37.666397 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa5ec60f-f348-43b8-8ef2-9caafd08cb0d-catalog-content\") pod \"fa5ec60f-f348-43b8-8ef2-9caafd08cb0d\" (UID: \"fa5ec60f-f348-43b8-8ef2-9caafd08cb0d\") " Feb 28 13:21:37 crc kubenswrapper[4897]: I0228 13:21:37.667209 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa5ec60f-f348-43b8-8ef2-9caafd08cb0d-utilities" (OuterVolumeSpecName: "utilities") pod "fa5ec60f-f348-43b8-8ef2-9caafd08cb0d" (UID: "fa5ec60f-f348-43b8-8ef2-9caafd08cb0d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:21:37 crc kubenswrapper[4897]: I0228 13:21:37.678104 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa5ec60f-f348-43b8-8ef2-9caafd08cb0d-kube-api-access-9vmpb" (OuterVolumeSpecName: "kube-api-access-9vmpb") pod "fa5ec60f-f348-43b8-8ef2-9caafd08cb0d" (UID: "fa5ec60f-f348-43b8-8ef2-9caafd08cb0d"). InnerVolumeSpecName "kube-api-access-9vmpb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:21:37 crc kubenswrapper[4897]: I0228 13:21:37.730803 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa5ec60f-f348-43b8-8ef2-9caafd08cb0d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa5ec60f-f348-43b8-8ef2-9caafd08cb0d" (UID: "fa5ec60f-f348-43b8-8ef2-9caafd08cb0d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:21:37 crc kubenswrapper[4897]: I0228 13:21:37.766950 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vmpb\" (UniqueName: \"kubernetes.io/projected/fa5ec60f-f348-43b8-8ef2-9caafd08cb0d-kube-api-access-9vmpb\") on node \"crc\" DevicePath \"\"" Feb 28 13:21:37 crc kubenswrapper[4897]: I0228 13:21:37.766982 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa5ec60f-f348-43b8-8ef2-9caafd08cb0d-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 13:21:37 crc kubenswrapper[4897]: I0228 13:21:37.766994 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa5ec60f-f348-43b8-8ef2-9caafd08cb0d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 13:21:38 crc kubenswrapper[4897]: I0228 13:21:38.018241 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wj92z" event={"ID":"1acb2f9f-f650-4f19-965e-48ba5a1ddac2","Type":"ContainerStarted","Data":"8d230ce58b0212a0b961aae6a2ceaff71f13d1181b800e359d527c8a79fd1583"} Feb 28 13:21:38 crc kubenswrapper[4897]: I0228 13:21:38.020382 4897 generic.go:334] "Generic (PLEG): container finished" podID="fa5ec60f-f348-43b8-8ef2-9caafd08cb0d" containerID="b0d8ce8d3ea5500459ce0969b08e6ce9296d781d7dc06b378b71b0a1d4bd3e45" exitCode=0 Feb 28 13:21:38 crc kubenswrapper[4897]: I0228 13:21:38.020424 4897 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-whbtd" event={"ID":"fa5ec60f-f348-43b8-8ef2-9caafd08cb0d","Type":"ContainerDied","Data":"b0d8ce8d3ea5500459ce0969b08e6ce9296d781d7dc06b378b71b0a1d4bd3e45"} Feb 28 13:21:38 crc kubenswrapper[4897]: I0228 13:21:38.020453 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-whbtd" event={"ID":"fa5ec60f-f348-43b8-8ef2-9caafd08cb0d","Type":"ContainerDied","Data":"925fd28ec78fc1e178167cff0f0fe566e81a460fc690648155f45905fa143b7b"} Feb 28 13:21:38 crc kubenswrapper[4897]: I0228 13:21:38.020470 4897 scope.go:117] "RemoveContainer" containerID="b0d8ce8d3ea5500459ce0969b08e6ce9296d781d7dc06b378b71b0a1d4bd3e45" Feb 28 13:21:38 crc kubenswrapper[4897]: I0228 13:21:38.020515 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-whbtd" Feb 28 13:21:38 crc kubenswrapper[4897]: I0228 13:21:38.035633 4897 scope.go:117] "RemoveContainer" containerID="2ee52cddfadec3fc25b4374df01dec742c50310d01c8a68e2b2317a624b76f93" Feb 28 13:21:38 crc kubenswrapper[4897]: I0228 13:21:38.054264 4897 scope.go:117] "RemoveContainer" containerID="dc4cd0f9d5fe8614315e6d04a3abc018f5ab5d418088ee99b0895ab2e93b2f08" Feb 28 13:21:38 crc kubenswrapper[4897]: I0228 13:21:38.066873 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-whbtd"] Feb 28 13:21:38 crc kubenswrapper[4897]: I0228 13:21:38.072561 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-whbtd"] Feb 28 13:21:38 crc kubenswrapper[4897]: I0228 13:21:38.081952 4897 scope.go:117] "RemoveContainer" containerID="b0d8ce8d3ea5500459ce0969b08e6ce9296d781d7dc06b378b71b0a1d4bd3e45" Feb 28 13:21:38 crc kubenswrapper[4897]: E0228 13:21:38.082380 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b0d8ce8d3ea5500459ce0969b08e6ce9296d781d7dc06b378b71b0a1d4bd3e45\": container with ID starting with b0d8ce8d3ea5500459ce0969b08e6ce9296d781d7dc06b378b71b0a1d4bd3e45 not found: ID does not exist" containerID="b0d8ce8d3ea5500459ce0969b08e6ce9296d781d7dc06b378b71b0a1d4bd3e45" Feb 28 13:21:38 crc kubenswrapper[4897]: I0228 13:21:38.082501 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0d8ce8d3ea5500459ce0969b08e6ce9296d781d7dc06b378b71b0a1d4bd3e45"} err="failed to get container status \"b0d8ce8d3ea5500459ce0969b08e6ce9296d781d7dc06b378b71b0a1d4bd3e45\": rpc error: code = NotFound desc = could not find container \"b0d8ce8d3ea5500459ce0969b08e6ce9296d781d7dc06b378b71b0a1d4bd3e45\": container with ID starting with b0d8ce8d3ea5500459ce0969b08e6ce9296d781d7dc06b378b71b0a1d4bd3e45 not found: ID does not exist" Feb 28 13:21:38 crc kubenswrapper[4897]: I0228 13:21:38.082587 4897 scope.go:117] "RemoveContainer" containerID="2ee52cddfadec3fc25b4374df01dec742c50310d01c8a68e2b2317a624b76f93" Feb 28 13:21:38 crc kubenswrapper[4897]: E0228 13:21:38.082970 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ee52cddfadec3fc25b4374df01dec742c50310d01c8a68e2b2317a624b76f93\": container with ID starting with 2ee52cddfadec3fc25b4374df01dec742c50310d01c8a68e2b2317a624b76f93 not found: ID does not exist" containerID="2ee52cddfadec3fc25b4374df01dec742c50310d01c8a68e2b2317a624b76f93" Feb 28 13:21:38 crc kubenswrapper[4897]: I0228 13:21:38.083050 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ee52cddfadec3fc25b4374df01dec742c50310d01c8a68e2b2317a624b76f93"} err="failed to get container status \"2ee52cddfadec3fc25b4374df01dec742c50310d01c8a68e2b2317a624b76f93\": rpc error: code = NotFound desc = could not find container \"2ee52cddfadec3fc25b4374df01dec742c50310d01c8a68e2b2317a624b76f93\": container with ID 
starting with 2ee52cddfadec3fc25b4374df01dec742c50310d01c8a68e2b2317a624b76f93 not found: ID does not exist" Feb 28 13:21:38 crc kubenswrapper[4897]: I0228 13:21:38.083118 4897 scope.go:117] "RemoveContainer" containerID="dc4cd0f9d5fe8614315e6d04a3abc018f5ab5d418088ee99b0895ab2e93b2f08" Feb 28 13:21:38 crc kubenswrapper[4897]: E0228 13:21:38.083600 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc4cd0f9d5fe8614315e6d04a3abc018f5ab5d418088ee99b0895ab2e93b2f08\": container with ID starting with dc4cd0f9d5fe8614315e6d04a3abc018f5ab5d418088ee99b0895ab2e93b2f08 not found: ID does not exist" containerID="dc4cd0f9d5fe8614315e6d04a3abc018f5ab5d418088ee99b0895ab2e93b2f08" Feb 28 13:21:38 crc kubenswrapper[4897]: I0228 13:21:38.083688 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc4cd0f9d5fe8614315e6d04a3abc018f5ab5d418088ee99b0895ab2e93b2f08"} err="failed to get container status \"dc4cd0f9d5fe8614315e6d04a3abc018f5ab5d418088ee99b0895ab2e93b2f08\": rpc error: code = NotFound desc = could not find container \"dc4cd0f9d5fe8614315e6d04a3abc018f5ab5d418088ee99b0895ab2e93b2f08\": container with ID starting with dc4cd0f9d5fe8614315e6d04a3abc018f5ab5d418088ee99b0895ab2e93b2f08 not found: ID does not exist" Feb 28 13:21:38 crc kubenswrapper[4897]: I0228 13:21:38.462770 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa5ec60f-f348-43b8-8ef2-9caafd08cb0d" path="/var/lib/kubelet/pods/fa5ec60f-f348-43b8-8ef2-9caafd08cb0d/volumes" Feb 28 13:21:38 crc kubenswrapper[4897]: E0228 13:21:38.639508 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1acb2f9f_f650_4f19_965e_48ba5a1ddac2.slice/crio-8d230ce58b0212a0b961aae6a2ceaff71f13d1181b800e359d527c8a79fd1583.scope\": RecentStats: unable to find data in 
memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1acb2f9f_f650_4f19_965e_48ba5a1ddac2.slice/crio-conmon-8d230ce58b0212a0b961aae6a2ceaff71f13d1181b800e359d527c8a79fd1583.scope\": RecentStats: unable to find data in memory cache]" Feb 28 13:21:39 crc kubenswrapper[4897]: I0228 13:21:39.028398 4897 generic.go:334] "Generic (PLEG): container finished" podID="1acb2f9f-f650-4f19-965e-48ba5a1ddac2" containerID="8d230ce58b0212a0b961aae6a2ceaff71f13d1181b800e359d527c8a79fd1583" exitCode=0 Feb 28 13:21:39 crc kubenswrapper[4897]: I0228 13:21:39.028497 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wj92z" event={"ID":"1acb2f9f-f650-4f19-965e-48ba5a1ddac2","Type":"ContainerDied","Data":"8d230ce58b0212a0b961aae6a2ceaff71f13d1181b800e359d527c8a79fd1583"} Feb 28 13:21:40 crc kubenswrapper[4897]: I0228 13:21:40.034944 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wj92z" event={"ID":"1acb2f9f-f650-4f19-965e-48ba5a1ddac2","Type":"ContainerStarted","Data":"e14dcce630b3a9a08a22b931547ea990096f8d21cb8429298e0a5c1ea1f89c1a"} Feb 28 13:21:40 crc kubenswrapper[4897]: I0228 13:21:40.059121 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wj92z" podStartSLOduration=1.9919830269999999 podStartE2EDuration="1m13.059102137s" podCreationTimestamp="2026-02-28 13:20:27 +0000 UTC" firstStartedPulling="2026-02-28 13:20:28.34749437 +0000 UTC m=+242.589815027" lastFinishedPulling="2026-02-28 13:21:39.41461348 +0000 UTC m=+313.656934137" observedRunningTime="2026-02-28 13:21:40.055573158 +0000 UTC m=+314.297893835" watchObservedRunningTime="2026-02-28 13:21:40.059102137 +0000 UTC m=+314.301422804" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.346936 4897 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 28 13:21:43 crc 
kubenswrapper[4897]: E0228 13:21:43.347778 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0865f08-bed5-4fbb-ab37-582862fb0616" containerName="extract-utilities" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.347810 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0865f08-bed5-4fbb-ab37-582862fb0616" containerName="extract-utilities" Feb 28 13:21:43 crc kubenswrapper[4897]: E0228 13:21:43.347841 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="488e35b2-95c6-4499-be2f-5a2d15cdf5d4" containerName="extract-utilities" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.347857 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="488e35b2-95c6-4499-be2f-5a2d15cdf5d4" containerName="extract-utilities" Feb 28 13:21:43 crc kubenswrapper[4897]: E0228 13:21:43.347887 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa5ec60f-f348-43b8-8ef2-9caafd08cb0d" containerName="extract-content" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.347902 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa5ec60f-f348-43b8-8ef2-9caafd08cb0d" containerName="extract-content" Feb 28 13:21:43 crc kubenswrapper[4897]: E0228 13:21:43.347918 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="488e35b2-95c6-4499-be2f-5a2d15cdf5d4" containerName="extract-content" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.347967 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="488e35b2-95c6-4499-be2f-5a2d15cdf5d4" containerName="extract-content" Feb 28 13:21:43 crc kubenswrapper[4897]: E0228 13:21:43.348000 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0865f08-bed5-4fbb-ab37-582862fb0616" containerName="registry-server" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.348017 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0865f08-bed5-4fbb-ab37-582862fb0616" containerName="registry-server" Feb 28 13:21:43 crc 
kubenswrapper[4897]: E0228 13:21:43.348041 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa5ec60f-f348-43b8-8ef2-9caafd08cb0d" containerName="extract-utilities" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.348056 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa5ec60f-f348-43b8-8ef2-9caafd08cb0d" containerName="extract-utilities" Feb 28 13:21:43 crc kubenswrapper[4897]: E0228 13:21:43.348078 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa5ec60f-f348-43b8-8ef2-9caafd08cb0d" containerName="registry-server" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.348093 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa5ec60f-f348-43b8-8ef2-9caafd08cb0d" containerName="registry-server" Feb 28 13:21:43 crc kubenswrapper[4897]: E0228 13:21:43.348117 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="488e35b2-95c6-4499-be2f-5a2d15cdf5d4" containerName="registry-server" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.348132 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="488e35b2-95c6-4499-be2f-5a2d15cdf5d4" containerName="registry-server" Feb 28 13:21:43 crc kubenswrapper[4897]: E0228 13:21:43.348161 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0865f08-bed5-4fbb-ab37-582862fb0616" containerName="extract-content" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.348177 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0865f08-bed5-4fbb-ab37-582862fb0616" containerName="extract-content" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.348559 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="488e35b2-95c6-4499-be2f-5a2d15cdf5d4" containerName="registry-server" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.348601 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0865f08-bed5-4fbb-ab37-582862fb0616" containerName="registry-server" Feb 28 13:21:43 crc 
kubenswrapper[4897]: I0228 13:21:43.348625 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa5ec60f-f348-43b8-8ef2-9caafd08cb0d" containerName="registry-server" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.349437 4897 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.349989 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b" gracePeriod=15 Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.350281 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.351771 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://fa16e396e95ee6006937e8299d6b0f6d518b29e4c2137a868f90b616e8ba6126" gracePeriod=15 Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.351911 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f" gracePeriod=15 Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.351994 4897 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.352061 4897 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488" gracePeriod=15 Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.352158 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925" gracePeriod=15 Feb 28 13:21:43 crc kubenswrapper[4897]: E0228 13:21:43.352216 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.352229 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 28 13:21:43 crc kubenswrapper[4897]: E0228 13:21:43.352238 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.352244 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 28 13:21:43 crc kubenswrapper[4897]: E0228 13:21:43.352253 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.352259 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 13:21:43 crc kubenswrapper[4897]: E0228 13:21:43.352266 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.352272 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 28 13:21:43 crc kubenswrapper[4897]: E0228 13:21:43.352278 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.352285 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 28 13:21:43 crc kubenswrapper[4897]: E0228 13:21:43.352291 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.352297 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 13:21:43 crc kubenswrapper[4897]: E0228 13:21:43.352342 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.352350 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 28 13:21:43 crc kubenswrapper[4897]: E0228 13:21:43.352360 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.352365 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 13:21:43 crc kubenswrapper[4897]: E0228 13:21:43.352372 4897 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.352378 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.352472 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.352482 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.352489 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.352499 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.352507 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.352516 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.352525 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 28 13:21:43 crc kubenswrapper[4897]: E0228 13:21:43.353327 4897 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.353337 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.353428 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.353437 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.358064 4897 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.400762 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.435211 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.435250 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 
13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.435301 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.435340 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.435368 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.435432 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.435458 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.435480 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.537214 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.537264 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.537363 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.537388 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.537415 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.537422 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.537460 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.537490 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.537497 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 13:21:43 crc 
kubenswrapper[4897]: I0228 13:21:43.537503 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.537533 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.537577 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.537642 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.537621 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 
13:21:43.537719 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.537744 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 13:21:43 crc kubenswrapper[4897]: I0228 13:21:43.700695 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 13:21:43 crc kubenswrapper[4897]: W0228 13:21:43.726486 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-f540bb342e083162c8904142f21d25a188f2885f421d32a638f9f90d286b6802 WatchSource:0}: Error finding container f540bb342e083162c8904142f21d25a188f2885f421d32a638f9f90d286b6802: Status 404 returned error can't find the container with id f540bb342e083162c8904142f21d25a188f2885f421d32a638f9f90d286b6802 Feb 28 13:21:43 crc kubenswrapper[4897]: E0228 13:21:43.730425 4897 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.164:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18986bbadf9783e5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:21:43.729628133 +0000 UTC m=+317.971948790,LastTimestamp:2026-02-28 13:21:43.729628133 +0000 UTC m=+317.971948790,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:21:44 crc kubenswrapper[4897]: I0228 13:21:44.063498 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 28 13:21:44 crc kubenswrapper[4897]: I0228 13:21:44.065502 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 28 13:21:44 crc kubenswrapper[4897]: I0228 13:21:44.066717 4897 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fa16e396e95ee6006937e8299d6b0f6d518b29e4c2137a868f90b616e8ba6126" exitCode=0 Feb 28 13:21:44 crc kubenswrapper[4897]: I0228 13:21:44.066738 4897 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f" exitCode=0 Feb 28 13:21:44 crc kubenswrapper[4897]: I0228 13:21:44.066746 4897 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488" 
exitCode=0 Feb 28 13:21:44 crc kubenswrapper[4897]: I0228 13:21:44.066754 4897 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925" exitCode=2 Feb 28 13:21:44 crc kubenswrapper[4897]: I0228 13:21:44.066939 4897 scope.go:117] "RemoveContainer" containerID="3e7c7d3f14f7bac9396328ca9897766b23a581d966e8b30c9d08fa1c7312a08a" Feb 28 13:21:44 crc kubenswrapper[4897]: I0228 13:21:44.068642 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"f540bb342e083162c8904142f21d25a188f2885f421d32a638f9f90d286b6802"} Feb 28 13:21:44 crc kubenswrapper[4897]: I0228 13:21:44.070942 4897 generic.go:334] "Generic (PLEG): container finished" podID="dce17ac4-0687-4628-beb9-963332095590" containerID="39a50ad07e02aab779a23a17dec69b7882fd93c0d2a181638fefc722681a2084" exitCode=0 Feb 28 13:21:44 crc kubenswrapper[4897]: I0228 13:21:44.070978 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"dce17ac4-0687-4628-beb9-963332095590","Type":"ContainerDied","Data":"39a50ad07e02aab779a23a17dec69b7882fd93c0d2a181638fefc722681a2084"} Feb 28 13:21:44 crc kubenswrapper[4897]: I0228 13:21:44.071991 4897 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:44 crc kubenswrapper[4897]: I0228 13:21:44.072462 4897 status_manager.go:851] "Failed to get status for pod" podUID="dce17ac4-0687-4628-beb9-963332095590" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:44 crc kubenswrapper[4897]: I0228 13:21:44.571641 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sv5dr" Feb 28 13:21:44 crc kubenswrapper[4897]: I0228 13:21:44.571727 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sv5dr" Feb 28 13:21:44 crc kubenswrapper[4897]: I0228 13:21:44.622567 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sv5dr" Feb 28 13:21:44 crc kubenswrapper[4897]: I0228 13:21:44.623454 4897 status_manager.go:851] "Failed to get status for pod" podUID="c8e82c23-54f4-43a4-904b-4f90348580ac" pod="openshift-marketplace/community-operators-sv5dr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-sv5dr\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:44 crc kubenswrapper[4897]: I0228 13:21:44.624347 4897 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:44 crc kubenswrapper[4897]: I0228 13:21:44.624891 4897 status_manager.go:851] "Failed to get status for pod" podUID="dce17ac4-0687-4628-beb9-963332095590" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:45 crc kubenswrapper[4897]: I0228 13:21:45.079148 4897 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"8669e956489905e0865a3b7d2d4369dcd62f02c3a617f49df1c47934fe1c803c"} Feb 28 13:21:45 crc kubenswrapper[4897]: I0228 13:21:45.080153 4897 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:45 crc kubenswrapper[4897]: I0228 13:21:45.080647 4897 status_manager.go:851] "Failed to get status for pod" podUID="dce17ac4-0687-4628-beb9-963332095590" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:45 crc kubenswrapper[4897]: I0228 13:21:45.082021 4897 status_manager.go:851] "Failed to get status for pod" podUID="c8e82c23-54f4-43a4-904b-4f90348580ac" pod="openshift-marketplace/community-operators-sv5dr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-sv5dr\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:45 crc kubenswrapper[4897]: I0228 13:21:45.084416 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 28 13:21:45 crc kubenswrapper[4897]: I0228 13:21:45.177238 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sv5dr" Feb 28 13:21:45 crc kubenswrapper[4897]: I0228 13:21:45.178168 4897 status_manager.go:851] "Failed to get 
status for pod" podUID="dce17ac4-0687-4628-beb9-963332095590" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:45 crc kubenswrapper[4897]: I0228 13:21:45.178705 4897 status_manager.go:851] "Failed to get status for pod" podUID="c8e82c23-54f4-43a4-904b-4f90348580ac" pod="openshift-marketplace/community-operators-sv5dr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-sv5dr\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:45 crc kubenswrapper[4897]: I0228 13:21:45.179045 4897 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:45 crc kubenswrapper[4897]: I0228 13:21:45.562144 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 28 13:21:45 crc kubenswrapper[4897]: I0228 13:21:45.562859 4897 status_manager.go:851] "Failed to get status for pod" podUID="dce17ac4-0687-4628-beb9-963332095590" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:45 crc kubenswrapper[4897]: I0228 13:21:45.563104 4897 status_manager.go:851] "Failed to get status for pod" podUID="c8e82c23-54f4-43a4-904b-4f90348580ac" pod="openshift-marketplace/community-operators-sv5dr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-sv5dr\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:45 crc kubenswrapper[4897]: I0228 13:21:45.563420 4897 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:45 crc kubenswrapper[4897]: I0228 13:21:45.674457 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dce17ac4-0687-4628-beb9-963332095590-kubelet-dir\") pod \"dce17ac4-0687-4628-beb9-963332095590\" (UID: \"dce17ac4-0687-4628-beb9-963332095590\") " Feb 28 13:21:45 crc kubenswrapper[4897]: I0228 13:21:45.675344 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dce17ac4-0687-4628-beb9-963332095590-var-lock\") pod \"dce17ac4-0687-4628-beb9-963332095590\" (UID: \"dce17ac4-0687-4628-beb9-963332095590\") " Feb 28 13:21:45 crc 
kubenswrapper[4897]: I0228 13:21:45.674601 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dce17ac4-0687-4628-beb9-963332095590-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "dce17ac4-0687-4628-beb9-963332095590" (UID: "dce17ac4-0687-4628-beb9-963332095590"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 13:21:45 crc kubenswrapper[4897]: I0228 13:21:45.675455 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dce17ac4-0687-4628-beb9-963332095590-var-lock" (OuterVolumeSpecName: "var-lock") pod "dce17ac4-0687-4628-beb9-963332095590" (UID: "dce17ac4-0687-4628-beb9-963332095590"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 13:21:45 crc kubenswrapper[4897]: I0228 13:21:45.675476 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dce17ac4-0687-4628-beb9-963332095590-kube-api-access\") pod \"dce17ac4-0687-4628-beb9-963332095590\" (UID: \"dce17ac4-0687-4628-beb9-963332095590\") " Feb 28 13:21:45 crc kubenswrapper[4897]: I0228 13:21:45.675848 4897 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dce17ac4-0687-4628-beb9-963332095590-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 28 13:21:45 crc kubenswrapper[4897]: I0228 13:21:45.675914 4897 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dce17ac4-0687-4628-beb9-963332095590-var-lock\") on node \"crc\" DevicePath \"\"" Feb 28 13:21:45 crc kubenswrapper[4897]: I0228 13:21:45.681156 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dce17ac4-0687-4628-beb9-963332095590-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod 
"dce17ac4-0687-4628-beb9-963332095590" (UID: "dce17ac4-0687-4628-beb9-963332095590"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:21:45 crc kubenswrapper[4897]: I0228 13:21:45.715351 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 28 13:21:45 crc kubenswrapper[4897]: I0228 13:21:45.716094 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 13:21:45 crc kubenswrapper[4897]: I0228 13:21:45.716848 4897 status_manager.go:851] "Failed to get status for pod" podUID="c8e82c23-54f4-43a4-904b-4f90348580ac" pod="openshift-marketplace/community-operators-sv5dr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-sv5dr\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:45 crc kubenswrapper[4897]: I0228 13:21:45.718679 4897 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:45 crc kubenswrapper[4897]: I0228 13:21:45.719109 4897 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:45 crc kubenswrapper[4897]: I0228 13:21:45.719566 4897 status_manager.go:851] "Failed to get status for pod" podUID="dce17ac4-0687-4628-beb9-963332095590" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:45 crc kubenswrapper[4897]: I0228 13:21:45.776563 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 28 13:21:45 crc kubenswrapper[4897]: I0228 13:21:45.776861 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 28 13:21:45 crc kubenswrapper[4897]: I0228 13:21:45.777005 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 28 13:21:45 crc kubenswrapper[4897]: I0228 13:21:45.776734 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 13:21:45 crc kubenswrapper[4897]: I0228 13:21:45.776924 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 13:21:45 crc kubenswrapper[4897]: I0228 13:21:45.777037 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 13:21:45 crc kubenswrapper[4897]: I0228 13:21:45.777530 4897 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 28 13:21:45 crc kubenswrapper[4897]: I0228 13:21:45.777618 4897 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 28 13:21:45 crc kubenswrapper[4897]: I0228 13:21:45.777694 4897 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 28 13:21:45 crc kubenswrapper[4897]: I0228 13:21:45.777770 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dce17ac4-0687-4628-beb9-963332095590-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 28 13:21:46 crc kubenswrapper[4897]: I0228 13:21:46.094435 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 28 13:21:46 crc kubenswrapper[4897]: I0228 13:21:46.094418 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"dce17ac4-0687-4628-beb9-963332095590","Type":"ContainerDied","Data":"2c0f407a5bb10fac140d0a45cfa257b01172d63d71054e2eb629c4ef2f5e8462"} Feb 28 13:21:46 crc kubenswrapper[4897]: I0228 13:21:46.094595 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c0f407a5bb10fac140d0a45cfa257b01172d63d71054e2eb629c4ef2f5e8462" Feb 28 13:21:46 crc kubenswrapper[4897]: I0228 13:21:46.098920 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 28 13:21:46 crc kubenswrapper[4897]: I0228 13:21:46.100243 4897 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b" exitCode=0 Feb 28 13:21:46 crc kubenswrapper[4897]: I0228 13:21:46.100361 4897 scope.go:117] "RemoveContainer" containerID="fa16e396e95ee6006937e8299d6b0f6d518b29e4c2137a868f90b616e8ba6126" Feb 28 13:21:46 crc kubenswrapper[4897]: I0228 13:21:46.100855 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 13:21:46 crc kubenswrapper[4897]: I0228 13:21:46.124425 4897 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:46 crc kubenswrapper[4897]: I0228 13:21:46.124786 4897 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:46 crc kubenswrapper[4897]: I0228 13:21:46.125112 4897 status_manager.go:851] "Failed to get status for pod" podUID="dce17ac4-0687-4628-beb9-963332095590" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:46 crc kubenswrapper[4897]: I0228 13:21:46.125427 4897 status_manager.go:851] "Failed to get status for pod" podUID="c8e82c23-54f4-43a4-904b-4f90348580ac" pod="openshift-marketplace/community-operators-sv5dr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-sv5dr\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:46 crc kubenswrapper[4897]: I0228 13:21:46.131701 4897 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:46 crc kubenswrapper[4897]: I0228 13:21:46.131954 4897 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:46 crc kubenswrapper[4897]: I0228 13:21:46.132185 4897 status_manager.go:851] "Failed to get status for pod" podUID="dce17ac4-0687-4628-beb9-963332095590" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:46 crc kubenswrapper[4897]: I0228 13:21:46.132530 4897 scope.go:117] "RemoveContainer" containerID="ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f" Feb 28 13:21:46 crc kubenswrapper[4897]: I0228 13:21:46.132544 4897 status_manager.go:851] "Failed to get status for pod" podUID="c8e82c23-54f4-43a4-904b-4f90348580ac" pod="openshift-marketplace/community-operators-sv5dr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-sv5dr\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:46 crc kubenswrapper[4897]: I0228 13:21:46.156605 4897 scope.go:117] "RemoveContainer" containerID="8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488" Feb 28 13:21:46 crc kubenswrapper[4897]: I0228 13:21:46.185587 4897 scope.go:117] "RemoveContainer" containerID="7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925" Feb 28 13:21:46 crc kubenswrapper[4897]: I0228 13:21:46.212183 4897 scope.go:117] "RemoveContainer" 
containerID="e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b" Feb 28 13:21:46 crc kubenswrapper[4897]: I0228 13:21:46.242882 4897 scope.go:117] "RemoveContainer" containerID="8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc" Feb 28 13:21:46 crc kubenswrapper[4897]: I0228 13:21:46.277493 4897 scope.go:117] "RemoveContainer" containerID="fa16e396e95ee6006937e8299d6b0f6d518b29e4c2137a868f90b616e8ba6126" Feb 28 13:21:46 crc kubenswrapper[4897]: E0228 13:21:46.278302 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa16e396e95ee6006937e8299d6b0f6d518b29e4c2137a868f90b616e8ba6126\": container with ID starting with fa16e396e95ee6006937e8299d6b0f6d518b29e4c2137a868f90b616e8ba6126 not found: ID does not exist" containerID="fa16e396e95ee6006937e8299d6b0f6d518b29e4c2137a868f90b616e8ba6126" Feb 28 13:21:46 crc kubenswrapper[4897]: I0228 13:21:46.278378 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa16e396e95ee6006937e8299d6b0f6d518b29e4c2137a868f90b616e8ba6126"} err="failed to get container status \"fa16e396e95ee6006937e8299d6b0f6d518b29e4c2137a868f90b616e8ba6126\": rpc error: code = NotFound desc = could not find container \"fa16e396e95ee6006937e8299d6b0f6d518b29e4c2137a868f90b616e8ba6126\": container with ID starting with fa16e396e95ee6006937e8299d6b0f6d518b29e4c2137a868f90b616e8ba6126 not found: ID does not exist" Feb 28 13:21:46 crc kubenswrapper[4897]: I0228 13:21:46.278431 4897 scope.go:117] "RemoveContainer" containerID="ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f" Feb 28 13:21:46 crc kubenswrapper[4897]: E0228 13:21:46.278956 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f\": container with ID starting with 
ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f not found: ID does not exist" containerID="ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f" Feb 28 13:21:46 crc kubenswrapper[4897]: I0228 13:21:46.279071 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f"} err="failed to get container status \"ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f\": rpc error: code = NotFound desc = could not find container \"ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f\": container with ID starting with ef24d15ea5697d7a2dc699aecd697383636ce878089ad10cecf947aab02e784f not found: ID does not exist" Feb 28 13:21:46 crc kubenswrapper[4897]: I0228 13:21:46.279130 4897 scope.go:117] "RemoveContainer" containerID="8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488" Feb 28 13:21:46 crc kubenswrapper[4897]: E0228 13:21:46.280841 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488\": container with ID starting with 8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488 not found: ID does not exist" containerID="8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488" Feb 28 13:21:46 crc kubenswrapper[4897]: I0228 13:21:46.280893 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488"} err="failed to get container status \"8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488\": rpc error: code = NotFound desc = could not find container \"8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488\": container with ID starting with 8c6edf369a8dfea03ade4c93aff237ceed5292c2eb01fd5c8a0c861f21eaf488 not found: ID does not 
exist" Feb 28 13:21:46 crc kubenswrapper[4897]: I0228 13:21:46.280932 4897 scope.go:117] "RemoveContainer" containerID="7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925" Feb 28 13:21:46 crc kubenswrapper[4897]: E0228 13:21:46.281586 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925\": container with ID starting with 7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925 not found: ID does not exist" containerID="7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925" Feb 28 13:21:46 crc kubenswrapper[4897]: I0228 13:21:46.281638 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925"} err="failed to get container status \"7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925\": rpc error: code = NotFound desc = could not find container \"7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925\": container with ID starting with 7b417aa6e956f6bceb9f7b412db97270a2664121a72c5c9f9aed72439f576925 not found: ID does not exist" Feb 28 13:21:46 crc kubenswrapper[4897]: I0228 13:21:46.281679 4897 scope.go:117] "RemoveContainer" containerID="e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b" Feb 28 13:21:46 crc kubenswrapper[4897]: E0228 13:21:46.282534 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b\": container with ID starting with e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b not found: ID does not exist" containerID="e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b" Feb 28 13:21:46 crc kubenswrapper[4897]: I0228 13:21:46.282609 4897 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b"} err="failed to get container status \"e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b\": rpc error: code = NotFound desc = could not find container \"e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b\": container with ID starting with e5b85097ed84c259f845ec3419d2ec28c214cd4b73ce908c25184526ce1dad8b not found: ID does not exist" Feb 28 13:21:46 crc kubenswrapper[4897]: I0228 13:21:46.282637 4897 scope.go:117] "RemoveContainer" containerID="8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc" Feb 28 13:21:46 crc kubenswrapper[4897]: E0228 13:21:46.283121 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\": container with ID starting with 8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc not found: ID does not exist" containerID="8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc" Feb 28 13:21:46 crc kubenswrapper[4897]: I0228 13:21:46.283175 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc"} err="failed to get container status \"8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\": rpc error: code = NotFound desc = could not find container \"8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc\": container with ID starting with 8e0788661419a0771061174ff4aa835800f89f7bc3d24141dad4ceeaca0544cc not found: ID does not exist" Feb 28 13:21:46 crc kubenswrapper[4897]: I0228 13:21:46.459604 4897 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:46 crc kubenswrapper[4897]: I0228 13:21:46.459935 4897 status_manager.go:851] "Failed to get status for pod" podUID="dce17ac4-0687-4628-beb9-963332095590" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:46 crc kubenswrapper[4897]: I0228 13:21:46.460362 4897 status_manager.go:851] "Failed to get status for pod" podUID="c8e82c23-54f4-43a4-904b-4f90348580ac" pod="openshift-marketplace/community-operators-sv5dr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-sv5dr\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:46 crc kubenswrapper[4897]: I0228 13:21:46.460595 4897 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:46 crc kubenswrapper[4897]: I0228 13:21:46.470268 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 28 13:21:47 crc kubenswrapper[4897]: E0228 13:21:47.038362 4897 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.164:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18986bbadf9783e5 openshift-kube-apiserver 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:21:43.729628133 +0000 UTC m=+317.971948790,LastTimestamp:2026-02-28 13:21:43.729628133 +0000 UTC m=+317.971948790,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:21:47 crc kubenswrapper[4897]: E0228 13:21:47.497120 4897 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.164:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" volumeName="registry-storage" Feb 28 13:21:47 crc kubenswrapper[4897]: I0228 13:21:47.516125 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wj92z" Feb 28 13:21:47 crc kubenswrapper[4897]: I0228 13:21:47.516405 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wj92z" Feb 28 13:21:47 crc kubenswrapper[4897]: I0228 13:21:47.587693 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wj92z" Feb 28 13:21:47 crc kubenswrapper[4897]: I0228 13:21:47.588506 4897 
status_manager.go:851] "Failed to get status for pod" podUID="1acb2f9f-f650-4f19-965e-48ba5a1ddac2" pod="openshift-marketplace/redhat-operators-wj92z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wj92z\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:47 crc kubenswrapper[4897]: I0228 13:21:47.589657 4897 status_manager.go:851] "Failed to get status for pod" podUID="c8e82c23-54f4-43a4-904b-4f90348580ac" pod="openshift-marketplace/community-operators-sv5dr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-sv5dr\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:47 crc kubenswrapper[4897]: I0228 13:21:47.590549 4897 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:47 crc kubenswrapper[4897]: I0228 13:21:47.591044 4897 status_manager.go:851] "Failed to get status for pod" podUID="dce17ac4-0687-4628-beb9-963332095590" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:48 crc kubenswrapper[4897]: I0228 13:21:48.186896 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wj92z" Feb 28 13:21:48 crc kubenswrapper[4897]: I0228 13:21:48.187748 4897 status_manager.go:851] "Failed to get status for pod" podUID="dce17ac4-0687-4628-beb9-963332095590" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:48 crc kubenswrapper[4897]: I0228 13:21:48.188500 4897 status_manager.go:851] "Failed to get status for pod" podUID="1acb2f9f-f650-4f19-965e-48ba5a1ddac2" pod="openshift-marketplace/redhat-operators-wj92z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wj92z\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:48 crc kubenswrapper[4897]: I0228 13:21:48.188959 4897 status_manager.go:851] "Failed to get status for pod" podUID="c8e82c23-54f4-43a4-904b-4f90348580ac" pod="openshift-marketplace/community-operators-sv5dr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-sv5dr\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:48 crc kubenswrapper[4897]: I0228 13:21:48.189436 4897 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:48 crc kubenswrapper[4897]: I0228 13:21:48.605885 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" podUID="49d0a669-bb05-4da5-9e58-789b58c0797b" containerName="oauth-openshift" containerID="cri-o://0a78af3864d8877d6e4e7e0a358121e65af3a2fc85213a60474b8d6793b5ce2d" gracePeriod=15 Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.060936 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.062635 4897 status_manager.go:851] "Failed to get status for pod" podUID="49d0a669-bb05-4da5-9e58-789b58c0797b" pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-84hkx\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.063052 4897 status_manager.go:851] "Failed to get status for pod" podUID="dce17ac4-0687-4628-beb9-963332095590" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.063423 4897 status_manager.go:851] "Failed to get status for pod" podUID="1acb2f9f-f650-4f19-965e-48ba5a1ddac2" pod="openshift-marketplace/redhat-operators-wj92z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wj92z\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.063868 4897 status_manager.go:851] "Failed to get status for pod" podUID="c8e82c23-54f4-43a4-904b-4f90348580ac" pod="openshift-marketplace/community-operators-sv5dr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-sv5dr\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.064434 4897 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.122119 4897 generic.go:334] "Generic (PLEG): container finished" podID="49d0a669-bb05-4da5-9e58-789b58c0797b" containerID="0a78af3864d8877d6e4e7e0a358121e65af3a2fc85213a60474b8d6793b5ce2d" exitCode=0 Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.122271 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" event={"ID":"49d0a669-bb05-4da5-9e58-789b58c0797b","Type":"ContainerDied","Data":"0a78af3864d8877d6e4e7e0a358121e65af3a2fc85213a60474b8d6793b5ce2d"} Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.122362 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.122611 4897 scope.go:117] "RemoveContainer" containerID="0a78af3864d8877d6e4e7e0a358121e65af3a2fc85213a60474b8d6793b5ce2d" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.122567 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" event={"ID":"49d0a669-bb05-4da5-9e58-789b58c0797b","Type":"ContainerDied","Data":"f54f64c69daf643cb2521a278e762306e8c13eef856758a351d3451b654af0a8"} Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.123366 4897 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.123820 4897 status_manager.go:851] "Failed 
to get status for pod" podUID="49d0a669-bb05-4da5-9e58-789b58c0797b" pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-84hkx\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.124152 4897 status_manager.go:851] "Failed to get status for pod" podUID="dce17ac4-0687-4628-beb9-963332095590" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.124529 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmqnk\" (UniqueName: \"kubernetes.io/projected/49d0a669-bb05-4da5-9e58-789b58c0797b-kube-api-access-gmqnk\") pod \"49d0a669-bb05-4da5-9e58-789b58c0797b\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.124634 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-system-session\") pod \"49d0a669-bb05-4da5-9e58-789b58c0797b\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.124709 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49d0a669-bb05-4da5-9e58-789b58c0797b-audit-dir\") pod \"49d0a669-bb05-4da5-9e58-789b58c0797b\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.124800 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-user-template-provider-selection\") pod \"49d0a669-bb05-4da5-9e58-789b58c0797b\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.124895 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-system-service-ca\") pod \"49d0a669-bb05-4da5-9e58-789b58c0797b\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.124995 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49d0a669-bb05-4da5-9e58-789b58c0797b-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "49d0a669-bb05-4da5-9e58-789b58c0797b" (UID: "49d0a669-bb05-4da5-9e58-789b58c0797b"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.125121 4897 status_manager.go:851] "Failed to get status for pod" podUID="1acb2f9f-f650-4f19-965e-48ba5a1ddac2" pod="openshift-marketplace/redhat-operators-wj92z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wj92z\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.125388 4897 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49d0a669-bb05-4da5-9e58-789b58c0797b-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.125548 4897 status_manager.go:851] "Failed to get status for pod" podUID="c8e82c23-54f4-43a4-904b-4f90348580ac" pod="openshift-marketplace/community-operators-sv5dr" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-sv5dr\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.126267 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49d0a669-bb05-4da5-9e58-789b58c0797b" (UID: "49d0a669-bb05-4da5-9e58-789b58c0797b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.132740 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49d0a669-bb05-4da5-9e58-789b58c0797b" (UID: "49d0a669-bb05-4da5-9e58-789b58c0797b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.133121 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49d0a669-bb05-4da5-9e58-789b58c0797b-kube-api-access-gmqnk" (OuterVolumeSpecName: "kube-api-access-gmqnk") pod "49d0a669-bb05-4da5-9e58-789b58c0797b" (UID: "49d0a669-bb05-4da5-9e58-789b58c0797b"). InnerVolumeSpecName "kube-api-access-gmqnk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.133868 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49d0a669-bb05-4da5-9e58-789b58c0797b" (UID: "49d0a669-bb05-4da5-9e58-789b58c0797b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.178973 4897 scope.go:117] "RemoveContainer" containerID="0a78af3864d8877d6e4e7e0a358121e65af3a2fc85213a60474b8d6793b5ce2d" Feb 28 13:21:49 crc kubenswrapper[4897]: E0228 13:21:49.179648 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a78af3864d8877d6e4e7e0a358121e65af3a2fc85213a60474b8d6793b5ce2d\": container with ID starting with 0a78af3864d8877d6e4e7e0a358121e65af3a2fc85213a60474b8d6793b5ce2d not found: ID does not exist" containerID="0a78af3864d8877d6e4e7e0a358121e65af3a2fc85213a60474b8d6793b5ce2d" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.179687 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a78af3864d8877d6e4e7e0a358121e65af3a2fc85213a60474b8d6793b5ce2d"} err="failed to get container status \"0a78af3864d8877d6e4e7e0a358121e65af3a2fc85213a60474b8d6793b5ce2d\": rpc error: code = NotFound desc = could not find container \"0a78af3864d8877d6e4e7e0a358121e65af3a2fc85213a60474b8d6793b5ce2d\": container with ID starting with 0a78af3864d8877d6e4e7e0a358121e65af3a2fc85213a60474b8d6793b5ce2d not found: ID does not exist" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.226048 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-user-template-error\") pod \"49d0a669-bb05-4da5-9e58-789b58c0797b\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.226091 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-system-cliconfig\") pod \"49d0a669-bb05-4da5-9e58-789b58c0797b\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.226128 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-user-template-login\") pod \"49d0a669-bb05-4da5-9e58-789b58c0797b\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.226154 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-system-router-certs\") pod \"49d0a669-bb05-4da5-9e58-789b58c0797b\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.226180 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49d0a669-bb05-4da5-9e58-789b58c0797b-audit-policies\") pod \"49d0a669-bb05-4da5-9e58-789b58c0797b\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.226207 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-system-ocp-branding-template\") pod \"49d0a669-bb05-4da5-9e58-789b58c0797b\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.226228 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-system-trusted-ca-bundle\") pod \"49d0a669-bb05-4da5-9e58-789b58c0797b\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.226254 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-user-idp-0-file-data\") pod \"49d0a669-bb05-4da5-9e58-789b58c0797b\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.226285 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-system-serving-cert\") pod \"49d0a669-bb05-4da5-9e58-789b58c0797b\" (UID: \"49d0a669-bb05-4da5-9e58-789b58c0797b\") " Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.226507 4897 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.226524 4897 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 28 13:21:49 
crc kubenswrapper[4897]: I0228 13:21:49.226539 4897 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.226552 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmqnk\" (UniqueName: \"kubernetes.io/projected/49d0a669-bb05-4da5-9e58-789b58c0797b-kube-api-access-gmqnk\") on node \"crc\" DevicePath \"\"" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.227111 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49d0a669-bb05-4da5-9e58-789b58c0797b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49d0a669-bb05-4da5-9e58-789b58c0797b" (UID: "49d0a669-bb05-4da5-9e58-789b58c0797b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.227625 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49d0a669-bb05-4da5-9e58-789b58c0797b" (UID: "49d0a669-bb05-4da5-9e58-789b58c0797b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.227723 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49d0a669-bb05-4da5-9e58-789b58c0797b" (UID: "49d0a669-bb05-4da5-9e58-789b58c0797b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.230342 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49d0a669-bb05-4da5-9e58-789b58c0797b" (UID: "49d0a669-bb05-4da5-9e58-789b58c0797b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.230828 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49d0a669-bb05-4da5-9e58-789b58c0797b" (UID: "49d0a669-bb05-4da5-9e58-789b58c0797b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.230959 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49d0a669-bb05-4da5-9e58-789b58c0797b" (UID: "49d0a669-bb05-4da5-9e58-789b58c0797b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.231146 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49d0a669-bb05-4da5-9e58-789b58c0797b" (UID: "49d0a669-bb05-4da5-9e58-789b58c0797b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.231811 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49d0a669-bb05-4da5-9e58-789b58c0797b" (UID: "49d0a669-bb05-4da5-9e58-789b58c0797b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.232573 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49d0a669-bb05-4da5-9e58-789b58c0797b" (UID: "49d0a669-bb05-4da5-9e58-789b58c0797b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.327287 4897 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.327331 4897 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.327341 4897 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.327351 4897 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.327360 4897 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49d0a669-bb05-4da5-9e58-789b58c0797b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.327369 4897 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.327381 4897 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.327390 4897 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.327400 4897 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49d0a669-bb05-4da5-9e58-789b58c0797b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.436970 4897 status_manager.go:851] "Failed to get status for pod" podUID="1acb2f9f-f650-4f19-965e-48ba5a1ddac2" pod="openshift-marketplace/redhat-operators-wj92z" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wj92z\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.437443 4897 status_manager.go:851] "Failed to get status for pod" podUID="c8e82c23-54f4-43a4-904b-4f90348580ac" pod="openshift-marketplace/community-operators-sv5dr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-sv5dr\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.437771 4897 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.438041 4897 status_manager.go:851] "Failed to get status for pod" podUID="49d0a669-bb05-4da5-9e58-789b58c0797b" pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-84hkx\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:49 crc kubenswrapper[4897]: I0228 13:21:49.438429 4897 status_manager.go:851] "Failed to get status for pod" podUID="dce17ac4-0687-4628-beb9-963332095590" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:51 crc kubenswrapper[4897]: E0228 13:21:51.300665 4897 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:51 crc kubenswrapper[4897]: E0228 13:21:51.301733 4897 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:51 crc kubenswrapper[4897]: E0228 13:21:51.302183 4897 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:51 crc kubenswrapper[4897]: E0228 13:21:51.302679 4897 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:51 crc kubenswrapper[4897]: E0228 13:21:51.303073 4897 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:51 crc kubenswrapper[4897]: I0228 13:21:51.303103 4897 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 28 13:21:51 crc kubenswrapper[4897]: E0228 13:21:51.303424 4897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="200ms" Feb 28 13:21:51 crc kubenswrapper[4897]: E0228 13:21:51.504600 4897 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="400ms" Feb 28 13:21:51 crc kubenswrapper[4897]: E0228 13:21:51.905664 4897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="800ms" Feb 28 13:21:52 crc kubenswrapper[4897]: E0228 13:21:52.706995 4897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="1.6s" Feb 28 13:21:54 crc kubenswrapper[4897]: E0228 13:21:54.308080 4897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="3.2s" Feb 28 13:21:56 crc kubenswrapper[4897]: I0228 13:21:56.461108 4897 status_manager.go:851] "Failed to get status for pod" podUID="1acb2f9f-f650-4f19-965e-48ba5a1ddac2" pod="openshift-marketplace/redhat-operators-wj92z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wj92z\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:56 crc kubenswrapper[4897]: I0228 13:21:56.462076 4897 status_manager.go:851] "Failed to get status for pod" podUID="c8e82c23-54f4-43a4-904b-4f90348580ac" pod="openshift-marketplace/community-operators-sv5dr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-sv5dr\": dial tcp 38.102.83.164:6443: connect: connection 
refused" Feb 28 13:21:56 crc kubenswrapper[4897]: I0228 13:21:56.462856 4897 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:56 crc kubenswrapper[4897]: I0228 13:21:56.463416 4897 status_manager.go:851] "Failed to get status for pod" podUID="49d0a669-bb05-4da5-9e58-789b58c0797b" pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-84hkx\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:56 crc kubenswrapper[4897]: I0228 13:21:56.463913 4897 status_manager.go:851] "Failed to get status for pod" podUID="dce17ac4-0687-4628-beb9-963332095590" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:57 crc kubenswrapper[4897]: E0228 13:21:57.040160 4897 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.164:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18986bbadf9783e5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 13:21:43.729628133 +0000 UTC m=+317.971948790,LastTimestamp:2026-02-28 13:21:43.729628133 +0000 UTC m=+317.971948790,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 13:21:57 crc kubenswrapper[4897]: I0228 13:21:57.456220 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 13:21:57 crc kubenswrapper[4897]: I0228 13:21:57.457504 4897 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:57 crc kubenswrapper[4897]: I0228 13:21:57.458168 4897 status_manager.go:851] "Failed to get status for pod" podUID="49d0a669-bb05-4da5-9e58-789b58c0797b" pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-84hkx\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:57 crc kubenswrapper[4897]: I0228 13:21:57.458948 4897 status_manager.go:851] "Failed to get status for pod" podUID="dce17ac4-0687-4628-beb9-963332095590" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:57 crc kubenswrapper[4897]: I0228 13:21:57.459652 4897 status_manager.go:851] 
"Failed to get status for pod" podUID="1acb2f9f-f650-4f19-965e-48ba5a1ddac2" pod="openshift-marketplace/redhat-operators-wj92z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wj92z\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:57 crc kubenswrapper[4897]: I0228 13:21:57.460148 4897 status_manager.go:851] "Failed to get status for pod" podUID="c8e82c23-54f4-43a4-904b-4f90348580ac" pod="openshift-marketplace/community-operators-sv5dr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-sv5dr\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:57 crc kubenswrapper[4897]: I0228 13:21:57.477770 4897 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="38c76969-d16d-46f5-b96a-922ebfb0a5da" Feb 28 13:21:57 crc kubenswrapper[4897]: I0228 13:21:57.477813 4897 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="38c76969-d16d-46f5-b96a-922ebfb0a5da" Feb 28 13:21:57 crc kubenswrapper[4897]: E0228 13:21:57.478376 4897 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 13:21:57 crc kubenswrapper[4897]: I0228 13:21:57.479114 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 13:21:57 crc kubenswrapper[4897]: W0228 13:21:57.508916 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-c931c51eff2f31097884130784d9da16a3fb6c2d1d5ebc89a480758cf4dbf5c6 WatchSource:0}: Error finding container c931c51eff2f31097884130784d9da16a3fb6c2d1d5ebc89a480758cf4dbf5c6: Status 404 returned error can't find the container with id c931c51eff2f31097884130784d9da16a3fb6c2d1d5ebc89a480758cf4dbf5c6 Feb 28 13:21:57 crc kubenswrapper[4897]: E0228 13:21:57.509367 4897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="6.4s" Feb 28 13:21:58 crc kubenswrapper[4897]: I0228 13:21:58.198219 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"1b1acf96cffeabaa695740caa587e302e0ab1aa4dacb4382e318f1106a02715d"} Feb 28 13:21:58 crc kubenswrapper[4897]: I0228 13:21:58.199075 4897 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="38c76969-d16d-46f5-b96a-922ebfb0a5da" Feb 28 13:21:58 crc kubenswrapper[4897]: I0228 13:21:58.199100 4897 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="38c76969-d16d-46f5-b96a-922ebfb0a5da" Feb 28 13:21:58 crc kubenswrapper[4897]: E0228 13:21:58.199660 4897 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 13:21:58 crc kubenswrapper[4897]: I0228 13:21:58.199690 4897 status_manager.go:851] "Failed to get status for pod" podUID="dce17ac4-0687-4628-beb9-963332095590" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:58 crc kubenswrapper[4897]: I0228 13:21:58.200182 4897 status_manager.go:851] "Failed to get status for pod" podUID="1acb2f9f-f650-4f19-965e-48ba5a1ddac2" pod="openshift-marketplace/redhat-operators-wj92z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wj92z\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:58 crc kubenswrapper[4897]: I0228 13:21:58.198160 4897 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="1b1acf96cffeabaa695740caa587e302e0ab1aa4dacb4382e318f1106a02715d" exitCode=0 Feb 28 13:21:58 crc kubenswrapper[4897]: I0228 13:21:58.200567 4897 status_manager.go:851] "Failed to get status for pod" podUID="c8e82c23-54f4-43a4-904b-4f90348580ac" pod="openshift-marketplace/community-operators-sv5dr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-sv5dr\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:58 crc kubenswrapper[4897]: I0228 13:21:58.200600 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c931c51eff2f31097884130784d9da16a3fb6c2d1d5ebc89a480758cf4dbf5c6"} Feb 28 13:21:58 crc kubenswrapper[4897]: I0228 13:21:58.201037 4897 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:58 crc kubenswrapper[4897]: I0228 13:21:58.201586 4897 status_manager.go:851] "Failed to get status for pod" podUID="49d0a669-bb05-4da5-9e58-789b58c0797b" pod="openshift-authentication/oauth-openshift-558db77b4-84hkx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-84hkx\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 28 13:21:59 crc kubenswrapper[4897]: I0228 13:21:59.213976 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"407f286f1fe917472eeec1947e58925267f3a1ea055536ed3942be1d34a0ee0c"} Feb 28 13:21:59 crc kubenswrapper[4897]: I0228 13:21:59.214330 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"273d4fbaa1d6b42818d44631bda9a4cd53cf9cca18f3599cddd2b4b1eead7e40"} Feb 28 13:21:59 crc kubenswrapper[4897]: I0228 13:21:59.214342 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"096197082b7af927057698fa7d036a4fa8603c07771d3c25c871c264f0797630"} Feb 28 13:21:59 crc kubenswrapper[4897]: I0228 13:21:59.225378 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 28 13:21:59 crc kubenswrapper[4897]: I0228 13:21:59.228733 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 28 13:21:59 crc kubenswrapper[4897]: I0228 13:21:59.230969 4897 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="102c48fdb031f8a25054df3551368150b35145bb18117e8aa50758efa2490b79" exitCode=1 Feb 28 13:21:59 crc kubenswrapper[4897]: I0228 13:21:59.231016 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"102c48fdb031f8a25054df3551368150b35145bb18117e8aa50758efa2490b79"} Feb 28 13:21:59 crc kubenswrapper[4897]: I0228 13:21:59.231562 4897 scope.go:117] "RemoveContainer" containerID="102c48fdb031f8a25054df3551368150b35145bb18117e8aa50758efa2490b79" Feb 28 13:22:00 crc kubenswrapper[4897]: I0228 13:22:00.240059 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9d62e019ede9ef57efe0a3fab66d23d2ab86a3bfb4f18d016cc9332267627567"} Feb 28 13:22:00 crc kubenswrapper[4897]: I0228 13:22:00.240117 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"dc854d13ddea70d3f10289bf2188822bf352edbe5ca2e895b1691c74b4d06c6f"} Feb 28 13:22:00 crc kubenswrapper[4897]: I0228 13:22:00.240450 4897 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="38c76969-d16d-46f5-b96a-922ebfb0a5da" Feb 28 13:22:00 crc kubenswrapper[4897]: I0228 13:22:00.240476 4897 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="38c76969-d16d-46f5-b96a-922ebfb0a5da" Feb 28 13:22:00 crc kubenswrapper[4897]: 
I0228 13:22:00.243265 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 28 13:22:00 crc kubenswrapper[4897]: I0228 13:22:00.243843 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 28 13:22:00 crc kubenswrapper[4897]: I0228 13:22:00.243899 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cc058090e8845d8f3c26d576c322ea3b92a0e7e2a2f63c645199b2ad1b5b2a8f"} Feb 28 13:22:02 crc kubenswrapper[4897]: I0228 13:22:02.226408 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 13:22:02 crc kubenswrapper[4897]: I0228 13:22:02.243893 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 13:22:02 crc kubenswrapper[4897]: I0228 13:22:02.250033 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 13:22:02 crc kubenswrapper[4897]: I0228 13:22:02.479953 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 13:22:02 crc kubenswrapper[4897]: I0228 13:22:02.480010 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 13:22:02 crc kubenswrapper[4897]: I0228 13:22:02.489447 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 13:22:05 crc 
kubenswrapper[4897]: I0228 13:22:05.249590 4897 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 13:22:05 crc kubenswrapper[4897]: I0228 13:22:05.291922 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 13:22:05 crc kubenswrapper[4897]: I0228 13:22:05.292063 4897 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="38c76969-d16d-46f5-b96a-922ebfb0a5da" Feb 28 13:22:05 crc kubenswrapper[4897]: I0228 13:22:05.292103 4897 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="38c76969-d16d-46f5-b96a-922ebfb0a5da" Feb 28 13:22:05 crc kubenswrapper[4897]: I0228 13:22:05.299188 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 13:22:06 crc kubenswrapper[4897]: I0228 13:22:06.300511 4897 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="38c76969-d16d-46f5-b96a-922ebfb0a5da" Feb 28 13:22:06 crc kubenswrapper[4897]: I0228 13:22:06.301113 4897 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="38c76969-d16d-46f5-b96a-922ebfb0a5da" Feb 28 13:22:06 crc kubenswrapper[4897]: I0228 13:22:06.478610 4897 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="363aa4a4-c4d0-49f0-a347-8fdb52766d69" Feb 28 13:22:07 crc kubenswrapper[4897]: I0228 13:22:07.307788 4897 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="38c76969-d16d-46f5-b96a-922ebfb0a5da" Feb 28 13:22:07 crc kubenswrapper[4897]: I0228 13:22:07.309458 4897 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="38c76969-d16d-46f5-b96a-922ebfb0a5da" Feb 28 13:22:07 crc kubenswrapper[4897]: I0228 13:22:07.312958 4897 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="363aa4a4-c4d0-49f0-a347-8fdb52766d69" Feb 28 13:22:12 crc kubenswrapper[4897]: I0228 13:22:12.230868 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 13:22:14 crc kubenswrapper[4897]: I0228 13:22:14.785564 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 28 13:22:15 crc kubenswrapper[4897]: I0228 13:22:15.379475 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 28 13:22:15 crc kubenswrapper[4897]: I0228 13:22:15.526679 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 28 13:22:15 crc kubenswrapper[4897]: I0228 13:22:15.958396 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 28 13:22:16 crc kubenswrapper[4897]: I0228 13:22:16.212001 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 28 13:22:16 crc kubenswrapper[4897]: I0228 13:22:16.378608 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 28 13:22:16 crc kubenswrapper[4897]: I0228 13:22:16.400458 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 28 13:22:16 crc kubenswrapper[4897]: I0228 13:22:16.774518 4897 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 28 13:22:17 crc kubenswrapper[4897]: I0228 13:22:17.517880 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 28 13:22:17 crc kubenswrapper[4897]: I0228 13:22:17.529036 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 28 13:22:17 crc kubenswrapper[4897]: I0228 13:22:17.673565 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 28 13:22:17 crc kubenswrapper[4897]: I0228 13:22:17.747471 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 28 13:22:17 crc kubenswrapper[4897]: I0228 13:22:17.788853 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 28 13:22:17 crc kubenswrapper[4897]: I0228 13:22:17.919722 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 28 13:22:17 crc kubenswrapper[4897]: I0228 13:22:17.922301 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 28 13:22:18 crc kubenswrapper[4897]: I0228 13:22:18.024241 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 28 13:22:18 crc kubenswrapper[4897]: I0228 13:22:18.106993 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 28 13:22:18 crc kubenswrapper[4897]: I0228 13:22:18.127470 4897 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"service-ca" Feb 28 13:22:18 crc kubenswrapper[4897]: I0228 13:22:18.151928 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 28 13:22:18 crc kubenswrapper[4897]: I0228 13:22:18.172404 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 28 13:22:18 crc kubenswrapper[4897]: I0228 13:22:18.308547 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 28 13:22:18 crc kubenswrapper[4897]: I0228 13:22:18.351447 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 28 13:22:18 crc kubenswrapper[4897]: I0228 13:22:18.397023 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 28 13:22:18 crc kubenswrapper[4897]: I0228 13:22:18.400032 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 28 13:22:18 crc kubenswrapper[4897]: I0228 13:22:18.482590 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 28 13:22:18 crc kubenswrapper[4897]: I0228 13:22:18.494633 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 28 13:22:18 crc kubenswrapper[4897]: I0228 13:22:18.534061 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 28 13:22:18 crc kubenswrapper[4897]: I0228 13:22:18.715990 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 28 13:22:18 crc kubenswrapper[4897]: I0228 13:22:18.749344 4897 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 28 13:22:18 crc kubenswrapper[4897]: I0228 13:22:18.894683 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 28 13:22:18 crc kubenswrapper[4897]: I0228 13:22:18.902514 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 28 13:22:18 crc kubenswrapper[4897]: I0228 13:22:18.938673 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 28 13:22:18 crc kubenswrapper[4897]: I0228 13:22:18.949234 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 28 13:22:18 crc kubenswrapper[4897]: I0228 13:22:18.968180 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 28 13:22:19 crc kubenswrapper[4897]: I0228 13:22:19.018470 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 28 13:22:19 crc kubenswrapper[4897]: I0228 13:22:19.199380 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 28 13:22:19 crc kubenswrapper[4897]: I0228 13:22:19.250933 4897 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 28 13:22:19 crc kubenswrapper[4897]: I0228 13:22:19.283565 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 28 13:22:19 crc kubenswrapper[4897]: I0228 13:22:19.326877 4897 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 28 13:22:19 crc kubenswrapper[4897]: I0228 13:22:19.404335 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 28 13:22:19 crc kubenswrapper[4897]: I0228 13:22:19.426897 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 28 13:22:19 crc kubenswrapper[4897]: I0228 13:22:19.452059 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 28 13:22:19 crc kubenswrapper[4897]: I0228 13:22:19.456688 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 28 13:22:19 crc kubenswrapper[4897]: I0228 13:22:19.518725 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 28 13:22:19 crc kubenswrapper[4897]: I0228 13:22:19.651038 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 28 13:22:19 crc kubenswrapper[4897]: I0228 13:22:19.714215 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 28 13:22:19 crc kubenswrapper[4897]: I0228 13:22:19.772622 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 28 13:22:19 crc kubenswrapper[4897]: I0228 13:22:19.785642 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 28 13:22:19 crc kubenswrapper[4897]: I0228 13:22:19.810565 4897 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 28 13:22:19 crc kubenswrapper[4897]: I0228 13:22:19.845993 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 28 13:22:19 crc kubenswrapper[4897]: I0228 13:22:19.886290 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 28 13:22:19 crc kubenswrapper[4897]: I0228 13:22:19.930173 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 28 13:22:20 crc kubenswrapper[4897]: I0228 13:22:20.015253 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 28 13:22:20 crc kubenswrapper[4897]: I0228 13:22:20.061480 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 28 13:22:20 crc kubenswrapper[4897]: I0228 13:22:20.136458 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 28 13:22:20 crc kubenswrapper[4897]: I0228 13:22:20.172801 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 28 13:22:20 crc kubenswrapper[4897]: I0228 13:22:20.193700 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 28 13:22:20 crc kubenswrapper[4897]: I0228 13:22:20.246127 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 28 13:22:20 crc kubenswrapper[4897]: I0228 13:22:20.250557 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 28 13:22:20 crc 
kubenswrapper[4897]: I0228 13:22:20.307693 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 28 13:22:20 crc kubenswrapper[4897]: I0228 13:22:20.341047 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 28 13:22:20 crc kubenswrapper[4897]: I0228 13:22:20.370825 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 28 13:22:20 crc kubenswrapper[4897]: I0228 13:22:20.387037 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 28 13:22:20 crc kubenswrapper[4897]: I0228 13:22:20.479088 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 28 13:22:20 crc kubenswrapper[4897]: I0228 13:22:20.484783 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 28 13:22:20 crc kubenswrapper[4897]: I0228 13:22:20.510978 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 28 13:22:20 crc kubenswrapper[4897]: I0228 13:22:20.550355 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 28 13:22:20 crc kubenswrapper[4897]: I0228 13:22:20.562959 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 28 13:22:20 crc kubenswrapper[4897]: I0228 13:22:20.686548 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 28 13:22:20 crc kubenswrapper[4897]: I0228 13:22:20.833853 4897 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 28 13:22:20 crc kubenswrapper[4897]: I0228 13:22:20.848698 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 28 13:22:20 crc kubenswrapper[4897]: I0228 13:22:20.921147 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 28 13:22:21 crc kubenswrapper[4897]: I0228 13:22:21.013100 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 28 13:22:21 crc kubenswrapper[4897]: I0228 13:22:21.029555 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 28 13:22:21 crc kubenswrapper[4897]: I0228 13:22:21.037224 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 28 13:22:21 crc kubenswrapper[4897]: I0228 13:22:21.078261 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 28 13:22:21 crc kubenswrapper[4897]: I0228 13:22:21.117177 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 28 13:22:21 crc kubenswrapper[4897]: I0228 13:22:21.183790 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 28 13:22:21 crc kubenswrapper[4897]: I0228 13:22:21.248024 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 28 13:22:21 crc kubenswrapper[4897]: I0228 13:22:21.289822 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 28 13:22:21 crc kubenswrapper[4897]: I0228 13:22:21.334390 4897 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 28 13:22:21 crc kubenswrapper[4897]: I0228 13:22:21.408304 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 28 13:22:21 crc kubenswrapper[4897]: I0228 13:22:21.639025 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 28 13:22:21 crc kubenswrapper[4897]: I0228 13:22:21.736440 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 28 13:22:21 crc kubenswrapper[4897]: I0228 13:22:21.738027 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 28 13:22:21 crc kubenswrapper[4897]: I0228 13:22:21.832941 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 28 13:22:21 crc kubenswrapper[4897]: I0228 13:22:21.833007 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 28 13:22:21 crc kubenswrapper[4897]: I0228 13:22:21.850608 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 28 13:22:21 crc kubenswrapper[4897]: I0228 13:22:21.972378 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 28 13:22:22 crc kubenswrapper[4897]: I0228 13:22:22.050845 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 28 13:22:22 crc kubenswrapper[4897]: I0228 13:22:22.064633 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 28 13:22:22 crc 
kubenswrapper[4897]: I0228 13:22:22.075219 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 28 13:22:22 crc kubenswrapper[4897]: I0228 13:22:22.095533 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 28 13:22:22 crc kubenswrapper[4897]: I0228 13:22:22.098793 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 28 13:22:22 crc kubenswrapper[4897]: I0228 13:22:22.151208 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 28 13:22:22 crc kubenswrapper[4897]: I0228 13:22:22.211773 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 28 13:22:22 crc kubenswrapper[4897]: I0228 13:22:22.214933 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 28 13:22:22 crc kubenswrapper[4897]: I0228 13:22:22.268084 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 28 13:22:22 crc kubenswrapper[4897]: I0228 13:22:22.304670 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 28 13:22:22 crc kubenswrapper[4897]: I0228 13:22:22.346517 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 28 13:22:22 crc kubenswrapper[4897]: I0228 13:22:22.377989 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 28 13:22:22 crc kubenswrapper[4897]: I0228 13:22:22.452088 4897 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"kube-root-ca.crt" Feb 28 13:22:22 crc kubenswrapper[4897]: I0228 13:22:22.487765 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 28 13:22:22 crc kubenswrapper[4897]: I0228 13:22:22.577129 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 28 13:22:22 crc kubenswrapper[4897]: I0228 13:22:22.579420 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 28 13:22:22 crc kubenswrapper[4897]: I0228 13:22:22.593948 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 28 13:22:22 crc kubenswrapper[4897]: I0228 13:22:22.596795 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 28 13:22:22 crc kubenswrapper[4897]: I0228 13:22:22.633938 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 28 13:22:22 crc kubenswrapper[4897]: I0228 13:22:22.794176 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 28 13:22:22 crc kubenswrapper[4897]: I0228 13:22:22.906297 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 28 13:22:22 crc kubenswrapper[4897]: I0228 13:22:22.943911 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 28 13:22:22 crc kubenswrapper[4897]: I0228 13:22:22.970006 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 28 13:22:22 crc kubenswrapper[4897]: I0228 13:22:22.995881 4897 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 28 13:22:23 crc kubenswrapper[4897]: I0228 13:22:23.025369 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 28 13:22:23 crc kubenswrapper[4897]: I0228 13:22:23.065890 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 28 13:22:23 crc kubenswrapper[4897]: I0228 13:22:23.077604 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 28 13:22:23 crc kubenswrapper[4897]: I0228 13:22:23.194118 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 28 13:22:23 crc kubenswrapper[4897]: I0228 13:22:23.310670 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 28 13:22:23 crc kubenswrapper[4897]: I0228 13:22:23.375516 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 28 13:22:23 crc kubenswrapper[4897]: I0228 13:22:23.388561 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 28 13:22:23 crc kubenswrapper[4897]: I0228 13:22:23.421719 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 28 13:22:23 crc kubenswrapper[4897]: I0228 13:22:23.528045 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 28 13:22:23 crc kubenswrapper[4897]: I0228 13:22:23.533817 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 28 13:22:23 crc kubenswrapper[4897]: I0228 
13:22:23.569653 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 28 13:22:23 crc kubenswrapper[4897]: I0228 13:22:23.573256 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 28 13:22:23 crc kubenswrapper[4897]: I0228 13:22:23.598538 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 28 13:22:23 crc kubenswrapper[4897]: I0228 13:22:23.629127 4897 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 28 13:22:23 crc kubenswrapper[4897]: I0228 13:22:23.648513 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 28 13:22:23 crc kubenswrapper[4897]: I0228 13:22:23.660755 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 28 13:22:23 crc kubenswrapper[4897]: I0228 13:22:23.691102 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 28 13:22:23 crc kubenswrapper[4897]: I0228 13:22:23.813463 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 28 13:22:23 crc kubenswrapper[4897]: I0228 13:22:23.853920 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 28 13:22:23 crc kubenswrapper[4897]: I0228 13:22:23.856847 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 28 13:22:23 crc kubenswrapper[4897]: I0228 13:22:23.897341 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 28 13:22:23 crc kubenswrapper[4897]: I0228 13:22:23.912258 
4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 28 13:22:23 crc kubenswrapper[4897]: I0228 13:22:23.939241 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 28 13:22:23 crc kubenswrapper[4897]: I0228 13:22:23.975610 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 28 13:22:24 crc kubenswrapper[4897]: I0228 13:22:24.038986 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 28 13:22:24 crc kubenswrapper[4897]: I0228 13:22:24.078264 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 28 13:22:24 crc kubenswrapper[4897]: I0228 13:22:24.108796 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 28 13:22:24 crc kubenswrapper[4897]: I0228 13:22:24.184976 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 28 13:22:24 crc kubenswrapper[4897]: I0228 13:22:24.194022 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 28 13:22:24 crc kubenswrapper[4897]: I0228 13:22:24.373917 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 28 13:22:24 crc kubenswrapper[4897]: I0228 13:22:24.404591 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 28 13:22:24 crc kubenswrapper[4897]: I0228 13:22:24.742489 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 28 13:22:24 crc 
kubenswrapper[4897]: I0228 13:22:24.756841 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 28 13:22:24 crc kubenswrapper[4897]: I0228 13:22:24.767056 4897 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 28 13:22:24 crc kubenswrapper[4897]: I0228 13:22:24.783408 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 28 13:22:24 crc kubenswrapper[4897]: I0228 13:22:24.783739 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 28 13:22:24 crc kubenswrapper[4897]: I0228 13:22:24.806736 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 28 13:22:24 crc kubenswrapper[4897]: I0228 13:22:24.848053 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 28 13:22:24 crc kubenswrapper[4897]: I0228 13:22:24.892227 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 28 13:22:24 crc kubenswrapper[4897]: I0228 13:22:24.934113 4897 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 28 13:22:24 crc kubenswrapper[4897]: I0228 13:22:24.934798 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=41.934778947 podStartE2EDuration="41.934778947s" podCreationTimestamp="2026-02-28 13:21:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:22:05.12318711 +0000 UTC m=+339.365507767" 
watchObservedRunningTime="2026-02-28 13:22:24.934778947 +0000 UTC m=+359.177099614" Feb 28 13:22:24 crc kubenswrapper[4897]: I0228 13:22:24.936028 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 28 13:22:24 crc kubenswrapper[4897]: I0228 13:22:24.939416 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-84hkx"] Feb 28 13:22:24 crc kubenswrapper[4897]: I0228 13:22:24.939496 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 28 13:22:24 crc kubenswrapper[4897]: I0228 13:22:24.943812 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 13:22:24 crc kubenswrapper[4897]: I0228 13:22:24.965353 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=19.965286647 podStartE2EDuration="19.965286647s" podCreationTimestamp="2026-02-28 13:22:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:22:24.959745619 +0000 UTC m=+359.202066356" watchObservedRunningTime="2026-02-28 13:22:24.965286647 +0000 UTC m=+359.207607354" Feb 28 13:22:25 crc kubenswrapper[4897]: I0228 13:22:25.075172 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 28 13:22:25 crc kubenswrapper[4897]: I0228 13:22:25.075300 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 28 13:22:25 crc kubenswrapper[4897]: I0228 13:22:25.155762 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 28 13:22:25 crc 
kubenswrapper[4897]: I0228 13:22:25.191044 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 28 13:22:25 crc kubenswrapper[4897]: I0228 13:22:25.237536 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 28 13:22:25 crc kubenswrapper[4897]: I0228 13:22:25.311403 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 28 13:22:25 crc kubenswrapper[4897]: I0228 13:22:25.397729 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 28 13:22:25 crc kubenswrapper[4897]: I0228 13:22:25.403760 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 28 13:22:25 crc kubenswrapper[4897]: I0228 13:22:25.432086 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 28 13:22:25 crc kubenswrapper[4897]: I0228 13:22:25.481462 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 28 13:22:25 crc kubenswrapper[4897]: I0228 13:22:25.503156 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 28 13:22:25 crc kubenswrapper[4897]: I0228 13:22:25.516760 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 28 13:22:25 crc kubenswrapper[4897]: I0228 13:22:25.530654 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 28 13:22:25 crc kubenswrapper[4897]: I0228 13:22:25.575456 4897 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 28 13:22:25 crc kubenswrapper[4897]: I0228 13:22:25.790366 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 28 13:22:25 crc kubenswrapper[4897]: I0228 13:22:25.908715 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 28 13:22:25 crc kubenswrapper[4897]: I0228 13:22:25.937711 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 28 13:22:26 crc kubenswrapper[4897]: I0228 13:22:26.001778 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 28 13:22:26 crc kubenswrapper[4897]: I0228 13:22:26.090589 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 28 13:22:26 crc kubenswrapper[4897]: I0228 13:22:26.191656 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 28 13:22:26 crc kubenswrapper[4897]: I0228 13:22:26.335537 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 28 13:22:26 crc kubenswrapper[4897]: I0228 13:22:26.342216 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 28 13:22:26 crc kubenswrapper[4897]: I0228 13:22:26.454785 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 28 13:22:26 crc kubenswrapper[4897]: I0228 13:22:26.478465 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49d0a669-bb05-4da5-9e58-789b58c0797b" 
path="/var/lib/kubelet/pods/49d0a669-bb05-4da5-9e58-789b58c0797b/volumes" Feb 28 13:22:26 crc kubenswrapper[4897]: I0228 13:22:26.508599 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 28 13:22:26 crc kubenswrapper[4897]: I0228 13:22:26.543883 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 28 13:22:26 crc kubenswrapper[4897]: I0228 13:22:26.608447 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 28 13:22:26 crc kubenswrapper[4897]: I0228 13:22:26.658502 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 28 13:22:26 crc kubenswrapper[4897]: I0228 13:22:26.697680 4897 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 28 13:22:26 crc kubenswrapper[4897]: I0228 13:22:26.746841 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 28 13:22:26 crc kubenswrapper[4897]: I0228 13:22:26.751300 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 28 13:22:26 crc kubenswrapper[4897]: I0228 13:22:26.858770 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 28 13:22:26 crc kubenswrapper[4897]: I0228 13:22:26.860848 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 28 13:22:26 crc kubenswrapper[4897]: I0228 13:22:26.890842 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 28 13:22:26 crc kubenswrapper[4897]: I0228 13:22:26.914097 4897 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 28 13:22:27 crc kubenswrapper[4897]: I0228 13:22:27.110776 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 28 13:22:27 crc kubenswrapper[4897]: I0228 13:22:27.120612 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 28 13:22:27 crc kubenswrapper[4897]: I0228 13:22:27.132759 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 28 13:22:27 crc kubenswrapper[4897]: I0228 13:22:27.195727 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 28 13:22:27 crc kubenswrapper[4897]: I0228 13:22:27.203210 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 28 13:22:27 crc kubenswrapper[4897]: I0228 13:22:27.408937 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 28 13:22:27 crc kubenswrapper[4897]: I0228 13:22:27.424175 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 28 13:22:27 crc kubenswrapper[4897]: I0228 13:22:27.507548 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 28 13:22:27 crc kubenswrapper[4897]: I0228 13:22:27.574180 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 28 13:22:27 crc kubenswrapper[4897]: I0228 13:22:27.654937 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 28 
13:22:27 crc kubenswrapper[4897]: I0228 13:22:27.766222 4897 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 28 13:22:27 crc kubenswrapper[4897]: I0228 13:22:27.766562 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://8669e956489905e0865a3b7d2d4369dcd62f02c3a617f49df1c47934fe1c803c" gracePeriod=5 Feb 28 13:22:27 crc kubenswrapper[4897]: I0228 13:22:27.793530 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 28 13:22:27 crc kubenswrapper[4897]: I0228 13:22:27.804257 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 28 13:22:27 crc kubenswrapper[4897]: I0228 13:22:27.910377 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 28 13:22:28 crc kubenswrapper[4897]: I0228 13:22:28.087825 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 28 13:22:28 crc kubenswrapper[4897]: I0228 13:22:28.087975 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 28 13:22:28 crc kubenswrapper[4897]: I0228 13:22:28.117955 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 28 13:22:28 crc kubenswrapper[4897]: I0228 13:22:28.339863 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 28 13:22:28 crc kubenswrapper[4897]: I0228 13:22:28.446111 4897 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 28 13:22:28 crc kubenswrapper[4897]: I0228 13:22:28.484344 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 28 13:22:28 crc kubenswrapper[4897]: I0228 13:22:28.487592 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 28 13:22:28 crc kubenswrapper[4897]: I0228 13:22:28.614348 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 28 13:22:28 crc kubenswrapper[4897]: I0228 13:22:28.633234 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 28 13:22:28 crc kubenswrapper[4897]: I0228 13:22:28.636622 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 28 13:22:28 crc kubenswrapper[4897]: I0228 13:22:28.688115 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 28 13:22:28 crc kubenswrapper[4897]: I0228 13:22:28.730808 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 28 13:22:28 crc kubenswrapper[4897]: I0228 13:22:28.756740 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 28 13:22:28 crc kubenswrapper[4897]: I0228 13:22:28.866851 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 28 13:22:28 crc kubenswrapper[4897]: I0228 13:22:28.983270 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 28 13:22:29 crc kubenswrapper[4897]: I0228 13:22:29.175466 4897 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 28 13:22:29 crc kubenswrapper[4897]: I0228 13:22:29.197261 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 28 13:22:29 crc kubenswrapper[4897]: I0228 13:22:29.217160 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 28 13:22:29 crc kubenswrapper[4897]: I0228 13:22:29.226162 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 28 13:22:29 crc kubenswrapper[4897]: I0228 13:22:29.227409 4897 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 28 13:22:29 crc kubenswrapper[4897]: I0228 13:22:29.294769 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 28 13:22:29 crc kubenswrapper[4897]: I0228 13:22:29.382110 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 28 13:22:29 crc kubenswrapper[4897]: I0228 13:22:29.522874 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 28 13:22:29 crc kubenswrapper[4897]: I0228 13:22:29.533686 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 28 13:22:29 crc kubenswrapper[4897]: I0228 13:22:29.535532 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 28 13:22:29 crc kubenswrapper[4897]: I0228 13:22:29.598245 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 28 13:22:29 crc kubenswrapper[4897]: I0228 13:22:29.818624 4897 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 28 13:22:29 crc kubenswrapper[4897]: I0228 13:22:29.963670 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 28 13:22:30 crc kubenswrapper[4897]: I0228 13:22:30.048418 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 28 13:22:30 crc kubenswrapper[4897]: I0228 13:22:30.372362 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 28 13:22:30 crc kubenswrapper[4897]: I0228 13:22:30.656487 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 28 13:22:30 crc kubenswrapper[4897]: I0228 13:22:30.834085 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 28 13:22:30 crc kubenswrapper[4897]: I0228 13:22:30.850747 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 28 13:22:31 crc kubenswrapper[4897]: I0228 13:22:31.025435 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 28 13:22:31 crc kubenswrapper[4897]: I0228 13:22:31.524830 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 28 13:22:31 crc kubenswrapper[4897]: I0228 13:22:31.654942 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538082-lxmsk"] Feb 28 13:22:31 crc kubenswrapper[4897]: E0228 13:22:31.655208 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
containerName="startup-monitor" Feb 28 13:22:31 crc kubenswrapper[4897]: I0228 13:22:31.655223 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 28 13:22:31 crc kubenswrapper[4897]: E0228 13:22:31.655237 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49d0a669-bb05-4da5-9e58-789b58c0797b" containerName="oauth-openshift" Feb 28 13:22:31 crc kubenswrapper[4897]: I0228 13:22:31.655245 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="49d0a669-bb05-4da5-9e58-789b58c0797b" containerName="oauth-openshift" Feb 28 13:22:31 crc kubenswrapper[4897]: E0228 13:22:31.655257 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dce17ac4-0687-4628-beb9-963332095590" containerName="installer" Feb 28 13:22:31 crc kubenswrapper[4897]: I0228 13:22:31.655266 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="dce17ac4-0687-4628-beb9-963332095590" containerName="installer" Feb 28 13:22:31 crc kubenswrapper[4897]: I0228 13:22:31.655434 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 28 13:22:31 crc kubenswrapper[4897]: I0228 13:22:31.655448 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="49d0a669-bb05-4da5-9e58-789b58c0797b" containerName="oauth-openshift" Feb 28 13:22:31 crc kubenswrapper[4897]: I0228 13:22:31.655467 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="dce17ac4-0687-4628-beb9-963332095590" containerName="installer" Feb 28 13:22:31 crc kubenswrapper[4897]: I0228 13:22:31.655906 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538082-lxmsk" Feb 28 13:22:31 crc kubenswrapper[4897]: I0228 13:22:31.657997 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 13:22:31 crc kubenswrapper[4897]: I0228 13:22:31.658699 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 13:22:31 crc kubenswrapper[4897]: I0228 13:22:31.659645 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 13:22:31 crc kubenswrapper[4897]: I0228 13:22:31.669259 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538082-lxmsk"] Feb 28 13:22:31 crc kubenswrapper[4897]: I0228 13:22:31.814335 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffqzw\" (UniqueName: \"kubernetes.io/projected/a3c2c910-1291-4a68-9fb2-85055cd61b9f-kube-api-access-ffqzw\") pod \"auto-csr-approver-29538082-lxmsk\" (UID: \"a3c2c910-1291-4a68-9fb2-85055cd61b9f\") " pod="openshift-infra/auto-csr-approver-29538082-lxmsk" Feb 28 13:22:31 crc kubenswrapper[4897]: I0228 13:22:31.915863 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffqzw\" (UniqueName: \"kubernetes.io/projected/a3c2c910-1291-4a68-9fb2-85055cd61b9f-kube-api-access-ffqzw\") pod \"auto-csr-approver-29538082-lxmsk\" (UID: \"a3c2c910-1291-4a68-9fb2-85055cd61b9f\") " pod="openshift-infra/auto-csr-approver-29538082-lxmsk" Feb 28 13:22:31 crc kubenswrapper[4897]: I0228 13:22:31.950602 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffqzw\" (UniqueName: \"kubernetes.io/projected/a3c2c910-1291-4a68-9fb2-85055cd61b9f-kube-api-access-ffqzw\") pod \"auto-csr-approver-29538082-lxmsk\" (UID: \"a3c2c910-1291-4a68-9fb2-85055cd61b9f\") " 
pod="openshift-infra/auto-csr-approver-29538082-lxmsk" Feb 28 13:22:31 crc kubenswrapper[4897]: I0228 13:22:31.988609 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538082-lxmsk" Feb 28 13:22:32 crc kubenswrapper[4897]: I0228 13:22:32.474777 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538082-lxmsk"] Feb 28 13:22:32 crc kubenswrapper[4897]: W0228 13:22:32.480992 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3c2c910_1291_4a68_9fb2_85055cd61b9f.slice/crio-9ac6ad2e8c3c020073c0b07076713df7551118e5be558dface4c21a6dd06b2da WatchSource:0}: Error finding container 9ac6ad2e8c3c020073c0b07076713df7551118e5be558dface4c21a6dd06b2da: Status 404 returned error can't find the container with id 9ac6ad2e8c3c020073c0b07076713df7551118e5be558dface4c21a6dd06b2da Feb 28 13:22:32 crc kubenswrapper[4897]: I0228 13:22:32.522872 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538082-lxmsk" event={"ID":"a3c2c910-1291-4a68-9fb2-85055cd61b9f","Type":"ContainerStarted","Data":"9ac6ad2e8c3c020073c0b07076713df7551118e5be558dface4c21a6dd06b2da"} Feb 28 13:22:32 crc kubenswrapper[4897]: I0228 13:22:32.905729 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 28 13:22:33 crc kubenswrapper[4897]: I0228 13:22:33.384971 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 28 13:22:33 crc kubenswrapper[4897]: I0228 13:22:33.385376 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 13:22:33 crc kubenswrapper[4897]: I0228 13:22:33.533761 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 28 13:22:33 crc kubenswrapper[4897]: I0228 13:22:33.533855 4897 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="8669e956489905e0865a3b7d2d4369dcd62f02c3a617f49df1c47934fe1c803c" exitCode=137 Feb 28 13:22:33 crc kubenswrapper[4897]: I0228 13:22:33.533938 4897 scope.go:117] "RemoveContainer" containerID="8669e956489905e0865a3b7d2d4369dcd62f02c3a617f49df1c47934fe1c803c" Feb 28 13:22:33 crc kubenswrapper[4897]: I0228 13:22:33.533942 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 13:22:33 crc kubenswrapper[4897]: I0228 13:22:33.539054 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 28 13:22:33 crc kubenswrapper[4897]: I0228 13:22:33.539107 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 28 13:22:33 crc kubenswrapper[4897]: I0228 13:22:33.539131 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 28 13:22:33 crc 
kubenswrapper[4897]: I0228 13:22:33.539165 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 28 13:22:33 crc kubenswrapper[4897]: I0228 13:22:33.539239 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 28 13:22:33 crc kubenswrapper[4897]: I0228 13:22:33.539150 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 13:22:33 crc kubenswrapper[4897]: I0228 13:22:33.539175 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 13:22:33 crc kubenswrapper[4897]: I0228 13:22:33.539241 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 13:22:33 crc kubenswrapper[4897]: I0228 13:22:33.539388 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 13:22:33 crc kubenswrapper[4897]: I0228 13:22:33.540609 4897 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 28 13:22:33 crc kubenswrapper[4897]: I0228 13:22:33.540659 4897 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 28 13:22:33 crc kubenswrapper[4897]: I0228 13:22:33.540710 4897 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 28 13:22:33 crc kubenswrapper[4897]: I0228 13:22:33.540730 4897 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 28 13:22:33 crc kubenswrapper[4897]: I0228 13:22:33.552919 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 13:22:33 crc kubenswrapper[4897]: I0228 13:22:33.558076 4897 scope.go:117] "RemoveContainer" containerID="8669e956489905e0865a3b7d2d4369dcd62f02c3a617f49df1c47934fe1c803c" Feb 28 13:22:33 crc kubenswrapper[4897]: E0228 13:22:33.558714 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8669e956489905e0865a3b7d2d4369dcd62f02c3a617f49df1c47934fe1c803c\": container with ID starting with 8669e956489905e0865a3b7d2d4369dcd62f02c3a617f49df1c47934fe1c803c not found: ID does not exist" containerID="8669e956489905e0865a3b7d2d4369dcd62f02c3a617f49df1c47934fe1c803c" Feb 28 13:22:33 crc kubenswrapper[4897]: I0228 13:22:33.558789 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8669e956489905e0865a3b7d2d4369dcd62f02c3a617f49df1c47934fe1c803c"} err="failed to get container status \"8669e956489905e0865a3b7d2d4369dcd62f02c3a617f49df1c47934fe1c803c\": rpc error: code = NotFound desc = could not find container \"8669e956489905e0865a3b7d2d4369dcd62f02c3a617f49df1c47934fe1c803c\": container with ID starting with 8669e956489905e0865a3b7d2d4369dcd62f02c3a617f49df1c47934fe1c803c not found: ID does not exist" Feb 28 13:22:33 crc kubenswrapper[4897]: I0228 13:22:33.642136 4897 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 28 13:22:34 crc kubenswrapper[4897]: I0228 13:22:34.469054 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 28 13:22:34 crc kubenswrapper[4897]: I0228 13:22:34.469991 4897 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Feb 28 
13:22:34 crc kubenswrapper[4897]: I0228 13:22:34.488696 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 28 13:22:34 crc kubenswrapper[4897]: I0228 13:22:34.488756 4897 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="d816f036-1a8c-45c0-918e-7765f2bd3950" Feb 28 13:22:34 crc kubenswrapper[4897]: I0228 13:22:34.497646 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 28 13:22:34 crc kubenswrapper[4897]: I0228 13:22:34.497706 4897 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="d816f036-1a8c-45c0-918e-7765f2bd3950" Feb 28 13:22:34 crc kubenswrapper[4897]: I0228 13:22:34.543664 4897 generic.go:334] "Generic (PLEG): container finished" podID="a3c2c910-1291-4a68-9fb2-85055cd61b9f" containerID="e596477865a8cbb823e491318c64006a0e6362865e601e66cf338c01e46f7613" exitCode=0 Feb 28 13:22:34 crc kubenswrapper[4897]: I0228 13:22:34.543738 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538082-lxmsk" event={"ID":"a3c2c910-1291-4a68-9fb2-85055cd61b9f","Type":"ContainerDied","Data":"e596477865a8cbb823e491318c64006a0e6362865e601e66cf338c01e46f7613"} Feb 28 13:22:35 crc kubenswrapper[4897]: I0228 13:22:35.907438 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538082-lxmsk" Feb 28 13:22:35 crc kubenswrapper[4897]: I0228 13:22:35.974214 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffqzw\" (UniqueName: \"kubernetes.io/projected/a3c2c910-1291-4a68-9fb2-85055cd61b9f-kube-api-access-ffqzw\") pod \"a3c2c910-1291-4a68-9fb2-85055cd61b9f\" (UID: \"a3c2c910-1291-4a68-9fb2-85055cd61b9f\") " Feb 28 13:22:35 crc kubenswrapper[4897]: I0228 13:22:35.982477 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3c2c910-1291-4a68-9fb2-85055cd61b9f-kube-api-access-ffqzw" (OuterVolumeSpecName: "kube-api-access-ffqzw") pod "a3c2c910-1291-4a68-9fb2-85055cd61b9f" (UID: "a3c2c910-1291-4a68-9fb2-85055cd61b9f"). InnerVolumeSpecName "kube-api-access-ffqzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.075790 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffqzw\" (UniqueName: \"kubernetes.io/projected/a3c2c910-1291-4a68-9fb2-85055cd61b9f-kube-api-access-ffqzw\") on node \"crc\" DevicePath \"\"" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.397124 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-76cf98b679-p49s5"] Feb 28 13:22:36 crc kubenswrapper[4897]: E0228 13:22:36.397451 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3c2c910-1291-4a68-9fb2-85055cd61b9f" containerName="oc" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.397471 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3c2c910-1291-4a68-9fb2-85055cd61b9f" containerName="oc" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.397686 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3c2c910-1291-4a68-9fb2-85055cd61b9f" containerName="oc" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.398259 
4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.407469 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.409397 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.409742 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.410082 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.410376 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.411019 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.411802 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.411868 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.411881 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.413113 4897 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-session" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.413421 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.413745 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.421185 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-76cf98b679-p49s5"] Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.426984 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.433137 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.442197 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.480865 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935-v4-0-config-system-session\") pod \"oauth-openshift-76cf98b679-p49s5\" (UID: \"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935\") " pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.480928 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-76cf98b679-p49s5\" (UID: \"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935\") " pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.480966 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935-v4-0-config-system-serving-cert\") pod \"oauth-openshift-76cf98b679-p49s5\" (UID: \"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935\") " pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.481006 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-76cf98b679-p49s5\" (UID: \"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935\") " pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.481083 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935-v4-0-config-user-template-error\") pod \"oauth-openshift-76cf98b679-p49s5\" (UID: \"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935\") " pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.481122 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-76cf98b679-p49s5\" (UID: \"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935\") " 
pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.481190 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-76cf98b679-p49s5\" (UID: \"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935\") " pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.481244 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935-v4-0-config-system-service-ca\") pod \"oauth-openshift-76cf98b679-p49s5\" (UID: \"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935\") " pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.481283 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935-audit-policies\") pod \"oauth-openshift-76cf98b679-p49s5\" (UID: \"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935\") " pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.481333 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935-v4-0-config-user-template-login\") pod \"oauth-openshift-76cf98b679-p49s5\" (UID: \"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935\") " pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.481397 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935-v4-0-config-system-router-certs\") pod \"oauth-openshift-76cf98b679-p49s5\" (UID: \"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935\") " pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.481445 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935-audit-dir\") pod \"oauth-openshift-76cf98b679-p49s5\" (UID: \"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935\") " pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.481476 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-76cf98b679-p49s5\" (UID: \"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935\") " pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.481506 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk9pn\" (UniqueName: \"kubernetes.io/projected/8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935-kube-api-access-gk9pn\") pod \"oauth-openshift-76cf98b679-p49s5\" (UID: \"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935\") " pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.559921 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538082-lxmsk" 
event={"ID":"a3c2c910-1291-4a68-9fb2-85055cd61b9f","Type":"ContainerDied","Data":"9ac6ad2e8c3c020073c0b07076713df7551118e5be558dface4c21a6dd06b2da"} Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.559973 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ac6ad2e8c3c020073c0b07076713df7551118e5be558dface4c21a6dd06b2da" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.559987 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538082-lxmsk" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.582842 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935-v4-0-config-user-template-error\") pod \"oauth-openshift-76cf98b679-p49s5\" (UID: \"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935\") " pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.583409 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-76cf98b679-p49s5\" (UID: \"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935\") " pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.583691 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-76cf98b679-p49s5\" (UID: \"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935\") " pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.583943 
4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935-v4-0-config-system-service-ca\") pod \"oauth-openshift-76cf98b679-p49s5\" (UID: \"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935\") " pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.584188 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935-audit-policies\") pod \"oauth-openshift-76cf98b679-p49s5\" (UID: \"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935\") " pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.584462 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935-v4-0-config-user-template-login\") pod \"oauth-openshift-76cf98b679-p49s5\" (UID: \"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935\") " pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.584717 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935-v4-0-config-system-router-certs\") pod \"oauth-openshift-76cf98b679-p49s5\" (UID: \"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935\") " pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.584964 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935-v4-0-config-system-service-ca\") pod \"oauth-openshift-76cf98b679-p49s5\" 
(UID: \"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935\") " pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.585266 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935-audit-dir\") pod \"oauth-openshift-76cf98b679-p49s5\" (UID: \"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935\") " pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.585284 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935-audit-policies\") pod \"oauth-openshift-76cf98b679-p49s5\" (UID: \"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935\") " pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.585585 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-76cf98b679-p49s5\" (UID: \"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935\") " pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.585373 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935-audit-dir\") pod \"oauth-openshift-76cf98b679-p49s5\" (UID: \"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935\") " pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.585547 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-76cf98b679-p49s5\" (UID: \"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935\") " pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.586464 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gk9pn\" (UniqueName: \"kubernetes.io/projected/8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935-kube-api-access-gk9pn\") pod \"oauth-openshift-76cf98b679-p49s5\" (UID: \"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935\") " pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.586767 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935-v4-0-config-system-session\") pod \"oauth-openshift-76cf98b679-p49s5\" (UID: \"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935\") " pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.587030 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935-v4-0-config-system-cliconfig\") pod \"oauth-openshift-76cf98b679-p49s5\" (UID: \"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935\") " pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.587563 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935-v4-0-config-system-serving-cert\") pod \"oauth-openshift-76cf98b679-p49s5\" (UID: \"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935\") " pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 
13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.587842 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-76cf98b679-p49s5\" (UID: \"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935\") " pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.588285 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935-v4-0-config-system-cliconfig\") pod \"oauth-openshift-76cf98b679-p49s5\" (UID: \"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935\") " pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.589525 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-76cf98b679-p49s5\" (UID: \"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935\") " pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.590012 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935-v4-0-config-user-template-error\") pod \"oauth-openshift-76cf98b679-p49s5\" (UID: \"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935\") " pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.590392 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935-v4-0-config-user-template-login\") pod \"oauth-openshift-76cf98b679-p49s5\" (UID: \"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935\") " pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.590750 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-76cf98b679-p49s5\" (UID: \"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935\") " pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.591670 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935-v4-0-config-system-session\") pod \"oauth-openshift-76cf98b679-p49s5\" (UID: \"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935\") " pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.591971 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935-v4-0-config-system-router-certs\") pod \"oauth-openshift-76cf98b679-p49s5\" (UID: \"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935\") " pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.593092 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935-v4-0-config-system-serving-cert\") pod \"oauth-openshift-76cf98b679-p49s5\" (UID: \"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935\") " pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:36 
crc kubenswrapper[4897]: I0228 13:22:36.595501 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-76cf98b679-p49s5\" (UID: \"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935\") " pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.631417 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk9pn\" (UniqueName: \"kubernetes.io/projected/8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935-kube-api-access-gk9pn\") pod \"oauth-openshift-76cf98b679-p49s5\" (UID: \"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935\") " pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:36 crc kubenswrapper[4897]: I0228 13:22:36.755065 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:37 crc kubenswrapper[4897]: I0228 13:22:37.201657 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-76cf98b679-p49s5"] Feb 28 13:22:37 crc kubenswrapper[4897]: W0228 13:22:37.210526 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a87a1ef_71e0_4aa4_9dd4_5c7d726fc935.slice/crio-2e081fecd8c95352dc2eed7689addac7b007b6bc57ce31234b2120a96689c414 WatchSource:0}: Error finding container 2e081fecd8c95352dc2eed7689addac7b007b6bc57ce31234b2120a96689c414: Status 404 returned error can't find the container with id 2e081fecd8c95352dc2eed7689addac7b007b6bc57ce31234b2120a96689c414 Feb 28 13:22:37 crc kubenswrapper[4897]: I0228 13:22:37.573456 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" 
event={"ID":"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935","Type":"ContainerStarted","Data":"3162e2771e471f73fdde317431c3b695d6d1379fac4ce91535516e426cdf5651"} Feb 28 13:22:37 crc kubenswrapper[4897]: I0228 13:22:37.573515 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" event={"ID":"8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935","Type":"ContainerStarted","Data":"2e081fecd8c95352dc2eed7689addac7b007b6bc57ce31234b2120a96689c414"} Feb 28 13:22:37 crc kubenswrapper[4897]: I0228 13:22:37.573991 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:37 crc kubenswrapper[4897]: I0228 13:22:37.579191 4897 patch_prober.go:28] interesting pod/oauth-openshift-76cf98b679-p49s5 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.66:6443/healthz\": dial tcp 10.217.0.66:6443: connect: connection refused" start-of-body= Feb 28 13:22:37 crc kubenswrapper[4897]: I0228 13:22:37.579283 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" podUID="8a87a1ef-71e0-4aa4-9dd4-5c7d726fc935" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.66:6443/healthz\": dial tcp 10.217.0.66:6443: connect: connection refused" Feb 28 13:22:37 crc kubenswrapper[4897]: I0228 13:22:37.609893 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" podStartSLOduration=74.609870433 podStartE2EDuration="1m14.609870433s" podCreationTimestamp="2026-02-28 13:21:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:22:37.606837186 +0000 UTC m=+371.849157853" watchObservedRunningTime="2026-02-28 13:22:37.609870433 +0000 UTC 
m=+371.852191120" Feb 28 13:22:38 crc kubenswrapper[4897]: I0228 13:22:38.587179 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-76cf98b679-p49s5" Feb 28 13:22:44 crc kubenswrapper[4897]: I0228 13:22:44.465277 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5c74d587d6-rxmx2"] Feb 28 13:22:44 crc kubenswrapper[4897]: I0228 13:22:44.465842 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5c74d587d6-rxmx2" podUID="3ef129d7-7447-49a2-91f2-ce21f4195a5e" containerName="controller-manager" containerID="cri-o://3f814b4e437ffcf509b869e5b317e8895d2acacd3857f525e95d83316c6b6325" gracePeriod=30 Feb 28 13:22:44 crc kubenswrapper[4897]: I0228 13:22:44.562468 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f577f78fd-kl8dd"] Feb 28 13:22:44 crc kubenswrapper[4897]: I0228 13:22:44.562703 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6f577f78fd-kl8dd" podUID="9cddab55-a16b-4a91-a241-354b72f176b9" containerName="route-controller-manager" containerID="cri-o://edfad73337ce8289428d28234080a0322d95e5ce6d279d0f63eecc9225443b72" gracePeriod=30 Feb 28 13:22:44 crc kubenswrapper[4897]: I0228 13:22:44.637748 4897 generic.go:334] "Generic (PLEG): container finished" podID="3ef129d7-7447-49a2-91f2-ce21f4195a5e" containerID="3f814b4e437ffcf509b869e5b317e8895d2acacd3857f525e95d83316c6b6325" exitCode=0 Feb 28 13:22:44 crc kubenswrapper[4897]: I0228 13:22:44.637800 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c74d587d6-rxmx2" 
event={"ID":"3ef129d7-7447-49a2-91f2-ce21f4195a5e","Type":"ContainerDied","Data":"3f814b4e437ffcf509b869e5b317e8895d2acacd3857f525e95d83316c6b6325"} Feb 28 13:22:44 crc kubenswrapper[4897]: I0228 13:22:44.818209 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c74d587d6-rxmx2" Feb 28 13:22:44 crc kubenswrapper[4897]: I0228 13:22:44.890640 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f577f78fd-kl8dd" Feb 28 13:22:44 crc kubenswrapper[4897]: I0228 13:22:44.898259 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44p55\" (UniqueName: \"kubernetes.io/projected/3ef129d7-7447-49a2-91f2-ce21f4195a5e-kube-api-access-44p55\") pod \"3ef129d7-7447-49a2-91f2-ce21f4195a5e\" (UID: \"3ef129d7-7447-49a2-91f2-ce21f4195a5e\") " Feb 28 13:22:44 crc kubenswrapper[4897]: I0228 13:22:44.898601 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ef129d7-7447-49a2-91f2-ce21f4195a5e-client-ca\") pod \"3ef129d7-7447-49a2-91f2-ce21f4195a5e\" (UID: \"3ef129d7-7447-49a2-91f2-ce21f4195a5e\") " Feb 28 13:22:44 crc kubenswrapper[4897]: I0228 13:22:44.898805 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3ef129d7-7447-49a2-91f2-ce21f4195a5e-proxy-ca-bundles\") pod \"3ef129d7-7447-49a2-91f2-ce21f4195a5e\" (UID: \"3ef129d7-7447-49a2-91f2-ce21f4195a5e\") " Feb 28 13:22:44 crc kubenswrapper[4897]: I0228 13:22:44.898851 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ef129d7-7447-49a2-91f2-ce21f4195a5e-serving-cert\") pod \"3ef129d7-7447-49a2-91f2-ce21f4195a5e\" (UID: 
\"3ef129d7-7447-49a2-91f2-ce21f4195a5e\") " Feb 28 13:22:44 crc kubenswrapper[4897]: I0228 13:22:44.898904 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ef129d7-7447-49a2-91f2-ce21f4195a5e-config\") pod \"3ef129d7-7447-49a2-91f2-ce21f4195a5e\" (UID: \"3ef129d7-7447-49a2-91f2-ce21f4195a5e\") " Feb 28 13:22:44 crc kubenswrapper[4897]: I0228 13:22:44.899211 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ef129d7-7447-49a2-91f2-ce21f4195a5e-client-ca" (OuterVolumeSpecName: "client-ca") pod "3ef129d7-7447-49a2-91f2-ce21f4195a5e" (UID: "3ef129d7-7447-49a2-91f2-ce21f4195a5e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:22:44 crc kubenswrapper[4897]: I0228 13:22:44.899575 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ef129d7-7447-49a2-91f2-ce21f4195a5e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3ef129d7-7447-49a2-91f2-ce21f4195a5e" (UID: "3ef129d7-7447-49a2-91f2-ce21f4195a5e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:22:44 crc kubenswrapper[4897]: I0228 13:22:44.899810 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ef129d7-7447-49a2-91f2-ce21f4195a5e-config" (OuterVolumeSpecName: "config") pod "3ef129d7-7447-49a2-91f2-ce21f4195a5e" (UID: "3ef129d7-7447-49a2-91f2-ce21f4195a5e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 13:22:44 crc kubenswrapper[4897]: I0228 13:22:44.909550 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ef129d7-7447-49a2-91f2-ce21f4195a5e-kube-api-access-44p55" (OuterVolumeSpecName: "kube-api-access-44p55") pod "3ef129d7-7447-49a2-91f2-ce21f4195a5e" (UID: "3ef129d7-7447-49a2-91f2-ce21f4195a5e"). InnerVolumeSpecName "kube-api-access-44p55". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 13:22:44 crc kubenswrapper[4897]: I0228 13:22:44.911645 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ef129d7-7447-49a2-91f2-ce21f4195a5e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3ef129d7-7447-49a2-91f2-ce21f4195a5e" (UID: "3ef129d7-7447-49a2-91f2-ce21f4195a5e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 13:22:45 crc kubenswrapper[4897]: I0228 13:22:45.000518 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cddab55-a16b-4a91-a241-354b72f176b9-config\") pod \"9cddab55-a16b-4a91-a241-354b72f176b9\" (UID: \"9cddab55-a16b-4a91-a241-354b72f176b9\") "
Feb 28 13:22:45 crc kubenswrapper[4897]: I0228 13:22:45.000575 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9cddab55-a16b-4a91-a241-354b72f176b9-serving-cert\") pod \"9cddab55-a16b-4a91-a241-354b72f176b9\" (UID: \"9cddab55-a16b-4a91-a241-354b72f176b9\") "
Feb 28 13:22:45 crc kubenswrapper[4897]: I0228 13:22:45.000610 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wldvm\" (UniqueName: \"kubernetes.io/projected/9cddab55-a16b-4a91-a241-354b72f176b9-kube-api-access-wldvm\") pod \"9cddab55-a16b-4a91-a241-354b72f176b9\" (UID: \"9cddab55-a16b-4a91-a241-354b72f176b9\") "
Feb 28 13:22:45 crc kubenswrapper[4897]: I0228 13:22:45.000637 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9cddab55-a16b-4a91-a241-354b72f176b9-client-ca\") pod \"9cddab55-a16b-4a91-a241-354b72f176b9\" (UID: \"9cddab55-a16b-4a91-a241-354b72f176b9\") "
Feb 28 13:22:45 crc kubenswrapper[4897]: I0228 13:22:45.000805 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44p55\" (UniqueName: \"kubernetes.io/projected/3ef129d7-7447-49a2-91f2-ce21f4195a5e-kube-api-access-44p55\") on node \"crc\" DevicePath \"\""
Feb 28 13:22:45 crc kubenswrapper[4897]: I0228 13:22:45.000816 4897 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ef129d7-7447-49a2-91f2-ce21f4195a5e-client-ca\") on node \"crc\" DevicePath \"\""
Feb 28 13:22:45 crc kubenswrapper[4897]: I0228 13:22:45.000824 4897 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3ef129d7-7447-49a2-91f2-ce21f4195a5e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 28 13:22:45 crc kubenswrapper[4897]: I0228 13:22:45.000840 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ef129d7-7447-49a2-91f2-ce21f4195a5e-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 28 13:22:45 crc kubenswrapper[4897]: I0228 13:22:45.000848 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ef129d7-7447-49a2-91f2-ce21f4195a5e-config\") on node \"crc\" DevicePath \"\""
Feb 28 13:22:45 crc kubenswrapper[4897]: I0228 13:22:45.001856 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cddab55-a16b-4a91-a241-354b72f176b9-client-ca" (OuterVolumeSpecName: "client-ca") pod "9cddab55-a16b-4a91-a241-354b72f176b9" (UID: "9cddab55-a16b-4a91-a241-354b72f176b9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 13:22:45 crc kubenswrapper[4897]: I0228 13:22:45.001999 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cddab55-a16b-4a91-a241-354b72f176b9-config" (OuterVolumeSpecName: "config") pod "9cddab55-a16b-4a91-a241-354b72f176b9" (UID: "9cddab55-a16b-4a91-a241-354b72f176b9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 13:22:45 crc kubenswrapper[4897]: I0228 13:22:45.003412 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cddab55-a16b-4a91-a241-354b72f176b9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9cddab55-a16b-4a91-a241-354b72f176b9" (UID: "9cddab55-a16b-4a91-a241-354b72f176b9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 13:22:45 crc kubenswrapper[4897]: I0228 13:22:45.004550 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cddab55-a16b-4a91-a241-354b72f176b9-kube-api-access-wldvm" (OuterVolumeSpecName: "kube-api-access-wldvm") pod "9cddab55-a16b-4a91-a241-354b72f176b9" (UID: "9cddab55-a16b-4a91-a241-354b72f176b9"). InnerVolumeSpecName "kube-api-access-wldvm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 13:22:45 crc kubenswrapper[4897]: I0228 13:22:45.101943 4897 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9cddab55-a16b-4a91-a241-354b72f176b9-client-ca\") on node \"crc\" DevicePath \"\""
Feb 28 13:22:45 crc kubenswrapper[4897]: I0228 13:22:45.101986 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cddab55-a16b-4a91-a241-354b72f176b9-config\") on node \"crc\" DevicePath \"\""
Feb 28 13:22:45 crc kubenswrapper[4897]: I0228 13:22:45.102003 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9cddab55-a16b-4a91-a241-354b72f176b9-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 28 13:22:45 crc kubenswrapper[4897]: I0228 13:22:45.102020 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wldvm\" (UniqueName: \"kubernetes.io/projected/9cddab55-a16b-4a91-a241-354b72f176b9-kube-api-access-wldvm\") on node \"crc\" DevicePath \"\""
Feb 28 13:22:45 crc kubenswrapper[4897]: I0228 13:22:45.649473 4897 generic.go:334] "Generic (PLEG): container finished" podID="9cddab55-a16b-4a91-a241-354b72f176b9" containerID="edfad73337ce8289428d28234080a0322d95e5ce6d279d0f63eecc9225443b72" exitCode=0
Feb 28 13:22:45 crc kubenswrapper[4897]: I0228 13:22:45.649563 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f577f78fd-kl8dd" event={"ID":"9cddab55-a16b-4a91-a241-354b72f176b9","Type":"ContainerDied","Data":"edfad73337ce8289428d28234080a0322d95e5ce6d279d0f63eecc9225443b72"}
Feb 28 13:22:45 crc kubenswrapper[4897]: I0228 13:22:45.649978 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f577f78fd-kl8dd" event={"ID":"9cddab55-a16b-4a91-a241-354b72f176b9","Type":"ContainerDied","Data":"37bc6a0128ec6a54d8112022eebde68ea30b32c88356eadc45c383ed58938b4a"}
Feb 28 13:22:45 crc kubenswrapper[4897]: I0228 13:22:45.649591 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f577f78fd-kl8dd"
Feb 28 13:22:45 crc kubenswrapper[4897]: I0228 13:22:45.650005 4897 scope.go:117] "RemoveContainer" containerID="edfad73337ce8289428d28234080a0322d95e5ce6d279d0f63eecc9225443b72"
Feb 28 13:22:45 crc kubenswrapper[4897]: I0228 13:22:45.651848 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c74d587d6-rxmx2" event={"ID":"3ef129d7-7447-49a2-91f2-ce21f4195a5e","Type":"ContainerDied","Data":"3d4a2738f03c5c33b8662640b60732ca69b33a88d828e46f2c2ae15afe3a5ac1"}
Feb 28 13:22:45 crc kubenswrapper[4897]: I0228 13:22:45.651973 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c74d587d6-rxmx2"
Feb 28 13:22:45 crc kubenswrapper[4897]: I0228 13:22:45.683968 4897 scope.go:117] "RemoveContainer" containerID="edfad73337ce8289428d28234080a0322d95e5ce6d279d0f63eecc9225443b72"
Feb 28 13:22:45 crc kubenswrapper[4897]: E0228 13:22:45.684636 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edfad73337ce8289428d28234080a0322d95e5ce6d279d0f63eecc9225443b72\": container with ID starting with edfad73337ce8289428d28234080a0322d95e5ce6d279d0f63eecc9225443b72 not found: ID does not exist" containerID="edfad73337ce8289428d28234080a0322d95e5ce6d279d0f63eecc9225443b72"
Feb 28 13:22:45 crc kubenswrapper[4897]: I0228 13:22:45.684670 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edfad73337ce8289428d28234080a0322d95e5ce6d279d0f63eecc9225443b72"} err="failed to get container status \"edfad73337ce8289428d28234080a0322d95e5ce6d279d0f63eecc9225443b72\": rpc error: code = NotFound desc = could not find container \"edfad73337ce8289428d28234080a0322d95e5ce6d279d0f63eecc9225443b72\": container with ID starting with edfad73337ce8289428d28234080a0322d95e5ce6d279d0f63eecc9225443b72 not found: ID does not exist"
Feb 28 13:22:45 crc kubenswrapper[4897]: I0228 13:22:45.684696 4897 scope.go:117] "RemoveContainer" containerID="3f814b4e437ffcf509b869e5b317e8895d2acacd3857f525e95d83316c6b6325"
Feb 28 13:22:45 crc kubenswrapper[4897]: I0228 13:22:45.699948 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5c74d587d6-rxmx2"]
Feb 28 13:22:45 crc kubenswrapper[4897]: I0228 13:22:45.710652 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5c74d587d6-rxmx2"]
Feb 28 13:22:45 crc kubenswrapper[4897]: I0228 13:22:45.716248 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f577f78fd-kl8dd"]
Feb 28 13:22:45 crc kubenswrapper[4897]: I0228 13:22:45.725154 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f577f78fd-kl8dd"]
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.400191 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-674db64bb4-8bv5v"]
Feb 28 13:22:46 crc kubenswrapper[4897]: E0228 13:22:46.400434 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ef129d7-7447-49a2-91f2-ce21f4195a5e" containerName="controller-manager"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.400452 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ef129d7-7447-49a2-91f2-ce21f4195a5e" containerName="controller-manager"
Feb 28 13:22:46 crc kubenswrapper[4897]: E0228 13:22:46.400466 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cddab55-a16b-4a91-a241-354b72f176b9" containerName="route-controller-manager"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.400474 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cddab55-a16b-4a91-a241-354b72f176b9" containerName="route-controller-manager"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.400559 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ef129d7-7447-49a2-91f2-ce21f4195a5e" containerName="controller-manager"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.400572 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cddab55-a16b-4a91-a241-354b72f176b9" containerName="route-controller-manager"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.400918 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-674db64bb4-8bv5v"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.405494 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.405820 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.406132 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.406574 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.406773 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.406981 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.410579 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-88589bd7b-5gxf8"]
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.412023 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-88589bd7b-5gxf8"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.414642 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.414872 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.415088 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.415099 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.415642 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.417508 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.417929 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.434900 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-88589bd7b-5gxf8"]
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.470250 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ef129d7-7447-49a2-91f2-ce21f4195a5e" path="/var/lib/kubelet/pods/3ef129d7-7447-49a2-91f2-ce21f4195a5e/volumes"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.471156 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cddab55-a16b-4a91-a241-354b72f176b9" path="/var/lib/kubelet/pods/9cddab55-a16b-4a91-a241-354b72f176b9/volumes"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.493029 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-674db64bb4-8bv5v"]
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.529569 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec905783-50db-440a-95e0-97aa7e68dc83-serving-cert\") pod \"route-controller-manager-88589bd7b-5gxf8\" (UID: \"ec905783-50db-440a-95e0-97aa7e68dc83\") " pod="openshift-route-controller-manager/route-controller-manager-88589bd7b-5gxf8"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.529626 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1961d75-92e4-4b2d-b1ee-f0a553ed037c-config\") pod \"controller-manager-674db64bb4-8bv5v\" (UID: \"d1961d75-92e4-4b2d-b1ee-f0a553ed037c\") " pod="openshift-controller-manager/controller-manager-674db64bb4-8bv5v"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.529652 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1961d75-92e4-4b2d-b1ee-f0a553ed037c-serving-cert\") pod \"controller-manager-674db64bb4-8bv5v\" (UID: \"d1961d75-92e4-4b2d-b1ee-f0a553ed037c\") " pod="openshift-controller-manager/controller-manager-674db64bb4-8bv5v"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.529674 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec905783-50db-440a-95e0-97aa7e68dc83-config\") pod \"route-controller-manager-88589bd7b-5gxf8\" (UID: \"ec905783-50db-440a-95e0-97aa7e68dc83\") " pod="openshift-route-controller-manager/route-controller-manager-88589bd7b-5gxf8"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.529894 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh64s\" (UniqueName: \"kubernetes.io/projected/ec905783-50db-440a-95e0-97aa7e68dc83-kube-api-access-kh64s\") pod \"route-controller-manager-88589bd7b-5gxf8\" (UID: \"ec905783-50db-440a-95e0-97aa7e68dc83\") " pod="openshift-route-controller-manager/route-controller-manager-88589bd7b-5gxf8"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.529924 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d1961d75-92e4-4b2d-b1ee-f0a553ed037c-proxy-ca-bundles\") pod \"controller-manager-674db64bb4-8bv5v\" (UID: \"d1961d75-92e4-4b2d-b1ee-f0a553ed037c\") " pod="openshift-controller-manager/controller-manager-674db64bb4-8bv5v"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.529962 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzp8g\" (UniqueName: \"kubernetes.io/projected/d1961d75-92e4-4b2d-b1ee-f0a553ed037c-kube-api-access-fzp8g\") pod \"controller-manager-674db64bb4-8bv5v\" (UID: \"d1961d75-92e4-4b2d-b1ee-f0a553ed037c\") " pod="openshift-controller-manager/controller-manager-674db64bb4-8bv5v"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.530026 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1961d75-92e4-4b2d-b1ee-f0a553ed037c-client-ca\") pod \"controller-manager-674db64bb4-8bv5v\" (UID: \"d1961d75-92e4-4b2d-b1ee-f0a553ed037c\") " pod="openshift-controller-manager/controller-manager-674db64bb4-8bv5v"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.530072 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec905783-50db-440a-95e0-97aa7e68dc83-client-ca\") pod \"route-controller-manager-88589bd7b-5gxf8\" (UID: \"ec905783-50db-440a-95e0-97aa7e68dc83\") " pod="openshift-route-controller-manager/route-controller-manager-88589bd7b-5gxf8"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.631068 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec905783-50db-440a-95e0-97aa7e68dc83-serving-cert\") pod \"route-controller-manager-88589bd7b-5gxf8\" (UID: \"ec905783-50db-440a-95e0-97aa7e68dc83\") " pod="openshift-route-controller-manager/route-controller-manager-88589bd7b-5gxf8"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.631141 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1961d75-92e4-4b2d-b1ee-f0a553ed037c-config\") pod \"controller-manager-674db64bb4-8bv5v\" (UID: \"d1961d75-92e4-4b2d-b1ee-f0a553ed037c\") " pod="openshift-controller-manager/controller-manager-674db64bb4-8bv5v"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.631183 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1961d75-92e4-4b2d-b1ee-f0a553ed037c-serving-cert\") pod \"controller-manager-674db64bb4-8bv5v\" (UID: \"d1961d75-92e4-4b2d-b1ee-f0a553ed037c\") " pod="openshift-controller-manager/controller-manager-674db64bb4-8bv5v"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.631217 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec905783-50db-440a-95e0-97aa7e68dc83-config\") pod \"route-controller-manager-88589bd7b-5gxf8\" (UID: \"ec905783-50db-440a-95e0-97aa7e68dc83\") " pod="openshift-route-controller-manager/route-controller-manager-88589bd7b-5gxf8"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.631267 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh64s\" (UniqueName: \"kubernetes.io/projected/ec905783-50db-440a-95e0-97aa7e68dc83-kube-api-access-kh64s\") pod \"route-controller-manager-88589bd7b-5gxf8\" (UID: \"ec905783-50db-440a-95e0-97aa7e68dc83\") " pod="openshift-route-controller-manager/route-controller-manager-88589bd7b-5gxf8"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.631330 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d1961d75-92e4-4b2d-b1ee-f0a553ed037c-proxy-ca-bundles\") pod \"controller-manager-674db64bb4-8bv5v\" (UID: \"d1961d75-92e4-4b2d-b1ee-f0a553ed037c\") " pod="openshift-controller-manager/controller-manager-674db64bb4-8bv5v"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.631409 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzp8g\" (UniqueName: \"kubernetes.io/projected/d1961d75-92e4-4b2d-b1ee-f0a553ed037c-kube-api-access-fzp8g\") pod \"controller-manager-674db64bb4-8bv5v\" (UID: \"d1961d75-92e4-4b2d-b1ee-f0a553ed037c\") " pod="openshift-controller-manager/controller-manager-674db64bb4-8bv5v"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.631482 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1961d75-92e4-4b2d-b1ee-f0a553ed037c-client-ca\") pod \"controller-manager-674db64bb4-8bv5v\" (UID: \"d1961d75-92e4-4b2d-b1ee-f0a553ed037c\") " pod="openshift-controller-manager/controller-manager-674db64bb4-8bv5v"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.631536 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec905783-50db-440a-95e0-97aa7e68dc83-client-ca\") pod \"route-controller-manager-88589bd7b-5gxf8\" (UID: \"ec905783-50db-440a-95e0-97aa7e68dc83\") " pod="openshift-route-controller-manager/route-controller-manager-88589bd7b-5gxf8"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.633669 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1961d75-92e4-4b2d-b1ee-f0a553ed037c-config\") pod \"controller-manager-674db64bb4-8bv5v\" (UID: \"d1961d75-92e4-4b2d-b1ee-f0a553ed037c\") " pod="openshift-controller-manager/controller-manager-674db64bb4-8bv5v"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.633798 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1961d75-92e4-4b2d-b1ee-f0a553ed037c-client-ca\") pod \"controller-manager-674db64bb4-8bv5v\" (UID: \"d1961d75-92e4-4b2d-b1ee-f0a553ed037c\") " pod="openshift-controller-manager/controller-manager-674db64bb4-8bv5v"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.634024 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d1961d75-92e4-4b2d-b1ee-f0a553ed037c-proxy-ca-bundles\") pod \"controller-manager-674db64bb4-8bv5v\" (UID: \"d1961d75-92e4-4b2d-b1ee-f0a553ed037c\") " pod="openshift-controller-manager/controller-manager-674db64bb4-8bv5v"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.636036 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec905783-50db-440a-95e0-97aa7e68dc83-config\") pod \"route-controller-manager-88589bd7b-5gxf8\" (UID: \"ec905783-50db-440a-95e0-97aa7e68dc83\") " pod="openshift-route-controller-manager/route-controller-manager-88589bd7b-5gxf8"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.639086 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1961d75-92e4-4b2d-b1ee-f0a553ed037c-serving-cert\") pod \"controller-manager-674db64bb4-8bv5v\" (UID: \"d1961d75-92e4-4b2d-b1ee-f0a553ed037c\") " pod="openshift-controller-manager/controller-manager-674db64bb4-8bv5v"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.640208 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec905783-50db-440a-95e0-97aa7e68dc83-client-ca\") pod \"route-controller-manager-88589bd7b-5gxf8\" (UID: \"ec905783-50db-440a-95e0-97aa7e68dc83\") " pod="openshift-route-controller-manager/route-controller-manager-88589bd7b-5gxf8"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.643436 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec905783-50db-440a-95e0-97aa7e68dc83-serving-cert\") pod \"route-controller-manager-88589bd7b-5gxf8\" (UID: \"ec905783-50db-440a-95e0-97aa7e68dc83\") " pod="openshift-route-controller-manager/route-controller-manager-88589bd7b-5gxf8"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.668924 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzp8g\" (UniqueName: \"kubernetes.io/projected/d1961d75-92e4-4b2d-b1ee-f0a553ed037c-kube-api-access-fzp8g\") pod \"controller-manager-674db64bb4-8bv5v\" (UID: \"d1961d75-92e4-4b2d-b1ee-f0a553ed037c\") " pod="openshift-controller-manager/controller-manager-674db64bb4-8bv5v"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.670889 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh64s\" (UniqueName: \"kubernetes.io/projected/ec905783-50db-440a-95e0-97aa7e68dc83-kube-api-access-kh64s\") pod \"route-controller-manager-88589bd7b-5gxf8\" (UID: \"ec905783-50db-440a-95e0-97aa7e68dc83\") " pod="openshift-route-controller-manager/route-controller-manager-88589bd7b-5gxf8"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.744300 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-674db64bb4-8bv5v"
Feb 28 13:22:46 crc kubenswrapper[4897]: I0228 13:22:46.755832 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-88589bd7b-5gxf8"
Feb 28 13:22:47 crc kubenswrapper[4897]: I0228 13:22:47.019668 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-674db64bb4-8bv5v"]
Feb 28 13:22:47 crc kubenswrapper[4897]: W0228 13:22:47.045587 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1961d75_92e4_4b2d_b1ee_f0a553ed037c.slice/crio-93a807596dfb8401f4d7652f74f66946f8486a8e5600a3ae57eade1d9c9b0b09 WatchSource:0}: Error finding container 93a807596dfb8401f4d7652f74f66946f8486a8e5600a3ae57eade1d9c9b0b09: Status 404 returned error can't find the container with id 93a807596dfb8401f4d7652f74f66946f8486a8e5600a3ae57eade1d9c9b0b09
Feb 28 13:22:47 crc kubenswrapper[4897]: I0228 13:22:47.086260 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-88589bd7b-5gxf8"]
Feb 28 13:22:47 crc kubenswrapper[4897]: W0228 13:22:47.095093 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec905783_50db_440a_95e0_97aa7e68dc83.slice/crio-5e553379da971308b5c7b505cefae21acaf02c2618055837de5f564531e2e9c1 WatchSource:0}: Error finding container 5e553379da971308b5c7b505cefae21acaf02c2618055837de5f564531e2e9c1: Status 404 returned error can't find the container with id 5e553379da971308b5c7b505cefae21acaf02c2618055837de5f564531e2e9c1
Feb 28 13:22:47 crc kubenswrapper[4897]: I0228 13:22:47.676334 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-88589bd7b-5gxf8" event={"ID":"ec905783-50db-440a-95e0-97aa7e68dc83","Type":"ContainerStarted","Data":"d771784c4e3bd7dd84a350df61b9a4472c63f88776b83386c19219cd3422df73"}
Feb 28 13:22:47 crc kubenswrapper[4897]: I0228 13:22:47.677509 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-88589bd7b-5gxf8" event={"ID":"ec905783-50db-440a-95e0-97aa7e68dc83","Type":"ContainerStarted","Data":"5e553379da971308b5c7b505cefae21acaf02c2618055837de5f564531e2e9c1"}
Feb 28 13:22:47 crc kubenswrapper[4897]: I0228 13:22:47.677599 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-88589bd7b-5gxf8"
Feb 28 13:22:47 crc kubenswrapper[4897]: I0228 13:22:47.680799 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-674db64bb4-8bv5v" event={"ID":"d1961d75-92e4-4b2d-b1ee-f0a553ed037c","Type":"ContainerStarted","Data":"255cd1802860ae90698df05cc481c9ae31131baa774fa2ac864a5ab1cc3659ff"}
Feb 28 13:22:47 crc kubenswrapper[4897]: I0228 13:22:47.680910 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-674db64bb4-8bv5v" event={"ID":"d1961d75-92e4-4b2d-b1ee-f0a553ed037c","Type":"ContainerStarted","Data":"93a807596dfb8401f4d7652f74f66946f8486a8e5600a3ae57eade1d9c9b0b09"}
Feb 28 13:22:47 crc kubenswrapper[4897]: I0228 13:22:47.681879 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-674db64bb4-8bv5v"
Feb 28 13:22:47 crc kubenswrapper[4897]: I0228 13:22:47.691499 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-674db64bb4-8bv5v"
Feb 28 13:22:47 crc kubenswrapper[4897]: I0228 13:22:47.709382 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-88589bd7b-5gxf8" podStartSLOduration=3.709272788 podStartE2EDuration="3.709272788s" podCreationTimestamp="2026-02-28 13:22:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:22:47.697583672 +0000 UTC m=+381.939904329" watchObservedRunningTime="2026-02-28 13:22:47.709272788 +0000 UTC m=+381.951593525"
Feb 28 13:22:47 crc kubenswrapper[4897]: I0228 13:22:47.730333 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-674db64bb4-8bv5v" podStartSLOduration=3.730283253 podStartE2EDuration="3.730283253s" podCreationTimestamp="2026-02-28 13:22:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:22:47.726169861 +0000 UTC m=+381.968490588" watchObservedRunningTime="2026-02-28 13:22:47.730283253 +0000 UTC m=+381.972603910"
Feb 28 13:22:47 crc kubenswrapper[4897]: I0228 13:22:47.995427 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-88589bd7b-5gxf8"
Feb 28 13:22:54 crc kubenswrapper[4897]: I0228 13:22:54.221289 4897 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-l7m8v container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Feb 28 13:22:54 crc kubenswrapper[4897]: I0228 13:22:54.221987 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-l7m8v" podUID="53e254f6-444a-4fd6-8bda-5af18b9d347c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused"
Feb 28 13:22:54 crc kubenswrapper[4897]: I0228 13:22:54.221358 4897 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-l7m8v container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Feb 28 13:22:54 crc kubenswrapper[4897]: I0228 13:22:54.222086 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-l7m8v" podUID="53e254f6-444a-4fd6-8bda-5af18b9d347c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused"
Feb 28 13:22:54 crc kubenswrapper[4897]: I0228 13:22:54.735995 4897 generic.go:334] "Generic (PLEG): container finished" podID="53e254f6-444a-4fd6-8bda-5af18b9d347c" containerID="6ac1f66fd5757dd43cee9118f91a051f5e21d550eacaf70a93ae6067aaab7569" exitCode=0
Feb 28 13:22:54 crc kubenswrapper[4897]: I0228 13:22:54.736064 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-l7m8v" event={"ID":"53e254f6-444a-4fd6-8bda-5af18b9d347c","Type":"ContainerDied","Data":"6ac1f66fd5757dd43cee9118f91a051f5e21d550eacaf70a93ae6067aaab7569"}
Feb 28 13:22:54 crc kubenswrapper[4897]: I0228 13:22:54.736510 4897 scope.go:117] "RemoveContainer" containerID="6ac1f66fd5757dd43cee9118f91a051f5e21d550eacaf70a93ae6067aaab7569"
Feb 28 13:22:55 crc kubenswrapper[4897]: I0228 13:22:55.745829 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-l7m8v" event={"ID":"53e254f6-444a-4fd6-8bda-5af18b9d347c","Type":"ContainerStarted","Data":"4994dd60d143fcfaa264c8c734e55e4436e67e3a3cbc0385b08f9b693918f7dd"}
Feb 28 13:22:55 crc kubenswrapper[4897]: I0228 13:22:55.746573 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-l7m8v"
Feb 28 13:22:55 crc kubenswrapper[4897]: I0228 13:22:55.751807 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-l7m8v"
Feb 28 13:23:04 crc kubenswrapper[4897]: I0228 13:23:04.494688 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-674db64bb4-8bv5v"]
Feb 28 13:23:04 crc kubenswrapper[4897]: I0228 13:23:04.495370 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-674db64bb4-8bv5v" podUID="d1961d75-92e4-4b2d-b1ee-f0a553ed037c" containerName="controller-manager" containerID="cri-o://255cd1802860ae90698df05cc481c9ae31131baa774fa2ac864a5ab1cc3659ff" gracePeriod=30
Feb 28 13:23:04 crc kubenswrapper[4897]: I0228 13:23:04.529789 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-88589bd7b-5gxf8"]
Feb 28 13:23:04 crc kubenswrapper[4897]: I0228 13:23:04.530007 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-88589bd7b-5gxf8" podUID="ec905783-50db-440a-95e0-97aa7e68dc83" containerName="route-controller-manager" containerID="cri-o://d771784c4e3bd7dd84a350df61b9a4472c63f88776b83386c19219cd3422df73" gracePeriod=30
Feb 28 13:23:04 crc kubenswrapper[4897]: I0228 13:23:04.799284 4897 generic.go:334] "Generic (PLEG): container finished" podID="d1961d75-92e4-4b2d-b1ee-f0a553ed037c" containerID="255cd1802860ae90698df05cc481c9ae31131baa774fa2ac864a5ab1cc3659ff" exitCode=0
Feb 28 13:23:04 crc kubenswrapper[4897]: I0228 13:23:04.799360 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-674db64bb4-8bv5v" event={"ID":"d1961d75-92e4-4b2d-b1ee-f0a553ed037c","Type":"ContainerDied","Data":"255cd1802860ae90698df05cc481c9ae31131baa774fa2ac864a5ab1cc3659ff"} Feb 28 13:23:04 crc kubenswrapper[4897]: I0228 13:23:04.801426 4897 generic.go:334] "Generic (PLEG): container finished" podID="ec905783-50db-440a-95e0-97aa7e68dc83" containerID="d771784c4e3bd7dd84a350df61b9a4472c63f88776b83386c19219cd3422df73" exitCode=0 Feb 28 13:23:04 crc kubenswrapper[4897]: I0228 13:23:04.801474 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-88589bd7b-5gxf8" event={"ID":"ec905783-50db-440a-95e0-97aa7e68dc83","Type":"ContainerDied","Data":"d771784c4e3bd7dd84a350df61b9a4472c63f88776b83386c19219cd3422df73"} Feb 28 13:23:05 crc kubenswrapper[4897]: I0228 13:23:05.023999 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-88589bd7b-5gxf8" Feb 28 13:23:05 crc kubenswrapper[4897]: I0228 13:23:05.096587 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-674db64bb4-8bv5v" Feb 28 13:23:05 crc kubenswrapper[4897]: I0228 13:23:05.129079 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kh64s\" (UniqueName: \"kubernetes.io/projected/ec905783-50db-440a-95e0-97aa7e68dc83-kube-api-access-kh64s\") pod \"ec905783-50db-440a-95e0-97aa7e68dc83\" (UID: \"ec905783-50db-440a-95e0-97aa7e68dc83\") " Feb 28 13:23:05 crc kubenswrapper[4897]: I0228 13:23:05.129118 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec905783-50db-440a-95e0-97aa7e68dc83-serving-cert\") pod \"ec905783-50db-440a-95e0-97aa7e68dc83\" (UID: \"ec905783-50db-440a-95e0-97aa7e68dc83\") " Feb 28 13:23:05 crc kubenswrapper[4897]: I0228 13:23:05.129150 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec905783-50db-440a-95e0-97aa7e68dc83-config\") pod \"ec905783-50db-440a-95e0-97aa7e68dc83\" (UID: \"ec905783-50db-440a-95e0-97aa7e68dc83\") " Feb 28 13:23:05 crc kubenswrapper[4897]: I0228 13:23:05.129261 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec905783-50db-440a-95e0-97aa7e68dc83-client-ca\") pod \"ec905783-50db-440a-95e0-97aa7e68dc83\" (UID: \"ec905783-50db-440a-95e0-97aa7e68dc83\") " Feb 28 13:23:05 crc kubenswrapper[4897]: I0228 13:23:05.130052 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec905783-50db-440a-95e0-97aa7e68dc83-client-ca" (OuterVolumeSpecName: "client-ca") pod "ec905783-50db-440a-95e0-97aa7e68dc83" (UID: "ec905783-50db-440a-95e0-97aa7e68dc83"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:23:05 crc kubenswrapper[4897]: I0228 13:23:05.148751 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec905783-50db-440a-95e0-97aa7e68dc83-config" (OuterVolumeSpecName: "config") pod "ec905783-50db-440a-95e0-97aa7e68dc83" (UID: "ec905783-50db-440a-95e0-97aa7e68dc83"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:23:05 crc kubenswrapper[4897]: I0228 13:23:05.151013 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec905783-50db-440a-95e0-97aa7e68dc83-kube-api-access-kh64s" (OuterVolumeSpecName: "kube-api-access-kh64s") pod "ec905783-50db-440a-95e0-97aa7e68dc83" (UID: "ec905783-50db-440a-95e0-97aa7e68dc83"). InnerVolumeSpecName "kube-api-access-kh64s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:23:05 crc kubenswrapper[4897]: I0228 13:23:05.151225 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec905783-50db-440a-95e0-97aa7e68dc83-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ec905783-50db-440a-95e0-97aa7e68dc83" (UID: "ec905783-50db-440a-95e0-97aa7e68dc83"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:23:05 crc kubenswrapper[4897]: I0228 13:23:05.230723 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d1961d75-92e4-4b2d-b1ee-f0a553ed037c-proxy-ca-bundles\") pod \"d1961d75-92e4-4b2d-b1ee-f0a553ed037c\" (UID: \"d1961d75-92e4-4b2d-b1ee-f0a553ed037c\") " Feb 28 13:23:05 crc kubenswrapper[4897]: I0228 13:23:05.230790 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1961d75-92e4-4b2d-b1ee-f0a553ed037c-config\") pod \"d1961d75-92e4-4b2d-b1ee-f0a553ed037c\" (UID: \"d1961d75-92e4-4b2d-b1ee-f0a553ed037c\") " Feb 28 13:23:05 crc kubenswrapper[4897]: I0228 13:23:05.230827 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1961d75-92e4-4b2d-b1ee-f0a553ed037c-serving-cert\") pod \"d1961d75-92e4-4b2d-b1ee-f0a553ed037c\" (UID: \"d1961d75-92e4-4b2d-b1ee-f0a553ed037c\") " Feb 28 13:23:05 crc kubenswrapper[4897]: I0228 13:23:05.230895 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzp8g\" (UniqueName: \"kubernetes.io/projected/d1961d75-92e4-4b2d-b1ee-f0a553ed037c-kube-api-access-fzp8g\") pod \"d1961d75-92e4-4b2d-b1ee-f0a553ed037c\" (UID: \"d1961d75-92e4-4b2d-b1ee-f0a553ed037c\") " Feb 28 13:23:05 crc kubenswrapper[4897]: I0228 13:23:05.230949 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1961d75-92e4-4b2d-b1ee-f0a553ed037c-client-ca\") pod \"d1961d75-92e4-4b2d-b1ee-f0a553ed037c\" (UID: \"d1961d75-92e4-4b2d-b1ee-f0a553ed037c\") " Feb 28 13:23:05 crc kubenswrapper[4897]: I0228 13:23:05.231511 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/d1961d75-92e4-4b2d-b1ee-f0a553ed037c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d1961d75-92e4-4b2d-b1ee-f0a553ed037c" (UID: "d1961d75-92e4-4b2d-b1ee-f0a553ed037c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:23:05 crc kubenswrapper[4897]: I0228 13:23:05.231635 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1961d75-92e4-4b2d-b1ee-f0a553ed037c-config" (OuterVolumeSpecName: "config") pod "d1961d75-92e4-4b2d-b1ee-f0a553ed037c" (UID: "d1961d75-92e4-4b2d-b1ee-f0a553ed037c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:23:05 crc kubenswrapper[4897]: I0228 13:23:05.231699 4897 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec905783-50db-440a-95e0-97aa7e68dc83-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 13:23:05 crc kubenswrapper[4897]: I0228 13:23:05.231783 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kh64s\" (UniqueName: \"kubernetes.io/projected/ec905783-50db-440a-95e0-97aa7e68dc83-kube-api-access-kh64s\") on node \"crc\" DevicePath \"\"" Feb 28 13:23:05 crc kubenswrapper[4897]: I0228 13:23:05.231797 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec905783-50db-440a-95e0-97aa7e68dc83-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 13:23:05 crc kubenswrapper[4897]: I0228 13:23:05.231809 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec905783-50db-440a-95e0-97aa7e68dc83-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:23:05 crc kubenswrapper[4897]: I0228 13:23:05.231821 4897 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/d1961d75-92e4-4b2d-b1ee-f0a553ed037c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 28 13:23:05 crc kubenswrapper[4897]: I0228 13:23:05.232177 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1961d75-92e4-4b2d-b1ee-f0a553ed037c-client-ca" (OuterVolumeSpecName: "client-ca") pod "d1961d75-92e4-4b2d-b1ee-f0a553ed037c" (UID: "d1961d75-92e4-4b2d-b1ee-f0a553ed037c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:23:05 crc kubenswrapper[4897]: I0228 13:23:05.234450 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1961d75-92e4-4b2d-b1ee-f0a553ed037c-kube-api-access-fzp8g" (OuterVolumeSpecName: "kube-api-access-fzp8g") pod "d1961d75-92e4-4b2d-b1ee-f0a553ed037c" (UID: "d1961d75-92e4-4b2d-b1ee-f0a553ed037c"). InnerVolumeSpecName "kube-api-access-fzp8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:23:05 crc kubenswrapper[4897]: I0228 13:23:05.235598 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1961d75-92e4-4b2d-b1ee-f0a553ed037c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d1961d75-92e4-4b2d-b1ee-f0a553ed037c" (UID: "d1961d75-92e4-4b2d-b1ee-f0a553ed037c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:23:05 crc kubenswrapper[4897]: I0228 13:23:05.333654 4897 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1961d75-92e4-4b2d-b1ee-f0a553ed037c-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 13:23:05 crc kubenswrapper[4897]: I0228 13:23:05.333709 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1961d75-92e4-4b2d-b1ee-f0a553ed037c-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:23:05 crc kubenswrapper[4897]: I0228 13:23:05.333727 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1961d75-92e4-4b2d-b1ee-f0a553ed037c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 13:23:05 crc kubenswrapper[4897]: I0228 13:23:05.333749 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzp8g\" (UniqueName: \"kubernetes.io/projected/d1961d75-92e4-4b2d-b1ee-f0a553ed037c-kube-api-access-fzp8g\") on node \"crc\" DevicePath \"\"" Feb 28 13:23:05 crc kubenswrapper[4897]: I0228 13:23:05.811894 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-88589bd7b-5gxf8" event={"ID":"ec905783-50db-440a-95e0-97aa7e68dc83","Type":"ContainerDied","Data":"5e553379da971308b5c7b505cefae21acaf02c2618055837de5f564531e2e9c1"} Feb 28 13:23:05 crc kubenswrapper[4897]: I0228 13:23:05.811984 4897 scope.go:117] "RemoveContainer" containerID="d771784c4e3bd7dd84a350df61b9a4472c63f88776b83386c19219cd3422df73" Feb 28 13:23:05 crc kubenswrapper[4897]: I0228 13:23:05.812013 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-88589bd7b-5gxf8" Feb 28 13:23:05 crc kubenswrapper[4897]: I0228 13:23:05.814809 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-674db64bb4-8bv5v" event={"ID":"d1961d75-92e4-4b2d-b1ee-f0a553ed037c","Type":"ContainerDied","Data":"93a807596dfb8401f4d7652f74f66946f8486a8e5600a3ae57eade1d9c9b0b09"} Feb 28 13:23:05 crc kubenswrapper[4897]: I0228 13:23:05.815062 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-674db64bb4-8bv5v" Feb 28 13:23:05 crc kubenswrapper[4897]: I0228 13:23:05.842263 4897 scope.go:117] "RemoveContainer" containerID="255cd1802860ae90698df05cc481c9ae31131baa774fa2ac864a5ab1cc3659ff" Feb 28 13:23:05 crc kubenswrapper[4897]: I0228 13:23:05.868997 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-674db64bb4-8bv5v"] Feb 28 13:23:05 crc kubenswrapper[4897]: I0228 13:23:05.878098 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-674db64bb4-8bv5v"] Feb 28 13:23:05 crc kubenswrapper[4897]: I0228 13:23:05.882760 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-88589bd7b-5gxf8"] Feb 28 13:23:05 crc kubenswrapper[4897]: I0228 13:23:05.886414 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-88589bd7b-5gxf8"] Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.414946 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d676b65b6-k8xvz"] Feb 28 13:23:06 crc kubenswrapper[4897]: E0228 13:23:06.415164 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec905783-50db-440a-95e0-97aa7e68dc83" 
containerName="route-controller-manager" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.415177 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec905783-50db-440a-95e0-97aa7e68dc83" containerName="route-controller-manager" Feb 28 13:23:06 crc kubenswrapper[4897]: E0228 13:23:06.415192 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1961d75-92e4-4b2d-b1ee-f0a553ed037c" containerName="controller-manager" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.415198 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1961d75-92e4-4b2d-b1ee-f0a553ed037c" containerName="controller-manager" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.415283 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec905783-50db-440a-95e0-97aa7e68dc83" containerName="route-controller-manager" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.415292 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1961d75-92e4-4b2d-b1ee-f0a553ed037c" containerName="controller-manager" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.415657 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d676b65b6-k8xvz" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.426545 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.426701 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.426915 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.426924 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.427163 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.427191 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.429537 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d6744ffd5-fq2kh"] Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.430177 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-fq2kh" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.432949 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.438704 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.438956 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.439124 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.439231 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.439342 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.439498 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.448615 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d676b65b6-k8xvz"] Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.484276 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1961d75-92e4-4b2d-b1ee-f0a553ed037c" path="/var/lib/kubelet/pods/d1961d75-92e4-4b2d-b1ee-f0a553ed037c/volumes" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.485692 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="ec905783-50db-440a-95e0-97aa7e68dc83" path="/var/lib/kubelet/pods/ec905783-50db-440a-95e0-97aa7e68dc83/volumes" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.486631 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d6744ffd5-fq2kh"] Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.549818 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f32e8d5e-a618-4b46-b728-927c78d1c0fd-client-ca\") pod \"controller-manager-d676b65b6-k8xvz\" (UID: \"f32e8d5e-a618-4b46-b728-927c78d1c0fd\") " pod="openshift-controller-manager/controller-manager-d676b65b6-k8xvz" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.549882 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2-client-ca\") pod \"route-controller-manager-7d6744ffd5-fq2kh\" (UID: \"f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2\") " pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-fq2kh" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.549923 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d99tw\" (UniqueName: \"kubernetes.io/projected/f32e8d5e-a618-4b46-b728-927c78d1c0fd-kube-api-access-d99tw\") pod \"controller-manager-d676b65b6-k8xvz\" (UID: \"f32e8d5e-a618-4b46-b728-927c78d1c0fd\") " pod="openshift-controller-manager/controller-manager-d676b65b6-k8xvz" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.549959 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2-config\") pod \"route-controller-manager-7d6744ffd5-fq2kh\" (UID: 
\"f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2\") " pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-fq2kh" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.550029 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f32e8d5e-a618-4b46-b728-927c78d1c0fd-serving-cert\") pod \"controller-manager-d676b65b6-k8xvz\" (UID: \"f32e8d5e-a618-4b46-b728-927c78d1c0fd\") " pod="openshift-controller-manager/controller-manager-d676b65b6-k8xvz" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.550095 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2-serving-cert\") pod \"route-controller-manager-7d6744ffd5-fq2kh\" (UID: \"f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2\") " pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-fq2kh" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.550123 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfnnt\" (UniqueName: \"kubernetes.io/projected/f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2-kube-api-access-lfnnt\") pod \"route-controller-manager-7d6744ffd5-fq2kh\" (UID: \"f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2\") " pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-fq2kh" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.550154 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f32e8d5e-a618-4b46-b728-927c78d1c0fd-config\") pod \"controller-manager-d676b65b6-k8xvz\" (UID: \"f32e8d5e-a618-4b46-b728-927c78d1c0fd\") " pod="openshift-controller-manager/controller-manager-d676b65b6-k8xvz" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.550210 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f32e8d5e-a618-4b46-b728-927c78d1c0fd-proxy-ca-bundles\") pod \"controller-manager-d676b65b6-k8xvz\" (UID: \"f32e8d5e-a618-4b46-b728-927c78d1c0fd\") " pod="openshift-controller-manager/controller-manager-d676b65b6-k8xvz" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.651249 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfnnt\" (UniqueName: \"kubernetes.io/projected/f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2-kube-api-access-lfnnt\") pod \"route-controller-manager-7d6744ffd5-fq2kh\" (UID: \"f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2\") " pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-fq2kh" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.651834 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f32e8d5e-a618-4b46-b728-927c78d1c0fd-config\") pod \"controller-manager-d676b65b6-k8xvz\" (UID: \"f32e8d5e-a618-4b46-b728-927c78d1c0fd\") " pod="openshift-controller-manager/controller-manager-d676b65b6-k8xvz" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.651887 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2-serving-cert\") pod \"route-controller-manager-7d6744ffd5-fq2kh\" (UID: \"f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2\") " pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-fq2kh" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.651968 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f32e8d5e-a618-4b46-b728-927c78d1c0fd-proxy-ca-bundles\") pod \"controller-manager-d676b65b6-k8xvz\" (UID: 
\"f32e8d5e-a618-4b46-b728-927c78d1c0fd\") " pod="openshift-controller-manager/controller-manager-d676b65b6-k8xvz" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.652108 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f32e8d5e-a618-4b46-b728-927c78d1c0fd-client-ca\") pod \"controller-manager-d676b65b6-k8xvz\" (UID: \"f32e8d5e-a618-4b46-b728-927c78d1c0fd\") " pod="openshift-controller-manager/controller-manager-d676b65b6-k8xvz" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.652167 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2-client-ca\") pod \"route-controller-manager-7d6744ffd5-fq2kh\" (UID: \"f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2\") " pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-fq2kh" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.652209 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d99tw\" (UniqueName: \"kubernetes.io/projected/f32e8d5e-a618-4b46-b728-927c78d1c0fd-kube-api-access-d99tw\") pod \"controller-manager-d676b65b6-k8xvz\" (UID: \"f32e8d5e-a618-4b46-b728-927c78d1c0fd\") " pod="openshift-controller-manager/controller-manager-d676b65b6-k8xvz" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.652259 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2-config\") pod \"route-controller-manager-7d6744ffd5-fq2kh\" (UID: \"f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2\") " pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-fq2kh" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.652398 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f32e8d5e-a618-4b46-b728-927c78d1c0fd-serving-cert\") pod \"controller-manager-d676b65b6-k8xvz\" (UID: \"f32e8d5e-a618-4b46-b728-927c78d1c0fd\") " pod="openshift-controller-manager/controller-manager-d676b65b6-k8xvz" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.654464 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f32e8d5e-a618-4b46-b728-927c78d1c0fd-config\") pod \"controller-manager-d676b65b6-k8xvz\" (UID: \"f32e8d5e-a618-4b46-b728-927c78d1c0fd\") " pod="openshift-controller-manager/controller-manager-d676b65b6-k8xvz" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.654520 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2-client-ca\") pod \"route-controller-manager-7d6744ffd5-fq2kh\" (UID: \"f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2\") " pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-fq2kh" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.655932 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2-config\") pod \"route-controller-manager-7d6744ffd5-fq2kh\" (UID: \"f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2\") " pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-fq2kh" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.656901 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f32e8d5e-a618-4b46-b728-927c78d1c0fd-client-ca\") pod \"controller-manager-d676b65b6-k8xvz\" (UID: \"f32e8d5e-a618-4b46-b728-927c78d1c0fd\") " pod="openshift-controller-manager/controller-manager-d676b65b6-k8xvz" Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.659181 4897 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2-serving-cert\") pod \"route-controller-manager-7d6744ffd5-fq2kh\" (UID: \"f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2\") " pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-fq2kh"
Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.659285 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f32e8d5e-a618-4b46-b728-927c78d1c0fd-proxy-ca-bundles\") pod \"controller-manager-d676b65b6-k8xvz\" (UID: \"f32e8d5e-a618-4b46-b728-927c78d1c0fd\") " pod="openshift-controller-manager/controller-manager-d676b65b6-k8xvz"
Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.659806 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f32e8d5e-a618-4b46-b728-927c78d1c0fd-serving-cert\") pod \"controller-manager-d676b65b6-k8xvz\" (UID: \"f32e8d5e-a618-4b46-b728-927c78d1c0fd\") " pod="openshift-controller-manager/controller-manager-d676b65b6-k8xvz"
Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.686648 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfnnt\" (UniqueName: \"kubernetes.io/projected/f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2-kube-api-access-lfnnt\") pod \"route-controller-manager-7d6744ffd5-fq2kh\" (UID: \"f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2\") " pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-fq2kh"
Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.688820 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d99tw\" (UniqueName: \"kubernetes.io/projected/f32e8d5e-a618-4b46-b728-927c78d1c0fd-kube-api-access-d99tw\") pod \"controller-manager-d676b65b6-k8xvz\" (UID: \"f32e8d5e-a618-4b46-b728-927c78d1c0fd\") " pod="openshift-controller-manager/controller-manager-d676b65b6-k8xvz"
Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.783753 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d676b65b6-k8xvz"
Feb 28 13:23:06 crc kubenswrapper[4897]: I0228 13:23:06.790269 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-fq2kh"
Feb 28 13:23:07 crc kubenswrapper[4897]: I0228 13:23:07.073522 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d6744ffd5-fq2kh"]
Feb 28 13:23:07 crc kubenswrapper[4897]: W0228 13:23:07.082695 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2129620_9f0b_4ea9_88fb_1ecb6de8b0d2.slice/crio-34e56ccbd876a1e634586bfd8382272df9560eaad8928893857885d85db47af9 WatchSource:0}: Error finding container 34e56ccbd876a1e634586bfd8382272df9560eaad8928893857885d85db47af9: Status 404 returned error can't find the container with id 34e56ccbd876a1e634586bfd8382272df9560eaad8928893857885d85db47af9
Feb 28 13:23:07 crc kubenswrapper[4897]: I0228 13:23:07.252712 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d676b65b6-k8xvz"]
Feb 28 13:23:07 crc kubenswrapper[4897]: I0228 13:23:07.844180 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-fq2kh" event={"ID":"f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2","Type":"ContainerStarted","Data":"d8aea067a298029d01006089cec66936945da645148232a7fafab90fe55a25b5"}
Feb 28 13:23:07 crc kubenswrapper[4897]: I0228 13:23:07.844885 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-fq2kh"
Feb 28 13:23:07 crc kubenswrapper[4897]: I0228 13:23:07.844994 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-fq2kh" event={"ID":"f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2","Type":"ContainerStarted","Data":"34e56ccbd876a1e634586bfd8382272df9560eaad8928893857885d85db47af9"}
Feb 28 13:23:07 crc kubenswrapper[4897]: I0228 13:23:07.846173 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d676b65b6-k8xvz" event={"ID":"f32e8d5e-a618-4b46-b728-927c78d1c0fd","Type":"ContainerStarted","Data":"566eae52e2b371f64024dd9d8fcda02b12afa3c6397dff2bf035e56a8e1684b5"}
Feb 28 13:23:07 crc kubenswrapper[4897]: I0228 13:23:07.846219 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d676b65b6-k8xvz" event={"ID":"f32e8d5e-a618-4b46-b728-927c78d1c0fd","Type":"ContainerStarted","Data":"c3edeaf43bcd6e5fada7af80b68bdd1cfd9183a45365aa141d6aaafad46b138b"}
Feb 28 13:23:07 crc kubenswrapper[4897]: I0228 13:23:07.852518 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-fq2kh"
Feb 28 13:23:07 crc kubenswrapper[4897]: I0228 13:23:07.871701 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-fq2kh" podStartSLOduration=3.871687356 podStartE2EDuration="3.871687356s" podCreationTimestamp="2026-02-28 13:23:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:23:07.870160992 +0000 UTC m=+402.112481649" watchObservedRunningTime="2026-02-28 13:23:07.871687356 +0000 UTC m=+402.114008013"
Feb 28 13:23:07 crc kubenswrapper[4897]: I0228 13:23:07.895813 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d676b65b6-k8xvz" podStartSLOduration=3.895795468 podStartE2EDuration="3.895795468s" podCreationTimestamp="2026-02-28 13:23:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:23:07.894543551 +0000 UTC m=+402.136864208" watchObservedRunningTime="2026-02-28 13:23:07.895795468 +0000 UTC m=+402.138116135"
Feb 28 13:23:08 crc kubenswrapper[4897]: I0228 13:23:08.854154 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-d676b65b6-k8xvz"
Feb 28 13:23:08 crc kubenswrapper[4897]: I0228 13:23:08.860163 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d676b65b6-k8xvz"
Feb 28 13:23:24 crc kubenswrapper[4897]: I0228 13:23:24.485753 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d676b65b6-k8xvz"]
Feb 28 13:23:24 crc kubenswrapper[4897]: I0228 13:23:24.486594 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-d676b65b6-k8xvz" podUID="f32e8d5e-a618-4b46-b728-927c78d1c0fd" containerName="controller-manager" containerID="cri-o://566eae52e2b371f64024dd9d8fcda02b12afa3c6397dff2bf035e56a8e1684b5" gracePeriod=30
Feb 28 13:23:24 crc kubenswrapper[4897]: I0228 13:23:24.582931 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d6744ffd5-fq2kh"]
Feb 28 13:23:24 crc kubenswrapper[4897]: I0228 13:23:24.583164 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-fq2kh" podUID="f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2" containerName="route-controller-manager" containerID="cri-o://d8aea067a298029d01006089cec66936945da645148232a7fafab90fe55a25b5" gracePeriod=30
Feb 28 13:23:24 crc kubenswrapper[4897]: I0228 13:23:24.967534 4897 generic.go:334] "Generic (PLEG): container finished" podID="f32e8d5e-a618-4b46-b728-927c78d1c0fd" containerID="566eae52e2b371f64024dd9d8fcda02b12afa3c6397dff2bf035e56a8e1684b5" exitCode=0
Feb 28 13:23:24 crc kubenswrapper[4897]: I0228 13:23:24.967634 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d676b65b6-k8xvz" event={"ID":"f32e8d5e-a618-4b46-b728-927c78d1c0fd","Type":"ContainerDied","Data":"566eae52e2b371f64024dd9d8fcda02b12afa3c6397dff2bf035e56a8e1684b5"}
Feb 28 13:23:24 crc kubenswrapper[4897]: I0228 13:23:24.969701 4897 generic.go:334] "Generic (PLEG): container finished" podID="f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2" containerID="d8aea067a298029d01006089cec66936945da645148232a7fafab90fe55a25b5" exitCode=0
Feb 28 13:23:24 crc kubenswrapper[4897]: I0228 13:23:24.969765 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-fq2kh" event={"ID":"f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2","Type":"ContainerDied","Data":"d8aea067a298029d01006089cec66936945da645148232a7fafab90fe55a25b5"}
Feb 28 13:23:25 crc kubenswrapper[4897]: I0228 13:23:25.051896 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-fq2kh"
Feb 28 13:23:25 crc kubenswrapper[4897]: I0228 13:23:25.063278 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d676b65b6-k8xvz"
Feb 28 13:23:25 crc kubenswrapper[4897]: I0228 13:23:25.136674 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d99tw\" (UniqueName: \"kubernetes.io/projected/f32e8d5e-a618-4b46-b728-927c78d1c0fd-kube-api-access-d99tw\") pod \"f32e8d5e-a618-4b46-b728-927c78d1c0fd\" (UID: \"f32e8d5e-a618-4b46-b728-927c78d1c0fd\") "
Feb 28 13:23:25 crc kubenswrapper[4897]: I0228 13:23:25.136762 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfnnt\" (UniqueName: \"kubernetes.io/projected/f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2-kube-api-access-lfnnt\") pod \"f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2\" (UID: \"f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2\") "
Feb 28 13:23:25 crc kubenswrapper[4897]: I0228 13:23:25.136807 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2-config\") pod \"f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2\" (UID: \"f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2\") "
Feb 28 13:23:25 crc kubenswrapper[4897]: I0228 13:23:25.136887 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f32e8d5e-a618-4b46-b728-927c78d1c0fd-serving-cert\") pod \"f32e8d5e-a618-4b46-b728-927c78d1c0fd\" (UID: \"f32e8d5e-a618-4b46-b728-927c78d1c0fd\") "
Feb 28 13:23:25 crc kubenswrapper[4897]: I0228 13:23:25.136959 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f32e8d5e-a618-4b46-b728-927c78d1c0fd-config\") pod \"f32e8d5e-a618-4b46-b728-927c78d1c0fd\" (UID: \"f32e8d5e-a618-4b46-b728-927c78d1c0fd\") "
Feb 28 13:23:25 crc kubenswrapper[4897]: I0228 13:23:25.137033 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2-serving-cert\") pod \"f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2\" (UID: \"f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2\") "
Feb 28 13:23:25 crc kubenswrapper[4897]: I0228 13:23:25.137073 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2-client-ca\") pod \"f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2\" (UID: \"f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2\") "
Feb 28 13:23:25 crc kubenswrapper[4897]: I0228 13:23:25.137111 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f32e8d5e-a618-4b46-b728-927c78d1c0fd-client-ca\") pod \"f32e8d5e-a618-4b46-b728-927c78d1c0fd\" (UID: \"f32e8d5e-a618-4b46-b728-927c78d1c0fd\") "
Feb 28 13:23:25 crc kubenswrapper[4897]: I0228 13:23:25.137153 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f32e8d5e-a618-4b46-b728-927c78d1c0fd-proxy-ca-bundles\") pod \"f32e8d5e-a618-4b46-b728-927c78d1c0fd\" (UID: \"f32e8d5e-a618-4b46-b728-927c78d1c0fd\") "
Feb 28 13:23:25 crc kubenswrapper[4897]: I0228 13:23:25.139413 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2-config" (OuterVolumeSpecName: "config") pod "f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2" (UID: "f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 13:23:25 crc kubenswrapper[4897]: I0228 13:23:25.139509 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f32e8d5e-a618-4b46-b728-927c78d1c0fd-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f32e8d5e-a618-4b46-b728-927c78d1c0fd" (UID: "f32e8d5e-a618-4b46-b728-927c78d1c0fd"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 13:23:25 crc kubenswrapper[4897]: I0228 13:23:25.139542 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f32e8d5e-a618-4b46-b728-927c78d1c0fd-client-ca" (OuterVolumeSpecName: "client-ca") pod "f32e8d5e-a618-4b46-b728-927c78d1c0fd" (UID: "f32e8d5e-a618-4b46-b728-927c78d1c0fd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 13:23:25 crc kubenswrapper[4897]: I0228 13:23:25.139546 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2-client-ca" (OuterVolumeSpecName: "client-ca") pod "f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2" (UID: "f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 13:23:25 crc kubenswrapper[4897]: I0228 13:23:25.139641 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f32e8d5e-a618-4b46-b728-927c78d1c0fd-config" (OuterVolumeSpecName: "config") pod "f32e8d5e-a618-4b46-b728-927c78d1c0fd" (UID: "f32e8d5e-a618-4b46-b728-927c78d1c0fd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 13:23:25 crc kubenswrapper[4897]: I0228 13:23:25.145516 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f32e8d5e-a618-4b46-b728-927c78d1c0fd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f32e8d5e-a618-4b46-b728-927c78d1c0fd" (UID: "f32e8d5e-a618-4b46-b728-927c78d1c0fd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 13:23:25 crc kubenswrapper[4897]: I0228 13:23:25.145541 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f32e8d5e-a618-4b46-b728-927c78d1c0fd-kube-api-access-d99tw" (OuterVolumeSpecName: "kube-api-access-d99tw") pod "f32e8d5e-a618-4b46-b728-927c78d1c0fd" (UID: "f32e8d5e-a618-4b46-b728-927c78d1c0fd"). InnerVolumeSpecName "kube-api-access-d99tw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 13:23:25 crc kubenswrapper[4897]: I0228 13:23:25.150755 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2-kube-api-access-lfnnt" (OuterVolumeSpecName: "kube-api-access-lfnnt") pod "f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2" (UID: "f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2"). InnerVolumeSpecName "kube-api-access-lfnnt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 13:23:25 crc kubenswrapper[4897]: I0228 13:23:25.160335 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2" (UID: "f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 13:23:25 crc kubenswrapper[4897]: I0228 13:23:25.238931 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f32e8d5e-a618-4b46-b728-927c78d1c0fd-config\") on node \"crc\" DevicePath \"\""
Feb 28 13:23:25 crc kubenswrapper[4897]: I0228 13:23:25.238972 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 28 13:23:25 crc kubenswrapper[4897]: I0228 13:23:25.238984 4897 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2-client-ca\") on node \"crc\" DevicePath \"\""
Feb 28 13:23:25 crc kubenswrapper[4897]: I0228 13:23:25.238996 4897 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f32e8d5e-a618-4b46-b728-927c78d1c0fd-client-ca\") on node \"crc\" DevicePath \"\""
Feb 28 13:23:25 crc kubenswrapper[4897]: I0228 13:23:25.239006 4897 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f32e8d5e-a618-4b46-b728-927c78d1c0fd-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 28 13:23:25 crc kubenswrapper[4897]: I0228 13:23:25.239021 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d99tw\" (UniqueName: \"kubernetes.io/projected/f32e8d5e-a618-4b46-b728-927c78d1c0fd-kube-api-access-d99tw\") on node \"crc\" DevicePath \"\""
Feb 28 13:23:25 crc kubenswrapper[4897]: I0228 13:23:25.239033 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfnnt\" (UniqueName: \"kubernetes.io/projected/f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2-kube-api-access-lfnnt\") on node \"crc\" DevicePath \"\""
Feb 28 13:23:25 crc kubenswrapper[4897]: I0228 13:23:25.239043 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2-config\") on node \"crc\" DevicePath \"\""
Feb 28 13:23:25 crc kubenswrapper[4897]: I0228 13:23:25.239053 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f32e8d5e-a618-4b46-b728-927c78d1c0fd-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 28 13:23:25 crc kubenswrapper[4897]: I0228 13:23:25.980749 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-fq2kh" event={"ID":"f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2","Type":"ContainerDied","Data":"34e56ccbd876a1e634586bfd8382272df9560eaad8928893857885d85db47af9"}
Feb 28 13:23:25 crc kubenswrapper[4897]: I0228 13:23:25.980793 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-fq2kh"
Feb 28 13:23:25 crc kubenswrapper[4897]: I0228 13:23:25.981176 4897 scope.go:117] "RemoveContainer" containerID="d8aea067a298029d01006089cec66936945da645148232a7fafab90fe55a25b5"
Feb 28 13:23:25 crc kubenswrapper[4897]: I0228 13:23:25.985748 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d676b65b6-k8xvz" event={"ID":"f32e8d5e-a618-4b46-b728-927c78d1c0fd","Type":"ContainerDied","Data":"c3edeaf43bcd6e5fada7af80b68bdd1cfd9183a45365aa141d6aaafad46b138b"}
Feb 28 13:23:25 crc kubenswrapper[4897]: I0228 13:23:25.986262 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d676b65b6-k8xvz"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.015843 4897 scope.go:117] "RemoveContainer" containerID="566eae52e2b371f64024dd9d8fcda02b12afa3c6397dff2bf035e56a8e1684b5"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.031411 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d6744ffd5-fq2kh"]
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.040663 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d6744ffd5-fq2kh"]
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.057046 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d676b65b6-k8xvz"]
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.065392 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-d676b65b6-k8xvz"]
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.431948 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b5c86bf7-4qssw"]
Feb 28 13:23:26 crc kubenswrapper[4897]: E0228 13:23:26.432160 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f32e8d5e-a618-4b46-b728-927c78d1c0fd" containerName="controller-manager"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.432173 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f32e8d5e-a618-4b46-b728-927c78d1c0fd" containerName="controller-manager"
Feb 28 13:23:26 crc kubenswrapper[4897]: E0228 13:23:26.432183 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2" containerName="route-controller-manager"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.432189 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2" containerName="route-controller-manager"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.432287 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f32e8d5e-a618-4b46-b728-927c78d1c0fd" containerName="controller-manager"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.432300 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2" containerName="route-controller-manager"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.432687 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b5c86bf7-4qssw"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.437172 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.437281 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.437400 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.437585 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.438430 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.449119 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-59f6bf4b6f-jwc9b"]
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.449914 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b5c86bf7-4qssw"]
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.450010 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-59f6bf4b6f-jwc9b"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.455064 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-59f6bf4b6f-jwc9b"]
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.455518 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.456935 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.457110 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.463466 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.463936 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.464172 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.470084 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.470379 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.493283 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2" path="/var/lib/kubelet/pods/f2129620-9f0b-4ea9-88fb-1ecb6de8b0d2/volumes"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.494455 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f32e8d5e-a618-4b46-b728-927c78d1c0fd" path="/var/lib/kubelet/pods/f32e8d5e-a618-4b46-b728-927c78d1c0fd/volumes"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.559416 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d44d51a5-f87d-4a84-9498-18cbf157eda1-serving-cert\") pod \"route-controller-manager-6b5c86bf7-4qssw\" (UID: \"d44d51a5-f87d-4a84-9498-18cbf157eda1\") " pod="openshift-route-controller-manager/route-controller-manager-6b5c86bf7-4qssw"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.559608 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d538b55-5aed-4f75-979f-d6b38fcf9de9-config\") pod \"controller-manager-59f6bf4b6f-jwc9b\" (UID: \"1d538b55-5aed-4f75-979f-d6b38fcf9de9\") " pod="openshift-controller-manager/controller-manager-59f6bf4b6f-jwc9b"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.559702 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d44d51a5-f87d-4a84-9498-18cbf157eda1-client-ca\") pod \"route-controller-manager-6b5c86bf7-4qssw\" (UID: \"d44d51a5-f87d-4a84-9498-18cbf157eda1\") " pod="openshift-route-controller-manager/route-controller-manager-6b5c86bf7-4qssw"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.559742 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl6xs\" (UniqueName: \"kubernetes.io/projected/1d538b55-5aed-4f75-979f-d6b38fcf9de9-kube-api-access-sl6xs\") pod \"controller-manager-59f6bf4b6f-jwc9b\" (UID: \"1d538b55-5aed-4f75-979f-d6b38fcf9de9\") " pod="openshift-controller-manager/controller-manager-59f6bf4b6f-jwc9b"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.559781 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d44d51a5-f87d-4a84-9498-18cbf157eda1-config\") pod \"route-controller-manager-6b5c86bf7-4qssw\" (UID: \"d44d51a5-f87d-4a84-9498-18cbf157eda1\") " pod="openshift-route-controller-manager/route-controller-manager-6b5c86bf7-4qssw"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.559872 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d538b55-5aed-4f75-979f-d6b38fcf9de9-serving-cert\") pod \"controller-manager-59f6bf4b6f-jwc9b\" (UID: \"1d538b55-5aed-4f75-979f-d6b38fcf9de9\") " pod="openshift-controller-manager/controller-manager-59f6bf4b6f-jwc9b"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.559947 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d538b55-5aed-4f75-979f-d6b38fcf9de9-proxy-ca-bundles\") pod \"controller-manager-59f6bf4b6f-jwc9b\" (UID: \"1d538b55-5aed-4f75-979f-d6b38fcf9de9\") " pod="openshift-controller-manager/controller-manager-59f6bf4b6f-jwc9b"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.560031 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hclhf\" (UniqueName: \"kubernetes.io/projected/d44d51a5-f87d-4a84-9498-18cbf157eda1-kube-api-access-hclhf\") pod \"route-controller-manager-6b5c86bf7-4qssw\" (UID: \"d44d51a5-f87d-4a84-9498-18cbf157eda1\") " pod="openshift-route-controller-manager/route-controller-manager-6b5c86bf7-4qssw"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.560114 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d538b55-5aed-4f75-979f-d6b38fcf9de9-client-ca\") pod \"controller-manager-59f6bf4b6f-jwc9b\" (UID: \"1d538b55-5aed-4f75-979f-d6b38fcf9de9\") " pod="openshift-controller-manager/controller-manager-59f6bf4b6f-jwc9b"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.661366 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d538b55-5aed-4f75-979f-d6b38fcf9de9-config\") pod \"controller-manager-59f6bf4b6f-jwc9b\" (UID: \"1d538b55-5aed-4f75-979f-d6b38fcf9de9\") " pod="openshift-controller-manager/controller-manager-59f6bf4b6f-jwc9b"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.661554 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d44d51a5-f87d-4a84-9498-18cbf157eda1-client-ca\") pod \"route-controller-manager-6b5c86bf7-4qssw\" (UID: \"d44d51a5-f87d-4a84-9498-18cbf157eda1\") " pod="openshift-route-controller-manager/route-controller-manager-6b5c86bf7-4qssw"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.663352 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl6xs\" (UniqueName: \"kubernetes.io/projected/1d538b55-5aed-4f75-979f-d6b38fcf9de9-kube-api-access-sl6xs\") pod \"controller-manager-59f6bf4b6f-jwc9b\" (UID: \"1d538b55-5aed-4f75-979f-d6b38fcf9de9\") " pod="openshift-controller-manager/controller-manager-59f6bf4b6f-jwc9b"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.663641 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d44d51a5-f87d-4a84-9498-18cbf157eda1-config\") pod \"route-controller-manager-6b5c86bf7-4qssw\" (UID: \"d44d51a5-f87d-4a84-9498-18cbf157eda1\") " pod="openshift-route-controller-manager/route-controller-manager-6b5c86bf7-4qssw"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.665246 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d44d51a5-f87d-4a84-9498-18cbf157eda1-config\") pod \"route-controller-manager-6b5c86bf7-4qssw\" (UID: \"d44d51a5-f87d-4a84-9498-18cbf157eda1\") " pod="openshift-route-controller-manager/route-controller-manager-6b5c86bf7-4qssw"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.665429 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d538b55-5aed-4f75-979f-d6b38fcf9de9-serving-cert\") pod \"controller-manager-59f6bf4b6f-jwc9b\" (UID: \"1d538b55-5aed-4f75-979f-d6b38fcf9de9\") " pod="openshift-controller-manager/controller-manager-59f6bf4b6f-jwc9b"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.665474 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d538b55-5aed-4f75-979f-d6b38fcf9de9-proxy-ca-bundles\") pod \"controller-manager-59f6bf4b6f-jwc9b\" (UID: \"1d538b55-5aed-4f75-979f-d6b38fcf9de9\") " pod="openshift-controller-manager/controller-manager-59f6bf4b6f-jwc9b"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.665776 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d538b55-5aed-4f75-979f-d6b38fcf9de9-config\") pod \"controller-manager-59f6bf4b6f-jwc9b\" (UID: \"1d538b55-5aed-4f75-979f-d6b38fcf9de9\") " pod="openshift-controller-manager/controller-manager-59f6bf4b6f-jwc9b"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.666722 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d44d51a5-f87d-4a84-9498-18cbf157eda1-client-ca\") pod \"route-controller-manager-6b5c86bf7-4qssw\" (UID: \"d44d51a5-f87d-4a84-9498-18cbf157eda1\") " pod="openshift-route-controller-manager/route-controller-manager-6b5c86bf7-4qssw"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.666984 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d538b55-5aed-4f75-979f-d6b38fcf9de9-proxy-ca-bundles\") pod \"controller-manager-59f6bf4b6f-jwc9b\" (UID: \"1d538b55-5aed-4f75-979f-d6b38fcf9de9\") " pod="openshift-controller-manager/controller-manager-59f6bf4b6f-jwc9b"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.667235 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hclhf\" (UniqueName: \"kubernetes.io/projected/d44d51a5-f87d-4a84-9498-18cbf157eda1-kube-api-access-hclhf\") pod \"route-controller-manager-6b5c86bf7-4qssw\" (UID: \"d44d51a5-f87d-4a84-9498-18cbf157eda1\") " pod="openshift-route-controller-manager/route-controller-manager-6b5c86bf7-4qssw"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.667272 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d538b55-5aed-4f75-979f-d6b38fcf9de9-client-ca\") pod \"controller-manager-59f6bf4b6f-jwc9b\" (UID: \"1d538b55-5aed-4f75-979f-d6b38fcf9de9\") " pod="openshift-controller-manager/controller-manager-59f6bf4b6f-jwc9b"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.668162 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d538b55-5aed-4f75-979f-d6b38fcf9de9-client-ca\") pod \"controller-manager-59f6bf4b6f-jwc9b\" (UID: \"1d538b55-5aed-4f75-979f-d6b38fcf9de9\") " pod="openshift-controller-manager/controller-manager-59f6bf4b6f-jwc9b"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.668338 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d44d51a5-f87d-4a84-9498-18cbf157eda1-serving-cert\") pod \"route-controller-manager-6b5c86bf7-4qssw\" (UID: \"d44d51a5-f87d-4a84-9498-18cbf157eda1\") " pod="openshift-route-controller-manager/route-controller-manager-6b5c86bf7-4qssw"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.673931 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d44d51a5-f87d-4a84-9498-18cbf157eda1-serving-cert\") pod \"route-controller-manager-6b5c86bf7-4qssw\" (UID: \"d44d51a5-f87d-4a84-9498-18cbf157eda1\") " pod="openshift-route-controller-manager/route-controller-manager-6b5c86bf7-4qssw"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.677918 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d538b55-5aed-4f75-979f-d6b38fcf9de9-serving-cert\") pod \"controller-manager-59f6bf4b6f-jwc9b\" (UID: \"1d538b55-5aed-4f75-979f-d6b38fcf9de9\") " pod="openshift-controller-manager/controller-manager-59f6bf4b6f-jwc9b"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.682085 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl6xs\" (UniqueName: \"kubernetes.io/projected/1d538b55-5aed-4f75-979f-d6b38fcf9de9-kube-api-access-sl6xs\") pod \"controller-manager-59f6bf4b6f-jwc9b\" (UID: \"1d538b55-5aed-4f75-979f-d6b38fcf9de9\") " pod="openshift-controller-manager/controller-manager-59f6bf4b6f-jwc9b"
Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.702977 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hclhf\" (UniqueName:
\"kubernetes.io/projected/d44d51a5-f87d-4a84-9498-18cbf157eda1-kube-api-access-hclhf\") pod \"route-controller-manager-6b5c86bf7-4qssw\" (UID: \"d44d51a5-f87d-4a84-9498-18cbf157eda1\") " pod="openshift-route-controller-manager/route-controller-manager-6b5c86bf7-4qssw" Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.799595 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b5c86bf7-4qssw" Feb 28 13:23:26 crc kubenswrapper[4897]: I0228 13:23:26.811688 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-59f6bf4b6f-jwc9b" Feb 28 13:23:27 crc kubenswrapper[4897]: I0228 13:23:27.300163 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-59f6bf4b6f-jwc9b"] Feb 28 13:23:27 crc kubenswrapper[4897]: I0228 13:23:27.304228 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b5c86bf7-4qssw"] Feb 28 13:23:27 crc kubenswrapper[4897]: W0228 13:23:27.316810 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd44d51a5_f87d_4a84_9498_18cbf157eda1.slice/crio-6f84bfdef465c8bc27386dd065727ff8d204c155817e6ecd87c2cf11c8fa6c95 WatchSource:0}: Error finding container 6f84bfdef465c8bc27386dd065727ff8d204c155817e6ecd87c2cf11c8fa6c95: Status 404 returned error can't find the container with id 6f84bfdef465c8bc27386dd065727ff8d204c155817e6ecd87c2cf11c8fa6c95 Feb 28 13:23:27 crc kubenswrapper[4897]: W0228 13:23:27.319684 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d538b55_5aed_4f75_979f_d6b38fcf9de9.slice/crio-7c11745c7af372e3b8e40883e5fda4f9b1f2079b80c8d677b69f8a0fcbef1567 WatchSource:0}: Error finding container 
7c11745c7af372e3b8e40883e5fda4f9b1f2079b80c8d677b69f8a0fcbef1567: Status 404 returned error can't find the container with id 7c11745c7af372e3b8e40883e5fda4f9b1f2079b80c8d677b69f8a0fcbef1567 Feb 28 13:23:28 crc kubenswrapper[4897]: I0228 13:23:28.018294 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b5c86bf7-4qssw" event={"ID":"d44d51a5-f87d-4a84-9498-18cbf157eda1","Type":"ContainerStarted","Data":"f37c144958403893159349dd7d6d177207d3226936b3ba1e0b39ea6408cbc647"} Feb 28 13:23:28 crc kubenswrapper[4897]: I0228 13:23:28.018651 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b5c86bf7-4qssw" event={"ID":"d44d51a5-f87d-4a84-9498-18cbf157eda1","Type":"ContainerStarted","Data":"6f84bfdef465c8bc27386dd065727ff8d204c155817e6ecd87c2cf11c8fa6c95"} Feb 28 13:23:28 crc kubenswrapper[4897]: I0228 13:23:28.018870 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6b5c86bf7-4qssw" Feb 28 13:23:28 crc kubenswrapper[4897]: I0228 13:23:28.026292 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59f6bf4b6f-jwc9b" event={"ID":"1d538b55-5aed-4f75-979f-d6b38fcf9de9","Type":"ContainerStarted","Data":"763e3a33da1319dbbf9da781a5e977f2e9c5846f5aefcf0f56e850011c727d7b"} Feb 28 13:23:28 crc kubenswrapper[4897]: I0228 13:23:28.026371 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59f6bf4b6f-jwc9b" event={"ID":"1d538b55-5aed-4f75-979f-d6b38fcf9de9","Type":"ContainerStarted","Data":"7c11745c7af372e3b8e40883e5fda4f9b1f2079b80c8d677b69f8a0fcbef1567"} Feb 28 13:23:28 crc kubenswrapper[4897]: I0228 13:23:28.027126 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-59f6bf4b6f-jwc9b" Feb 28 
13:23:28 crc kubenswrapper[4897]: I0228 13:23:28.034201 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-59f6bf4b6f-jwc9b" Feb 28 13:23:28 crc kubenswrapper[4897]: I0228 13:23:28.042154 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6b5c86bf7-4qssw" podStartSLOduration=4.042139563 podStartE2EDuration="4.042139563s" podCreationTimestamp="2026-02-28 13:23:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:23:28.041797603 +0000 UTC m=+422.284118290" watchObservedRunningTime="2026-02-28 13:23:28.042139563 +0000 UTC m=+422.284460260" Feb 28 13:23:28 crc kubenswrapper[4897]: I0228 13:23:28.059853 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-59f6bf4b6f-jwc9b" podStartSLOduration=4.059827768 podStartE2EDuration="4.059827768s" podCreationTimestamp="2026-02-28 13:23:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:23:28.058678605 +0000 UTC m=+422.300999302" watchObservedRunningTime="2026-02-28 13:23:28.059827768 +0000 UTC m=+422.302148435" Feb 28 13:23:28 crc kubenswrapper[4897]: I0228 13:23:28.139541 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6b5c86bf7-4qssw" Feb 28 13:23:33 crc kubenswrapper[4897]: I0228 13:23:33.371377 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 13:23:33 crc 
kubenswrapper[4897]: I0228 13:23:33.372358 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 13:23:44 crc kubenswrapper[4897]: I0228 13:23:44.489816 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-59f6bf4b6f-jwc9b"] Feb 28 13:23:44 crc kubenswrapper[4897]: I0228 13:23:44.490692 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-59f6bf4b6f-jwc9b" podUID="1d538b55-5aed-4f75-979f-d6b38fcf9de9" containerName="controller-manager" containerID="cri-o://763e3a33da1319dbbf9da781a5e977f2e9c5846f5aefcf0f56e850011c727d7b" gracePeriod=30 Feb 28 13:23:44 crc kubenswrapper[4897]: I0228 13:23:44.509146 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b5c86bf7-4qssw"] Feb 28 13:23:44 crc kubenswrapper[4897]: I0228 13:23:44.509404 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6b5c86bf7-4qssw" podUID="d44d51a5-f87d-4a84-9498-18cbf157eda1" containerName="route-controller-manager" containerID="cri-o://f37c144958403893159349dd7d6d177207d3226936b3ba1e0b39ea6408cbc647" gracePeriod=30 Feb 28 13:23:45 crc kubenswrapper[4897]: I0228 13:23:45.030225 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b5c86bf7-4qssw" Feb 28 13:23:45 crc kubenswrapper[4897]: I0228 13:23:45.125759 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d44d51a5-f87d-4a84-9498-18cbf157eda1-serving-cert\") pod \"d44d51a5-f87d-4a84-9498-18cbf157eda1\" (UID: \"d44d51a5-f87d-4a84-9498-18cbf157eda1\") " Feb 28 13:23:45 crc kubenswrapper[4897]: I0228 13:23:45.126131 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hclhf\" (UniqueName: \"kubernetes.io/projected/d44d51a5-f87d-4a84-9498-18cbf157eda1-kube-api-access-hclhf\") pod \"d44d51a5-f87d-4a84-9498-18cbf157eda1\" (UID: \"d44d51a5-f87d-4a84-9498-18cbf157eda1\") " Feb 28 13:23:45 crc kubenswrapper[4897]: I0228 13:23:45.126174 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d44d51a5-f87d-4a84-9498-18cbf157eda1-client-ca\") pod \"d44d51a5-f87d-4a84-9498-18cbf157eda1\" (UID: \"d44d51a5-f87d-4a84-9498-18cbf157eda1\") " Feb 28 13:23:45 crc kubenswrapper[4897]: I0228 13:23:45.126226 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d44d51a5-f87d-4a84-9498-18cbf157eda1-config\") pod \"d44d51a5-f87d-4a84-9498-18cbf157eda1\" (UID: \"d44d51a5-f87d-4a84-9498-18cbf157eda1\") " Feb 28 13:23:45 crc kubenswrapper[4897]: I0228 13:23:45.127396 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d44d51a5-f87d-4a84-9498-18cbf157eda1-client-ca" (OuterVolumeSpecName: "client-ca") pod "d44d51a5-f87d-4a84-9498-18cbf157eda1" (UID: "d44d51a5-f87d-4a84-9498-18cbf157eda1"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:23:45 crc kubenswrapper[4897]: I0228 13:23:45.127544 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d44d51a5-f87d-4a84-9498-18cbf157eda1-config" (OuterVolumeSpecName: "config") pod "d44d51a5-f87d-4a84-9498-18cbf157eda1" (UID: "d44d51a5-f87d-4a84-9498-18cbf157eda1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:23:45 crc kubenswrapper[4897]: I0228 13:23:45.127758 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d44d51a5-f87d-4a84-9498-18cbf157eda1-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:23:45 crc kubenswrapper[4897]: I0228 13:23:45.127798 4897 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d44d51a5-f87d-4a84-9498-18cbf157eda1-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 13:23:45 crc kubenswrapper[4897]: I0228 13:23:45.136851 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d44d51a5-f87d-4a84-9498-18cbf157eda1-kube-api-access-hclhf" (OuterVolumeSpecName: "kube-api-access-hclhf") pod "d44d51a5-f87d-4a84-9498-18cbf157eda1" (UID: "d44d51a5-f87d-4a84-9498-18cbf157eda1"). InnerVolumeSpecName "kube-api-access-hclhf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:23:45 crc kubenswrapper[4897]: I0228 13:23:45.138249 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d44d51a5-f87d-4a84-9498-18cbf157eda1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d44d51a5-f87d-4a84-9498-18cbf157eda1" (UID: "d44d51a5-f87d-4a84-9498-18cbf157eda1"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:23:45 crc kubenswrapper[4897]: I0228 13:23:45.147221 4897 generic.go:334] "Generic (PLEG): container finished" podID="d44d51a5-f87d-4a84-9498-18cbf157eda1" containerID="f37c144958403893159349dd7d6d177207d3226936b3ba1e0b39ea6408cbc647" exitCode=0 Feb 28 13:23:45 crc kubenswrapper[4897]: I0228 13:23:45.147286 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b5c86bf7-4qssw" Feb 28 13:23:45 crc kubenswrapper[4897]: I0228 13:23:45.147402 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b5c86bf7-4qssw" event={"ID":"d44d51a5-f87d-4a84-9498-18cbf157eda1","Type":"ContainerDied","Data":"f37c144958403893159349dd7d6d177207d3226936b3ba1e0b39ea6408cbc647"} Feb 28 13:23:45 crc kubenswrapper[4897]: I0228 13:23:45.147572 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b5c86bf7-4qssw" event={"ID":"d44d51a5-f87d-4a84-9498-18cbf157eda1","Type":"ContainerDied","Data":"6f84bfdef465c8bc27386dd065727ff8d204c155817e6ecd87c2cf11c8fa6c95"} Feb 28 13:23:45 crc kubenswrapper[4897]: I0228 13:23:45.147637 4897 scope.go:117] "RemoveContainer" containerID="f37c144958403893159349dd7d6d177207d3226936b3ba1e0b39ea6408cbc647" Feb 28 13:23:45 crc kubenswrapper[4897]: I0228 13:23:45.150657 4897 generic.go:334] "Generic (PLEG): container finished" podID="1d538b55-5aed-4f75-979f-d6b38fcf9de9" containerID="763e3a33da1319dbbf9da781a5e977f2e9c5846f5aefcf0f56e850011c727d7b" exitCode=0 Feb 28 13:23:45 crc kubenswrapper[4897]: I0228 13:23:45.150716 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59f6bf4b6f-jwc9b" 
event={"ID":"1d538b55-5aed-4f75-979f-d6b38fcf9de9","Type":"ContainerDied","Data":"763e3a33da1319dbbf9da781a5e977f2e9c5846f5aefcf0f56e850011c727d7b"} Feb 28 13:23:45 crc kubenswrapper[4897]: I0228 13:23:45.183869 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-59f6bf4b6f-jwc9b" Feb 28 13:23:45 crc kubenswrapper[4897]: I0228 13:23:45.189476 4897 scope.go:117] "RemoveContainer" containerID="f37c144958403893159349dd7d6d177207d3226936b3ba1e0b39ea6408cbc647" Feb 28 13:23:45 crc kubenswrapper[4897]: E0228 13:23:45.195207 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f37c144958403893159349dd7d6d177207d3226936b3ba1e0b39ea6408cbc647\": container with ID starting with f37c144958403893159349dd7d6d177207d3226936b3ba1e0b39ea6408cbc647 not found: ID does not exist" containerID="f37c144958403893159349dd7d6d177207d3226936b3ba1e0b39ea6408cbc647" Feb 28 13:23:45 crc kubenswrapper[4897]: I0228 13:23:45.195279 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f37c144958403893159349dd7d6d177207d3226936b3ba1e0b39ea6408cbc647"} err="failed to get container status \"f37c144958403893159349dd7d6d177207d3226936b3ba1e0b39ea6408cbc647\": rpc error: code = NotFound desc = could not find container \"f37c144958403893159349dd7d6d177207d3226936b3ba1e0b39ea6408cbc647\": container with ID starting with f37c144958403893159349dd7d6d177207d3226936b3ba1e0b39ea6408cbc647 not found: ID does not exist" Feb 28 13:23:45 crc kubenswrapper[4897]: I0228 13:23:45.198010 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b5c86bf7-4qssw"] Feb 28 13:23:45 crc kubenswrapper[4897]: I0228 13:23:45.202152 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6b5c86bf7-4qssw"] Feb 28 13:23:45 crc kubenswrapper[4897]: I0228 13:23:45.265544 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d44d51a5-f87d-4a84-9498-18cbf157eda1-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 13:23:45 crc kubenswrapper[4897]: I0228 13:23:45.265585 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hclhf\" (UniqueName: \"kubernetes.io/projected/d44d51a5-f87d-4a84-9498-18cbf157eda1-kube-api-access-hclhf\") on node \"crc\" DevicePath \"\"" Feb 28 13:23:45 crc kubenswrapper[4897]: I0228 13:23:45.366403 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d538b55-5aed-4f75-979f-d6b38fcf9de9-proxy-ca-bundles\") pod \"1d538b55-5aed-4f75-979f-d6b38fcf9de9\" (UID: \"1d538b55-5aed-4f75-979f-d6b38fcf9de9\") " Feb 28 13:23:45 crc kubenswrapper[4897]: I0228 13:23:45.366503 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sl6xs\" (UniqueName: \"kubernetes.io/projected/1d538b55-5aed-4f75-979f-d6b38fcf9de9-kube-api-access-sl6xs\") pod \"1d538b55-5aed-4f75-979f-d6b38fcf9de9\" (UID: \"1d538b55-5aed-4f75-979f-d6b38fcf9de9\") " Feb 28 13:23:45 crc kubenswrapper[4897]: I0228 13:23:45.366561 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d538b55-5aed-4f75-979f-d6b38fcf9de9-client-ca\") pod \"1d538b55-5aed-4f75-979f-d6b38fcf9de9\" (UID: \"1d538b55-5aed-4f75-979f-d6b38fcf9de9\") " Feb 28 13:23:45 crc kubenswrapper[4897]: I0228 13:23:45.366609 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d538b55-5aed-4f75-979f-d6b38fcf9de9-config\") pod \"1d538b55-5aed-4f75-979f-d6b38fcf9de9\" (UID: 
\"1d538b55-5aed-4f75-979f-d6b38fcf9de9\") " Feb 28 13:23:45 crc kubenswrapper[4897]: I0228 13:23:45.366672 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d538b55-5aed-4f75-979f-d6b38fcf9de9-serving-cert\") pod \"1d538b55-5aed-4f75-979f-d6b38fcf9de9\" (UID: \"1d538b55-5aed-4f75-979f-d6b38fcf9de9\") " Feb 28 13:23:45 crc kubenswrapper[4897]: I0228 13:23:45.367250 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d538b55-5aed-4f75-979f-d6b38fcf9de9-client-ca" (OuterVolumeSpecName: "client-ca") pod "1d538b55-5aed-4f75-979f-d6b38fcf9de9" (UID: "1d538b55-5aed-4f75-979f-d6b38fcf9de9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:23:45 crc kubenswrapper[4897]: I0228 13:23:45.367295 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d538b55-5aed-4f75-979f-d6b38fcf9de9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1d538b55-5aed-4f75-979f-d6b38fcf9de9" (UID: "1d538b55-5aed-4f75-979f-d6b38fcf9de9"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:23:45 crc kubenswrapper[4897]: I0228 13:23:45.367436 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d538b55-5aed-4f75-979f-d6b38fcf9de9-config" (OuterVolumeSpecName: "config") pod "1d538b55-5aed-4f75-979f-d6b38fcf9de9" (UID: "1d538b55-5aed-4f75-979f-d6b38fcf9de9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:23:45 crc kubenswrapper[4897]: I0228 13:23:45.369900 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d538b55-5aed-4f75-979f-d6b38fcf9de9-kube-api-access-sl6xs" (OuterVolumeSpecName: "kube-api-access-sl6xs") pod "1d538b55-5aed-4f75-979f-d6b38fcf9de9" (UID: "1d538b55-5aed-4f75-979f-d6b38fcf9de9"). InnerVolumeSpecName "kube-api-access-sl6xs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:23:45 crc kubenswrapper[4897]: I0228 13:23:45.371380 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d538b55-5aed-4f75-979f-d6b38fcf9de9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1d538b55-5aed-4f75-979f-d6b38fcf9de9" (UID: "1d538b55-5aed-4f75-979f-d6b38fcf9de9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:23:45 crc kubenswrapper[4897]: I0228 13:23:45.467774 4897 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d538b55-5aed-4f75-979f-d6b38fcf9de9-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 28 13:23:45 crc kubenswrapper[4897]: I0228 13:23:45.467828 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sl6xs\" (UniqueName: \"kubernetes.io/projected/1d538b55-5aed-4f75-979f-d6b38fcf9de9-kube-api-access-sl6xs\") on node \"crc\" DevicePath \"\"" Feb 28 13:23:45 crc kubenswrapper[4897]: I0228 13:23:45.467839 4897 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d538b55-5aed-4f75-979f-d6b38fcf9de9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 13:23:45 crc kubenswrapper[4897]: I0228 13:23:45.467849 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d538b55-5aed-4f75-979f-d6b38fcf9de9-config\") on node \"crc\" 
DevicePath \"\"" Feb 28 13:23:45 crc kubenswrapper[4897]: I0228 13:23:45.467857 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d538b55-5aed-4f75-979f-d6b38fcf9de9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.160358 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59f6bf4b6f-jwc9b" event={"ID":"1d538b55-5aed-4f75-979f-d6b38fcf9de9","Type":"ContainerDied","Data":"7c11745c7af372e3b8e40883e5fda4f9b1f2079b80c8d677b69f8a0fcbef1567"} Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.160444 4897 scope.go:117] "RemoveContainer" containerID="763e3a33da1319dbbf9da781a5e977f2e9c5846f5aefcf0f56e850011c727d7b" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.160461 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-59f6bf4b6f-jwc9b" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.214365 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-59f6bf4b6f-jwc9b"] Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.221421 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-59f6bf4b6f-jwc9b"] Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.444438 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d676b65b6-69kd4"] Feb 28 13:23:46 crc kubenswrapper[4897]: E0228 13:23:46.444748 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d44d51a5-f87d-4a84-9498-18cbf157eda1" containerName="route-controller-manager" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.444762 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="d44d51a5-f87d-4a84-9498-18cbf157eda1" containerName="route-controller-manager" Feb 28 
13:23:46 crc kubenswrapper[4897]: E0228 13:23:46.444780 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d538b55-5aed-4f75-979f-d6b38fcf9de9" containerName="controller-manager" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.444786 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d538b55-5aed-4f75-979f-d6b38fcf9de9" containerName="controller-manager" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.444879 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="d44d51a5-f87d-4a84-9498-18cbf157eda1" containerName="route-controller-manager" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.444888 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d538b55-5aed-4f75-979f-d6b38fcf9de9" containerName="controller-manager" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.445337 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d676b65b6-69kd4" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.447331 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.447523 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.447554 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.448555 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.449090 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 
13:23:46.449263 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.456902 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.472771 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d538b55-5aed-4f75-979f-d6b38fcf9de9" path="/var/lib/kubelet/pods/1d538b55-5aed-4f75-979f-d6b38fcf9de9/volumes" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.473374 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d44d51a5-f87d-4a84-9498-18cbf157eda1" path="/var/lib/kubelet/pods/d44d51a5-f87d-4a84-9498-18cbf157eda1/volumes" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.473990 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d6744ffd5-mf7k4"] Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.474590 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d6744ffd5-mf7k4"] Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.474608 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d676b65b6-69kd4"] Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.474678 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-mf7k4" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.477026 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.477124 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.477124 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.477267 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.477320 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.479035 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.480825 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/963e4590-a792-4bbb-a941-bbf8da8d3870-serving-cert\") pod \"route-controller-manager-7d6744ffd5-mf7k4\" (UID: \"963e4590-a792-4bbb-a941-bbf8da8d3870\") " pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-mf7k4" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.480878 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdkqc\" (UniqueName: \"kubernetes.io/projected/80ac7e3c-60d5-4dfb-9c92-0ec969adc267-kube-api-access-tdkqc\") pod 
\"controller-manager-d676b65b6-69kd4\" (UID: \"80ac7e3c-60d5-4dfb-9c92-0ec969adc267\") " pod="openshift-controller-manager/controller-manager-d676b65b6-69kd4" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.480943 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/80ac7e3c-60d5-4dfb-9c92-0ec969adc267-proxy-ca-bundles\") pod \"controller-manager-d676b65b6-69kd4\" (UID: \"80ac7e3c-60d5-4dfb-9c92-0ec969adc267\") " pod="openshift-controller-manager/controller-manager-d676b65b6-69kd4" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.480992 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj7lv\" (UniqueName: \"kubernetes.io/projected/963e4590-a792-4bbb-a941-bbf8da8d3870-kube-api-access-jj7lv\") pod \"route-controller-manager-7d6744ffd5-mf7k4\" (UID: \"963e4590-a792-4bbb-a941-bbf8da8d3870\") " pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-mf7k4" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.481035 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/963e4590-a792-4bbb-a941-bbf8da8d3870-config\") pod \"route-controller-manager-7d6744ffd5-mf7k4\" (UID: \"963e4590-a792-4bbb-a941-bbf8da8d3870\") " pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-mf7k4" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.481064 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/963e4590-a792-4bbb-a941-bbf8da8d3870-client-ca\") pod \"route-controller-manager-7d6744ffd5-mf7k4\" (UID: \"963e4590-a792-4bbb-a941-bbf8da8d3870\") " pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-mf7k4" Feb 28 13:23:46 crc 
kubenswrapper[4897]: I0228 13:23:46.481091 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80ac7e3c-60d5-4dfb-9c92-0ec969adc267-config\") pod \"controller-manager-d676b65b6-69kd4\" (UID: \"80ac7e3c-60d5-4dfb-9c92-0ec969adc267\") " pod="openshift-controller-manager/controller-manager-d676b65b6-69kd4" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.481110 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80ac7e3c-60d5-4dfb-9c92-0ec969adc267-serving-cert\") pod \"controller-manager-d676b65b6-69kd4\" (UID: \"80ac7e3c-60d5-4dfb-9c92-0ec969adc267\") " pod="openshift-controller-manager/controller-manager-d676b65b6-69kd4" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.481132 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/80ac7e3c-60d5-4dfb-9c92-0ec969adc267-client-ca\") pod \"controller-manager-d676b65b6-69kd4\" (UID: \"80ac7e3c-60d5-4dfb-9c92-0ec969adc267\") " pod="openshift-controller-manager/controller-manager-d676b65b6-69kd4" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.581772 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/963e4590-a792-4bbb-a941-bbf8da8d3870-serving-cert\") pod \"route-controller-manager-7d6744ffd5-mf7k4\" (UID: \"963e4590-a792-4bbb-a941-bbf8da8d3870\") " pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-mf7k4" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.581846 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdkqc\" (UniqueName: \"kubernetes.io/projected/80ac7e3c-60d5-4dfb-9c92-0ec969adc267-kube-api-access-tdkqc\") pod 
\"controller-manager-d676b65b6-69kd4\" (UID: \"80ac7e3c-60d5-4dfb-9c92-0ec969adc267\") " pod="openshift-controller-manager/controller-manager-d676b65b6-69kd4" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.582050 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/80ac7e3c-60d5-4dfb-9c92-0ec969adc267-proxy-ca-bundles\") pod \"controller-manager-d676b65b6-69kd4\" (UID: \"80ac7e3c-60d5-4dfb-9c92-0ec969adc267\") " pod="openshift-controller-manager/controller-manager-d676b65b6-69kd4" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.582087 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj7lv\" (UniqueName: \"kubernetes.io/projected/963e4590-a792-4bbb-a941-bbf8da8d3870-kube-api-access-jj7lv\") pod \"route-controller-manager-7d6744ffd5-mf7k4\" (UID: \"963e4590-a792-4bbb-a941-bbf8da8d3870\") " pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-mf7k4" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.582122 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/963e4590-a792-4bbb-a941-bbf8da8d3870-config\") pod \"route-controller-manager-7d6744ffd5-mf7k4\" (UID: \"963e4590-a792-4bbb-a941-bbf8da8d3870\") " pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-mf7k4" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.582147 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/963e4590-a792-4bbb-a941-bbf8da8d3870-client-ca\") pod \"route-controller-manager-7d6744ffd5-mf7k4\" (UID: \"963e4590-a792-4bbb-a941-bbf8da8d3870\") " pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-mf7k4" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.582172 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80ac7e3c-60d5-4dfb-9c92-0ec969adc267-config\") pod \"controller-manager-d676b65b6-69kd4\" (UID: \"80ac7e3c-60d5-4dfb-9c92-0ec969adc267\") " pod="openshift-controller-manager/controller-manager-d676b65b6-69kd4" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.582192 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80ac7e3c-60d5-4dfb-9c92-0ec969adc267-serving-cert\") pod \"controller-manager-d676b65b6-69kd4\" (UID: \"80ac7e3c-60d5-4dfb-9c92-0ec969adc267\") " pod="openshift-controller-manager/controller-manager-d676b65b6-69kd4" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.582214 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/80ac7e3c-60d5-4dfb-9c92-0ec969adc267-client-ca\") pod \"controller-manager-d676b65b6-69kd4\" (UID: \"80ac7e3c-60d5-4dfb-9c92-0ec969adc267\") " pod="openshift-controller-manager/controller-manager-d676b65b6-69kd4" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.583139 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/80ac7e3c-60d5-4dfb-9c92-0ec969adc267-client-ca\") pod \"controller-manager-d676b65b6-69kd4\" (UID: \"80ac7e3c-60d5-4dfb-9c92-0ec969adc267\") " pod="openshift-controller-manager/controller-manager-d676b65b6-69kd4" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.583598 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/963e4590-a792-4bbb-a941-bbf8da8d3870-config\") pod \"route-controller-manager-7d6744ffd5-mf7k4\" (UID: \"963e4590-a792-4bbb-a941-bbf8da8d3870\") " pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-mf7k4" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 
13:23:46.583619 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/80ac7e3c-60d5-4dfb-9c92-0ec969adc267-proxy-ca-bundles\") pod \"controller-manager-d676b65b6-69kd4\" (UID: \"80ac7e3c-60d5-4dfb-9c92-0ec969adc267\") " pod="openshift-controller-manager/controller-manager-d676b65b6-69kd4" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.583867 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/963e4590-a792-4bbb-a941-bbf8da8d3870-client-ca\") pod \"route-controller-manager-7d6744ffd5-mf7k4\" (UID: \"963e4590-a792-4bbb-a941-bbf8da8d3870\") " pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-mf7k4" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.586666 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/963e4590-a792-4bbb-a941-bbf8da8d3870-serving-cert\") pod \"route-controller-manager-7d6744ffd5-mf7k4\" (UID: \"963e4590-a792-4bbb-a941-bbf8da8d3870\") " pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-mf7k4" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.587725 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80ac7e3c-60d5-4dfb-9c92-0ec969adc267-serving-cert\") pod \"controller-manager-d676b65b6-69kd4\" (UID: \"80ac7e3c-60d5-4dfb-9c92-0ec969adc267\") " pod="openshift-controller-manager/controller-manager-d676b65b6-69kd4" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.596678 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80ac7e3c-60d5-4dfb-9c92-0ec969adc267-config\") pod \"controller-manager-d676b65b6-69kd4\" (UID: \"80ac7e3c-60d5-4dfb-9c92-0ec969adc267\") " 
pod="openshift-controller-manager/controller-manager-d676b65b6-69kd4" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.598671 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj7lv\" (UniqueName: \"kubernetes.io/projected/963e4590-a792-4bbb-a941-bbf8da8d3870-kube-api-access-jj7lv\") pod \"route-controller-manager-7d6744ffd5-mf7k4\" (UID: \"963e4590-a792-4bbb-a941-bbf8da8d3870\") " pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-mf7k4" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.599576 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdkqc\" (UniqueName: \"kubernetes.io/projected/80ac7e3c-60d5-4dfb-9c92-0ec969adc267-kube-api-access-tdkqc\") pod \"controller-manager-d676b65b6-69kd4\" (UID: \"80ac7e3c-60d5-4dfb-9c92-0ec969adc267\") " pod="openshift-controller-manager/controller-manager-d676b65b6-69kd4" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.805410 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d676b65b6-69kd4" Feb 28 13:23:46 crc kubenswrapper[4897]: I0228 13:23:46.815892 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-mf7k4" Feb 28 13:23:47 crc kubenswrapper[4897]: I0228 13:23:47.010884 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d676b65b6-69kd4"] Feb 28 13:23:47 crc kubenswrapper[4897]: I0228 13:23:47.173759 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d676b65b6-69kd4" event={"ID":"80ac7e3c-60d5-4dfb-9c92-0ec969adc267","Type":"ContainerStarted","Data":"4329f0477a38da25756189b1e923b6ae4b043cbb8990797f5de94f2eac05aa2f"} Feb 28 13:23:47 crc kubenswrapper[4897]: I0228 13:23:47.306686 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d6744ffd5-mf7k4"] Feb 28 13:23:47 crc kubenswrapper[4897]: W0228 13:23:47.309194 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod963e4590_a792_4bbb_a941_bbf8da8d3870.slice/crio-9fc1d87cc8a191b36fdc0d284d880b153337b7c60110b720b4e2bd1bf907a37b WatchSource:0}: Error finding container 9fc1d87cc8a191b36fdc0d284d880b153337b7c60110b720b4e2bd1bf907a37b: Status 404 returned error can't find the container with id 9fc1d87cc8a191b36fdc0d284d880b153337b7c60110b720b4e2bd1bf907a37b Feb 28 13:23:48 crc kubenswrapper[4897]: I0228 13:23:48.185106 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d676b65b6-69kd4" event={"ID":"80ac7e3c-60d5-4dfb-9c92-0ec969adc267","Type":"ContainerStarted","Data":"c8b6449a58f25e48d433b432d6952fc1072e172a2bb53d117315bbff71974999"} Feb 28 13:23:48 crc kubenswrapper[4897]: I0228 13:23:48.185509 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-d676b65b6-69kd4" Feb 28 13:23:48 crc kubenswrapper[4897]: I0228 13:23:48.188539 4897 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-mf7k4" event={"ID":"963e4590-a792-4bbb-a941-bbf8da8d3870","Type":"ContainerStarted","Data":"05f83eb86b751ec5cc9de1c7f11ea66718b2987418ff6bcc06b6e7de37cf54eb"} Feb 28 13:23:48 crc kubenswrapper[4897]: I0228 13:23:48.188831 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-mf7k4" Feb 28 13:23:48 crc kubenswrapper[4897]: I0228 13:23:48.189084 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-mf7k4" event={"ID":"963e4590-a792-4bbb-a941-bbf8da8d3870","Type":"ContainerStarted","Data":"9fc1d87cc8a191b36fdc0d284d880b153337b7c60110b720b4e2bd1bf907a37b"} Feb 28 13:23:48 crc kubenswrapper[4897]: I0228 13:23:48.195644 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d676b65b6-69kd4" Feb 28 13:23:48 crc kubenswrapper[4897]: I0228 13:23:48.198641 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-mf7k4" Feb 28 13:23:48 crc kubenswrapper[4897]: I0228 13:23:48.224402 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d676b65b6-69kd4" podStartSLOduration=4.224377364 podStartE2EDuration="4.224377364s" podCreationTimestamp="2026-02-28 13:23:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:23:48.223117277 +0000 UTC m=+442.465437974" watchObservedRunningTime="2026-02-28 13:23:48.224377364 +0000 UTC m=+442.466698061" Feb 28 13:23:48 crc kubenswrapper[4897]: I0228 13:23:48.252434 4897 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-route-controller-manager/route-controller-manager-7d6744ffd5-mf7k4" podStartSLOduration=4.25241276 podStartE2EDuration="4.25241276s" podCreationTimestamp="2026-02-28 13:23:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:23:48.249189696 +0000 UTC m=+442.491510433" watchObservedRunningTime="2026-02-28 13:23:48.25241276 +0000 UTC m=+442.494733427" Feb 28 13:23:50 crc kubenswrapper[4897]: I0228 13:23:50.830919 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sv5dr"] Feb 28 13:23:50 crc kubenswrapper[4897]: I0228 13:23:50.831662 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sv5dr" podUID="c8e82c23-54f4-43a4-904b-4f90348580ac" containerName="registry-server" containerID="cri-o://e47307b7d312832ba3be229fcca49d16ab0eed8540702ef986fb2e62b72aff0d" gracePeriod=2 Feb 28 13:23:51 crc kubenswrapper[4897]: I0228 13:23:51.211980 4897 generic.go:334] "Generic (PLEG): container finished" podID="c8e82c23-54f4-43a4-904b-4f90348580ac" containerID="e47307b7d312832ba3be229fcca49d16ab0eed8540702ef986fb2e62b72aff0d" exitCode=0 Feb 28 13:23:51 crc kubenswrapper[4897]: I0228 13:23:51.212043 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sv5dr" event={"ID":"c8e82c23-54f4-43a4-904b-4f90348580ac","Type":"ContainerDied","Data":"e47307b7d312832ba3be229fcca49d16ab0eed8540702ef986fb2e62b72aff0d"} Feb 28 13:23:51 crc kubenswrapper[4897]: I0228 13:23:51.373335 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sv5dr" Feb 28 13:23:51 crc kubenswrapper[4897]: I0228 13:23:51.435822 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8e82c23-54f4-43a4-904b-4f90348580ac-catalog-content\") pod \"c8e82c23-54f4-43a4-904b-4f90348580ac\" (UID: \"c8e82c23-54f4-43a4-904b-4f90348580ac\") " Feb 28 13:23:51 crc kubenswrapper[4897]: I0228 13:23:51.532861 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8e82c23-54f4-43a4-904b-4f90348580ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8e82c23-54f4-43a4-904b-4f90348580ac" (UID: "c8e82c23-54f4-43a4-904b-4f90348580ac"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:23:51 crc kubenswrapper[4897]: I0228 13:23:51.537081 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbsfd\" (UniqueName: \"kubernetes.io/projected/c8e82c23-54f4-43a4-904b-4f90348580ac-kube-api-access-xbsfd\") pod \"c8e82c23-54f4-43a4-904b-4f90348580ac\" (UID: \"c8e82c23-54f4-43a4-904b-4f90348580ac\") " Feb 28 13:23:51 crc kubenswrapper[4897]: I0228 13:23:51.537257 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8e82c23-54f4-43a4-904b-4f90348580ac-utilities\") pod \"c8e82c23-54f4-43a4-904b-4f90348580ac\" (UID: \"c8e82c23-54f4-43a4-904b-4f90348580ac\") " Feb 28 13:23:51 crc kubenswrapper[4897]: I0228 13:23:51.537826 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8e82c23-54f4-43a4-904b-4f90348580ac-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 13:23:51 crc kubenswrapper[4897]: I0228 13:23:51.538171 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/c8e82c23-54f4-43a4-904b-4f90348580ac-utilities" (OuterVolumeSpecName: "utilities") pod "c8e82c23-54f4-43a4-904b-4f90348580ac" (UID: "c8e82c23-54f4-43a4-904b-4f90348580ac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:23:51 crc kubenswrapper[4897]: I0228 13:23:51.547093 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8e82c23-54f4-43a4-904b-4f90348580ac-kube-api-access-xbsfd" (OuterVolumeSpecName: "kube-api-access-xbsfd") pod "c8e82c23-54f4-43a4-904b-4f90348580ac" (UID: "c8e82c23-54f4-43a4-904b-4f90348580ac"). InnerVolumeSpecName "kube-api-access-xbsfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:23:51 crc kubenswrapper[4897]: I0228 13:23:51.639156 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8e82c23-54f4-43a4-904b-4f90348580ac-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 13:23:51 crc kubenswrapper[4897]: I0228 13:23:51.639230 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbsfd\" (UniqueName: \"kubernetes.io/projected/c8e82c23-54f4-43a4-904b-4f90348580ac-kube-api-access-xbsfd\") on node \"crc\" DevicePath \"\"" Feb 28 13:23:52 crc kubenswrapper[4897]: I0228 13:23:52.238036 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sv5dr" event={"ID":"c8e82c23-54f4-43a4-904b-4f90348580ac","Type":"ContainerDied","Data":"d656bd125eb0d4bbf50098d41dc9aee50fb27e6402c781ca71c6616742bdc399"} Feb 28 13:23:52 crc kubenswrapper[4897]: I0228 13:23:52.238122 4897 scope.go:117] "RemoveContainer" containerID="e47307b7d312832ba3be229fcca49d16ab0eed8540702ef986fb2e62b72aff0d" Feb 28 13:23:52 crc kubenswrapper[4897]: I0228 13:23:52.238167 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sv5dr" Feb 28 13:23:52 crc kubenswrapper[4897]: I0228 13:23:52.262629 4897 scope.go:117] "RemoveContainer" containerID="d1d5d17426d9b6bd37c8e5c70181b7917955da21d4eeffe81d8ab9ed62f04a8f" Feb 28 13:23:52 crc kubenswrapper[4897]: I0228 13:23:52.295803 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sv5dr"] Feb 28 13:23:52 crc kubenswrapper[4897]: I0228 13:23:52.297783 4897 scope.go:117] "RemoveContainer" containerID="36640f3ae8151a492ade0fe822ab1701188b3f336300cbaa3d7c76efa95fc78c" Feb 28 13:23:52 crc kubenswrapper[4897]: I0228 13:23:52.300117 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sv5dr"] Feb 28 13:23:52 crc kubenswrapper[4897]: I0228 13:23:52.465418 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8e82c23-54f4-43a4-904b-4f90348580ac" path="/var/lib/kubelet/pods/c8e82c23-54f4-43a4-904b-4f90348580ac/volumes" Feb 28 13:23:59 crc kubenswrapper[4897]: I0228 13:23:59.748123 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-w2ltv"] Feb 28 13:23:59 crc kubenswrapper[4897]: E0228 13:23:59.749185 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8e82c23-54f4-43a4-904b-4f90348580ac" containerName="extract-content" Feb 28 13:23:59 crc kubenswrapper[4897]: I0228 13:23:59.749207 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8e82c23-54f4-43a4-904b-4f90348580ac" containerName="extract-content" Feb 28 13:23:59 crc kubenswrapper[4897]: E0228 13:23:59.749226 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8e82c23-54f4-43a4-904b-4f90348580ac" containerName="extract-utilities" Feb 28 13:23:59 crc kubenswrapper[4897]: I0228 13:23:59.749238 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8e82c23-54f4-43a4-904b-4f90348580ac" 
containerName="extract-utilities" Feb 28 13:23:59 crc kubenswrapper[4897]: E0228 13:23:59.749259 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8e82c23-54f4-43a4-904b-4f90348580ac" containerName="registry-server" Feb 28 13:23:59 crc kubenswrapper[4897]: I0228 13:23:59.749271 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8e82c23-54f4-43a4-904b-4f90348580ac" containerName="registry-server" Feb 28 13:23:59 crc kubenswrapper[4897]: I0228 13:23:59.749485 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8e82c23-54f4-43a4-904b-4f90348580ac" containerName="registry-server" Feb 28 13:23:59 crc kubenswrapper[4897]: I0228 13:23:59.750158 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-w2ltv" Feb 28 13:23:59 crc kubenswrapper[4897]: I0228 13:23:59.756514 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-w2ltv"] Feb 28 13:23:59 crc kubenswrapper[4897]: I0228 13:23:59.951221 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/59897aed-7b07-4125-a8c7-39c11036b61b-registry-certificates\") pod \"image-registry-66df7c8f76-w2ltv\" (UID: \"59897aed-7b07-4125-a8c7-39c11036b61b\") " pod="openshift-image-registry/image-registry-66df7c8f76-w2ltv" Feb 28 13:23:59 crc kubenswrapper[4897]: I0228 13:23:59.951286 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/59897aed-7b07-4125-a8c7-39c11036b61b-registry-tls\") pod \"image-registry-66df7c8f76-w2ltv\" (UID: \"59897aed-7b07-4125-a8c7-39c11036b61b\") " pod="openshift-image-registry/image-registry-66df7c8f76-w2ltv" Feb 28 13:23:59 crc kubenswrapper[4897]: I0228 13:23:59.951386 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/59897aed-7b07-4125-a8c7-39c11036b61b-ca-trust-extracted\") pod \"image-registry-66df7c8f76-w2ltv\" (UID: \"59897aed-7b07-4125-a8c7-39c11036b61b\") " pod="openshift-image-registry/image-registry-66df7c8f76-w2ltv" Feb 28 13:23:59 crc kubenswrapper[4897]: I0228 13:23:59.951432 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/59897aed-7b07-4125-a8c7-39c11036b61b-installation-pull-secrets\") pod \"image-registry-66df7c8f76-w2ltv\" (UID: \"59897aed-7b07-4125-a8c7-39c11036b61b\") " pod="openshift-image-registry/image-registry-66df7c8f76-w2ltv" Feb 28 13:23:59 crc kubenswrapper[4897]: I0228 13:23:59.951471 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/59897aed-7b07-4125-a8c7-39c11036b61b-trusted-ca\") pod \"image-registry-66df7c8f76-w2ltv\" (UID: \"59897aed-7b07-4125-a8c7-39c11036b61b\") " pod="openshift-image-registry/image-registry-66df7c8f76-w2ltv" Feb 28 13:23:59 crc kubenswrapper[4897]: I0228 13:23:59.951514 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/59897aed-7b07-4125-a8c7-39c11036b61b-bound-sa-token\") pod \"image-registry-66df7c8f76-w2ltv\" (UID: \"59897aed-7b07-4125-a8c7-39c11036b61b\") " pod="openshift-image-registry/image-registry-66df7c8f76-w2ltv" Feb 28 13:23:59 crc kubenswrapper[4897]: I0228 13:23:59.951581 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-w2ltv\" (UID: 
\"59897aed-7b07-4125-a8c7-39c11036b61b\") " pod="openshift-image-registry/image-registry-66df7c8f76-w2ltv" Feb 28 13:23:59 crc kubenswrapper[4897]: I0228 13:23:59.951613 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gspx5\" (UniqueName: \"kubernetes.io/projected/59897aed-7b07-4125-a8c7-39c11036b61b-kube-api-access-gspx5\") pod \"image-registry-66df7c8f76-w2ltv\" (UID: \"59897aed-7b07-4125-a8c7-39c11036b61b\") " pod="openshift-image-registry/image-registry-66df7c8f76-w2ltv" Feb 28 13:23:59 crc kubenswrapper[4897]: I0228 13:23:59.988197 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-w2ltv\" (UID: \"59897aed-7b07-4125-a8c7-39c11036b61b\") " pod="openshift-image-registry/image-registry-66df7c8f76-w2ltv" Feb 28 13:24:00 crc kubenswrapper[4897]: I0228 13:24:00.053185 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/59897aed-7b07-4125-a8c7-39c11036b61b-registry-tls\") pod \"image-registry-66df7c8f76-w2ltv\" (UID: \"59897aed-7b07-4125-a8c7-39c11036b61b\") " pod="openshift-image-registry/image-registry-66df7c8f76-w2ltv" Feb 28 13:24:00 crc kubenswrapper[4897]: I0228 13:24:00.053258 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/59897aed-7b07-4125-a8c7-39c11036b61b-ca-trust-extracted\") pod \"image-registry-66df7c8f76-w2ltv\" (UID: \"59897aed-7b07-4125-a8c7-39c11036b61b\") " pod="openshift-image-registry/image-registry-66df7c8f76-w2ltv" Feb 28 13:24:00 crc kubenswrapper[4897]: I0228 13:24:00.053294 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" 
(UniqueName: \"kubernetes.io/secret/59897aed-7b07-4125-a8c7-39c11036b61b-installation-pull-secrets\") pod \"image-registry-66df7c8f76-w2ltv\" (UID: \"59897aed-7b07-4125-a8c7-39c11036b61b\") " pod="openshift-image-registry/image-registry-66df7c8f76-w2ltv" Feb 28 13:24:00 crc kubenswrapper[4897]: I0228 13:24:00.053364 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/59897aed-7b07-4125-a8c7-39c11036b61b-trusted-ca\") pod \"image-registry-66df7c8f76-w2ltv\" (UID: \"59897aed-7b07-4125-a8c7-39c11036b61b\") " pod="openshift-image-registry/image-registry-66df7c8f76-w2ltv" Feb 28 13:24:00 crc kubenswrapper[4897]: I0228 13:24:00.053424 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/59897aed-7b07-4125-a8c7-39c11036b61b-bound-sa-token\") pod \"image-registry-66df7c8f76-w2ltv\" (UID: \"59897aed-7b07-4125-a8c7-39c11036b61b\") " pod="openshift-image-registry/image-registry-66df7c8f76-w2ltv" Feb 28 13:24:00 crc kubenswrapper[4897]: I0228 13:24:00.053491 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gspx5\" (UniqueName: \"kubernetes.io/projected/59897aed-7b07-4125-a8c7-39c11036b61b-kube-api-access-gspx5\") pod \"image-registry-66df7c8f76-w2ltv\" (UID: \"59897aed-7b07-4125-a8c7-39c11036b61b\") " pod="openshift-image-registry/image-registry-66df7c8f76-w2ltv" Feb 28 13:24:00 crc kubenswrapper[4897]: I0228 13:24:00.053555 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/59897aed-7b07-4125-a8c7-39c11036b61b-registry-certificates\") pod \"image-registry-66df7c8f76-w2ltv\" (UID: \"59897aed-7b07-4125-a8c7-39c11036b61b\") " pod="openshift-image-registry/image-registry-66df7c8f76-w2ltv" Feb 28 13:24:00 crc kubenswrapper[4897]: I0228 13:24:00.055600 4897 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/59897aed-7b07-4125-a8c7-39c11036b61b-registry-certificates\") pod \"image-registry-66df7c8f76-w2ltv\" (UID: \"59897aed-7b07-4125-a8c7-39c11036b61b\") " pod="openshift-image-registry/image-registry-66df7c8f76-w2ltv" Feb 28 13:24:00 crc kubenswrapper[4897]: I0228 13:24:00.056590 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/59897aed-7b07-4125-a8c7-39c11036b61b-trusted-ca\") pod \"image-registry-66df7c8f76-w2ltv\" (UID: \"59897aed-7b07-4125-a8c7-39c11036b61b\") " pod="openshift-image-registry/image-registry-66df7c8f76-w2ltv" Feb 28 13:24:00 crc kubenswrapper[4897]: I0228 13:24:00.057122 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/59897aed-7b07-4125-a8c7-39c11036b61b-ca-trust-extracted\") pod \"image-registry-66df7c8f76-w2ltv\" (UID: \"59897aed-7b07-4125-a8c7-39c11036b61b\") " pod="openshift-image-registry/image-registry-66df7c8f76-w2ltv" Feb 28 13:24:00 crc kubenswrapper[4897]: I0228 13:24:00.064882 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/59897aed-7b07-4125-a8c7-39c11036b61b-installation-pull-secrets\") pod \"image-registry-66df7c8f76-w2ltv\" (UID: \"59897aed-7b07-4125-a8c7-39c11036b61b\") " pod="openshift-image-registry/image-registry-66df7c8f76-w2ltv" Feb 28 13:24:00 crc kubenswrapper[4897]: I0228 13:24:00.066248 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/59897aed-7b07-4125-a8c7-39c11036b61b-registry-tls\") pod \"image-registry-66df7c8f76-w2ltv\" (UID: \"59897aed-7b07-4125-a8c7-39c11036b61b\") " pod="openshift-image-registry/image-registry-66df7c8f76-w2ltv" Feb 28 13:24:00 crc 
kubenswrapper[4897]: I0228 13:24:00.090922 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/59897aed-7b07-4125-a8c7-39c11036b61b-bound-sa-token\") pod \"image-registry-66df7c8f76-w2ltv\" (UID: \"59897aed-7b07-4125-a8c7-39c11036b61b\") " pod="openshift-image-registry/image-registry-66df7c8f76-w2ltv" Feb 28 13:24:00 crc kubenswrapper[4897]: I0228 13:24:00.098701 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gspx5\" (UniqueName: \"kubernetes.io/projected/59897aed-7b07-4125-a8c7-39c11036b61b-kube-api-access-gspx5\") pod \"image-registry-66df7c8f76-w2ltv\" (UID: \"59897aed-7b07-4125-a8c7-39c11036b61b\") " pod="openshift-image-registry/image-registry-66df7c8f76-w2ltv" Feb 28 13:24:00 crc kubenswrapper[4897]: I0228 13:24:00.108375 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-w2ltv" Feb 28 13:24:00 crc kubenswrapper[4897]: I0228 13:24:00.147848 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538084-bdglw"] Feb 28 13:24:00 crc kubenswrapper[4897]: I0228 13:24:00.148837 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538084-bdglw" Feb 28 13:24:00 crc kubenswrapper[4897]: I0228 13:24:00.151864 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 13:24:00 crc kubenswrapper[4897]: I0228 13:24:00.152401 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 13:24:00 crc kubenswrapper[4897]: I0228 13:24:00.152472 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 13:24:00 crc kubenswrapper[4897]: I0228 13:24:00.161632 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538084-bdglw"] Feb 28 13:24:00 crc kubenswrapper[4897]: I0228 13:24:00.255888 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cx99\" (UniqueName: \"kubernetes.io/projected/96fa1520-75f6-47bb-bb62-92efc314da9c-kube-api-access-8cx99\") pod \"auto-csr-approver-29538084-bdglw\" (UID: \"96fa1520-75f6-47bb-bb62-92efc314da9c\") " pod="openshift-infra/auto-csr-approver-29538084-bdglw" Feb 28 13:24:00 crc kubenswrapper[4897]: I0228 13:24:00.357440 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cx99\" (UniqueName: \"kubernetes.io/projected/96fa1520-75f6-47bb-bb62-92efc314da9c-kube-api-access-8cx99\") pod \"auto-csr-approver-29538084-bdglw\" (UID: \"96fa1520-75f6-47bb-bb62-92efc314da9c\") " pod="openshift-infra/auto-csr-approver-29538084-bdglw" Feb 28 13:24:00 crc kubenswrapper[4897]: I0228 13:24:00.386737 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cx99\" (UniqueName: \"kubernetes.io/projected/96fa1520-75f6-47bb-bb62-92efc314da9c-kube-api-access-8cx99\") pod \"auto-csr-approver-29538084-bdglw\" (UID: \"96fa1520-75f6-47bb-bb62-92efc314da9c\") " 
pod="openshift-infra/auto-csr-approver-29538084-bdglw" Feb 28 13:24:00 crc kubenswrapper[4897]: I0228 13:24:00.496265 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538084-bdglw" Feb 28 13:24:00 crc kubenswrapper[4897]: I0228 13:24:00.566490 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-w2ltv"] Feb 28 13:24:00 crc kubenswrapper[4897]: W0228 13:24:00.575496 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59897aed_7b07_4125_a8c7_39c11036b61b.slice/crio-367b2f04b2e6ab6dbd4f00c2e6e6b3e022f4fdf0964a8dcd7b71426a1a38aba4 WatchSource:0}: Error finding container 367b2f04b2e6ab6dbd4f00c2e6e6b3e022f4fdf0964a8dcd7b71426a1a38aba4: Status 404 returned error can't find the container with id 367b2f04b2e6ab6dbd4f00c2e6e6b3e022f4fdf0964a8dcd7b71426a1a38aba4 Feb 28 13:24:00 crc kubenswrapper[4897]: I0228 13:24:00.978560 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538084-bdglw"] Feb 28 13:24:01 crc kubenswrapper[4897]: I0228 13:24:01.293071 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-w2ltv" event={"ID":"59897aed-7b07-4125-a8c7-39c11036b61b","Type":"ContainerStarted","Data":"539b2a04c57898e935fc24239e696c1182c0f9ca706684e38f32d17ba1d10adc"} Feb 28 13:24:01 crc kubenswrapper[4897]: I0228 13:24:01.293398 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-w2ltv" event={"ID":"59897aed-7b07-4125-a8c7-39c11036b61b","Type":"ContainerStarted","Data":"367b2f04b2e6ab6dbd4f00c2e6e6b3e022f4fdf0964a8dcd7b71426a1a38aba4"} Feb 28 13:24:01 crc kubenswrapper[4897]: I0228 13:24:01.293625 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-image-registry/image-registry-66df7c8f76-w2ltv" Feb 28 13:24:01 crc kubenswrapper[4897]: I0228 13:24:01.294213 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538084-bdglw" event={"ID":"96fa1520-75f6-47bb-bb62-92efc314da9c","Type":"ContainerStarted","Data":"a514b150aa222b182dd52ac36af8bb6e51c3518b7742d94ed0c5568edfcd67f5"} Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.091835 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-w2ltv" podStartSLOduration=4.091817046 podStartE2EDuration="4.091817046s" podCreationTimestamp="2026-02-28 13:23:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:24:01.321763622 +0000 UTC m=+455.564084289" watchObservedRunningTime="2026-02-28 13:24:03.091817046 +0000 UTC m=+457.334137713" Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.098052 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q9d2n"] Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.098582 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-q9d2n" podUID="657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5" containerName="registry-server" containerID="cri-o://ce4cbc9b3c4faed6042191200756ad2b2cc18b61d8f9e03d09067535b16a9a92" gracePeriod=30 Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.118703 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bfpj4"] Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.131972 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-l7m8v"] Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.132254 4897 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-marketplace/marketplace-operator-79b997595-l7m8v" podUID="53e254f6-444a-4fd6-8bda-5af18b9d347c" containerName="marketplace-operator" containerID="cri-o://4994dd60d143fcfaa264c8c734e55e4436e67e3a3cbc0385b08f9b693918f7dd" gracePeriod=30 Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.143220 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b4nxz"] Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.143952 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-b4nxz" Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.151161 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4slc"] Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.154577 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j4slc" podUID="34293634-5315-4dac-94b9-258b99c8a9c1" containerName="registry-server" containerID="cri-o://5755f85ed2d7001e68d6c24610ce541ef32a2cb42accd6d50993cddf43d4b1b8" gracePeriod=30 Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.177858 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b4nxz"] Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.194019 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wj92z"] Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.305000 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j626w\" (UniqueName: \"kubernetes.io/projected/b38ea4e8-edc9-4c30-8189-dbcc29bc677e-kube-api-access-j626w\") pod \"marketplace-operator-79b997595-b4nxz\" (UID: \"b38ea4e8-edc9-4c30-8189-dbcc29bc677e\") " pod="openshift-marketplace/marketplace-operator-79b997595-b4nxz" Feb 
28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.305047 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b38ea4e8-edc9-4c30-8189-dbcc29bc677e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-b4nxz\" (UID: \"b38ea4e8-edc9-4c30-8189-dbcc29bc677e\") " pod="openshift-marketplace/marketplace-operator-79b997595-b4nxz" Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.305067 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b38ea4e8-edc9-4c30-8189-dbcc29bc677e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-b4nxz\" (UID: \"b38ea4e8-edc9-4c30-8189-dbcc29bc677e\") " pod="openshift-marketplace/marketplace-operator-79b997595-b4nxz" Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.316936 4897 generic.go:334] "Generic (PLEG): container finished" podID="657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5" containerID="ce4cbc9b3c4faed6042191200756ad2b2cc18b61d8f9e03d09067535b16a9a92" exitCode=0 Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.317097 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q9d2n" event={"ID":"657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5","Type":"ContainerDied","Data":"ce4cbc9b3c4faed6042191200756ad2b2cc18b61d8f9e03d09067535b16a9a92"} Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.320043 4897 generic.go:334] "Generic (PLEG): container finished" podID="96fa1520-75f6-47bb-bb62-92efc314da9c" containerID="f546b1f3c469568c2025454375130e8ce54e4baee9391f9123cca8b844a5aa9f" exitCode=0 Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.320124 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538084-bdglw" 
event={"ID":"96fa1520-75f6-47bb-bb62-92efc314da9c","Type":"ContainerDied","Data":"f546b1f3c469568c2025454375130e8ce54e4baee9391f9123cca8b844a5aa9f"} Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.329690 4897 generic.go:334] "Generic (PLEG): container finished" podID="34293634-5315-4dac-94b9-258b99c8a9c1" containerID="5755f85ed2d7001e68d6c24610ce541ef32a2cb42accd6d50993cddf43d4b1b8" exitCode=0 Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.329788 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4slc" event={"ID":"34293634-5315-4dac-94b9-258b99c8a9c1","Type":"ContainerDied","Data":"5755f85ed2d7001e68d6c24610ce541ef32a2cb42accd6d50993cddf43d4b1b8"} Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.342181 4897 generic.go:334] "Generic (PLEG): container finished" podID="53e254f6-444a-4fd6-8bda-5af18b9d347c" containerID="4994dd60d143fcfaa264c8c734e55e4436e67e3a3cbc0385b08f9b693918f7dd" exitCode=0 Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.342496 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bfpj4" podUID="c752ba9a-f6f8-4530-91a9-c06ff609e9d8" containerName="registry-server" containerID="cri-o://cd39bb3d297b5b66d8aee57ad0322868fc625c121670b42703174b5ad292d675" gracePeriod=30 Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.342856 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-l7m8v" event={"ID":"53e254f6-444a-4fd6-8bda-5af18b9d347c","Type":"ContainerDied","Data":"4994dd60d143fcfaa264c8c734e55e4436e67e3a3cbc0385b08f9b693918f7dd"} Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.342900 4897 scope.go:117] "RemoveContainer" containerID="6ac1f66fd5757dd43cee9118f91a051f5e21d550eacaf70a93ae6067aaab7569" Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.343280 4897 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-marketplace/redhat-operators-wj92z" podUID="1acb2f9f-f650-4f19-965e-48ba5a1ddac2" containerName="registry-server" containerID="cri-o://e14dcce630b3a9a08a22b931547ea990096f8d21cb8429298e0a5c1ea1f89c1a" gracePeriod=30 Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.371339 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.371383 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.406866 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b38ea4e8-edc9-4c30-8189-dbcc29bc677e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-b4nxz\" (UID: \"b38ea4e8-edc9-4c30-8189-dbcc29bc677e\") " pod="openshift-marketplace/marketplace-operator-79b997595-b4nxz" Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.406920 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b38ea4e8-edc9-4c30-8189-dbcc29bc677e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-b4nxz\" (UID: \"b38ea4e8-edc9-4c30-8189-dbcc29bc677e\") " pod="openshift-marketplace/marketplace-operator-79b997595-b4nxz" Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.407004 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-j626w\" (UniqueName: \"kubernetes.io/projected/b38ea4e8-edc9-4c30-8189-dbcc29bc677e-kube-api-access-j626w\") pod \"marketplace-operator-79b997595-b4nxz\" (UID: \"b38ea4e8-edc9-4c30-8189-dbcc29bc677e\") " pod="openshift-marketplace/marketplace-operator-79b997595-b4nxz" Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.408521 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b38ea4e8-edc9-4c30-8189-dbcc29bc677e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-b4nxz\" (UID: \"b38ea4e8-edc9-4c30-8189-dbcc29bc677e\") " pod="openshift-marketplace/marketplace-operator-79b997595-b4nxz" Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.414589 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b38ea4e8-edc9-4c30-8189-dbcc29bc677e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-b4nxz\" (UID: \"b38ea4e8-edc9-4c30-8189-dbcc29bc677e\") " pod="openshift-marketplace/marketplace-operator-79b997595-b4nxz" Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.424334 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j626w\" (UniqueName: \"kubernetes.io/projected/b38ea4e8-edc9-4c30-8189-dbcc29bc677e-kube-api-access-j626w\") pod \"marketplace-operator-79b997595-b4nxz\" (UID: \"b38ea4e8-edc9-4c30-8189-dbcc29bc677e\") " pod="openshift-marketplace/marketplace-operator-79b997595-b4nxz" Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.489733 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-b4nxz" Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.627100 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q9d2n" Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.742337 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-l7m8v" Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.752303 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j4slc" Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.812201 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcgzv\" (UniqueName: \"kubernetes.io/projected/657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5-kube-api-access-bcgzv\") pod \"657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5\" (UID: \"657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5\") " Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.812349 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5-utilities\") pod \"657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5\" (UID: \"657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5\") " Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.812376 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5-catalog-content\") pod \"657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5\" (UID: \"657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5\") " Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.816544 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5-utilities" (OuterVolumeSpecName: "utilities") pod "657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5" (UID: "657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.821277 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5-kube-api-access-bcgzv" (OuterVolumeSpecName: "kube-api-access-bcgzv") pod "657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5" (UID: "657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5"). InnerVolumeSpecName "kube-api-access-bcgzv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.907518 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5" (UID: "657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.913098 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34293634-5315-4dac-94b9-258b99c8a9c1-catalog-content\") pod \"34293634-5315-4dac-94b9-258b99c8a9c1\" (UID: \"34293634-5315-4dac-94b9-258b99c8a9c1\") " Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.913150 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/53e254f6-444a-4fd6-8bda-5af18b9d347c-marketplace-operator-metrics\") pod \"53e254f6-444a-4fd6-8bda-5af18b9d347c\" (UID: \"53e254f6-444a-4fd6-8bda-5af18b9d347c\") " Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.913179 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/53e254f6-444a-4fd6-8bda-5af18b9d347c-marketplace-trusted-ca\") pod 
\"53e254f6-444a-4fd6-8bda-5af18b9d347c\" (UID: \"53e254f6-444a-4fd6-8bda-5af18b9d347c\") " Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.913206 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34293634-5315-4dac-94b9-258b99c8a9c1-utilities\") pod \"34293634-5315-4dac-94b9-258b99c8a9c1\" (UID: \"34293634-5315-4dac-94b9-258b99c8a9c1\") " Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.913231 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f58g\" (UniqueName: \"kubernetes.io/projected/53e254f6-444a-4fd6-8bda-5af18b9d347c-kube-api-access-6f58g\") pod \"53e254f6-444a-4fd6-8bda-5af18b9d347c\" (UID: \"53e254f6-444a-4fd6-8bda-5af18b9d347c\") " Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.913296 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzbzh\" (UniqueName: \"kubernetes.io/projected/34293634-5315-4dac-94b9-258b99c8a9c1-kube-api-access-dzbzh\") pod \"34293634-5315-4dac-94b9-258b99c8a9c1\" (UID: \"34293634-5315-4dac-94b9-258b99c8a9c1\") " Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.913599 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.913622 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.913636 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcgzv\" (UniqueName: \"kubernetes.io/projected/657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5-kube-api-access-bcgzv\") on node \"crc\" DevicePath \"\"" Feb 28 13:24:03 
crc kubenswrapper[4897]: I0228 13:24:03.913857 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53e254f6-444a-4fd6-8bda-5af18b9d347c-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "53e254f6-444a-4fd6-8bda-5af18b9d347c" (UID: "53e254f6-444a-4fd6-8bda-5af18b9d347c"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.913956 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34293634-5315-4dac-94b9-258b99c8a9c1-utilities" (OuterVolumeSpecName: "utilities") pod "34293634-5315-4dac-94b9-258b99c8a9c1" (UID: "34293634-5315-4dac-94b9-258b99c8a9c1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.916975 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53e254f6-444a-4fd6-8bda-5af18b9d347c-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "53e254f6-444a-4fd6-8bda-5af18b9d347c" (UID: "53e254f6-444a-4fd6-8bda-5af18b9d347c"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.922674 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53e254f6-444a-4fd6-8bda-5af18b9d347c-kube-api-access-6f58g" (OuterVolumeSpecName: "kube-api-access-6f58g") pod "53e254f6-444a-4fd6-8bda-5af18b9d347c" (UID: "53e254f6-444a-4fd6-8bda-5af18b9d347c"). InnerVolumeSpecName "kube-api-access-6f58g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.930083 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34293634-5315-4dac-94b9-258b99c8a9c1-kube-api-access-dzbzh" (OuterVolumeSpecName: "kube-api-access-dzbzh") pod "34293634-5315-4dac-94b9-258b99c8a9c1" (UID: "34293634-5315-4dac-94b9-258b99c8a9c1"). InnerVolumeSpecName "kube-api-access-dzbzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.949984 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34293634-5315-4dac-94b9-258b99c8a9c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "34293634-5315-4dac-94b9-258b99c8a9c1" (UID: "34293634-5315-4dac-94b9-258b99c8a9c1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:24:03 crc kubenswrapper[4897]: I0228 13:24:03.979969 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wj92z" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.018371 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzbzh\" (UniqueName: \"kubernetes.io/projected/34293634-5315-4dac-94b9-258b99c8a9c1-kube-api-access-dzbzh\") on node \"crc\" DevicePath \"\"" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.018409 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34293634-5315-4dac-94b9-258b99c8a9c1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.018422 4897 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/53e254f6-444a-4fd6-8bda-5af18b9d347c-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.018435 4897 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/53e254f6-444a-4fd6-8bda-5af18b9d347c-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.018447 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34293634-5315-4dac-94b9-258b99c8a9c1-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.018458 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6f58g\" (UniqueName: \"kubernetes.io/projected/53e254f6-444a-4fd6-8bda-5af18b9d347c-kube-api-access-6f58g\") on node \"crc\" DevicePath \"\"" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.043123 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bfpj4" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.117382 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b4nxz"] Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.119238 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1acb2f9f-f650-4f19-965e-48ba5a1ddac2-catalog-content\") pod \"1acb2f9f-f650-4f19-965e-48ba5a1ddac2\" (UID: \"1acb2f9f-f650-4f19-965e-48ba5a1ddac2\") " Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.119382 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gz4l9\" (UniqueName: \"kubernetes.io/projected/1acb2f9f-f650-4f19-965e-48ba5a1ddac2-kube-api-access-gz4l9\") pod \"1acb2f9f-f650-4f19-965e-48ba5a1ddac2\" (UID: \"1acb2f9f-f650-4f19-965e-48ba5a1ddac2\") " Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.119453 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1acb2f9f-f650-4f19-965e-48ba5a1ddac2-utilities\") pod \"1acb2f9f-f650-4f19-965e-48ba5a1ddac2\" (UID: \"1acb2f9f-f650-4f19-965e-48ba5a1ddac2\") " Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.120549 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1acb2f9f-f650-4f19-965e-48ba5a1ddac2-utilities" (OuterVolumeSpecName: "utilities") pod "1acb2f9f-f650-4f19-965e-48ba5a1ddac2" (UID: "1acb2f9f-f650-4f19-965e-48ba5a1ddac2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:24:04 crc kubenswrapper[4897]: W0228 13:24:04.120658 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb38ea4e8_edc9_4c30_8189_dbcc29bc677e.slice/crio-b2204942b84c17a8ba329eb3ceced80d0caa559869d0659099a98d8664da6318 WatchSource:0}: Error finding container b2204942b84c17a8ba329eb3ceced80d0caa559869d0659099a98d8664da6318: Status 404 returned error can't find the container with id b2204942b84c17a8ba329eb3ceced80d0caa559869d0659099a98d8664da6318 Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.126623 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1acb2f9f-f650-4f19-965e-48ba5a1ddac2-kube-api-access-gz4l9" (OuterVolumeSpecName: "kube-api-access-gz4l9") pod "1acb2f9f-f650-4f19-965e-48ba5a1ddac2" (UID: "1acb2f9f-f650-4f19-965e-48ba5a1ddac2"). InnerVolumeSpecName "kube-api-access-gz4l9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.220863 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94tpx\" (UniqueName: \"kubernetes.io/projected/c752ba9a-f6f8-4530-91a9-c06ff609e9d8-kube-api-access-94tpx\") pod \"c752ba9a-f6f8-4530-91a9-c06ff609e9d8\" (UID: \"c752ba9a-f6f8-4530-91a9-c06ff609e9d8\") " Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.220969 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c752ba9a-f6f8-4530-91a9-c06ff609e9d8-catalog-content\") pod \"c752ba9a-f6f8-4530-91a9-c06ff609e9d8\" (UID: \"c752ba9a-f6f8-4530-91a9-c06ff609e9d8\") " Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.221024 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c752ba9a-f6f8-4530-91a9-c06ff609e9d8-utilities\") pod \"c752ba9a-f6f8-4530-91a9-c06ff609e9d8\" (UID: \"c752ba9a-f6f8-4530-91a9-c06ff609e9d8\") " Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.221411 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gz4l9\" (UniqueName: \"kubernetes.io/projected/1acb2f9f-f650-4f19-965e-48ba5a1ddac2-kube-api-access-gz4l9\") on node \"crc\" DevicePath \"\"" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.221433 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1acb2f9f-f650-4f19-965e-48ba5a1ddac2-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.221767 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c752ba9a-f6f8-4530-91a9-c06ff609e9d8-utilities" (OuterVolumeSpecName: "utilities") pod "c752ba9a-f6f8-4530-91a9-c06ff609e9d8" (UID: 
"c752ba9a-f6f8-4530-91a9-c06ff609e9d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.224037 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c752ba9a-f6f8-4530-91a9-c06ff609e9d8-kube-api-access-94tpx" (OuterVolumeSpecName: "kube-api-access-94tpx") pod "c752ba9a-f6f8-4530-91a9-c06ff609e9d8" (UID: "c752ba9a-f6f8-4530-91a9-c06ff609e9d8"). InnerVolumeSpecName "kube-api-access-94tpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.250574 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1acb2f9f-f650-4f19-965e-48ba5a1ddac2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1acb2f9f-f650-4f19-965e-48ba5a1ddac2" (UID: "1acb2f9f-f650-4f19-965e-48ba5a1ddac2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.274785 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c752ba9a-f6f8-4530-91a9-c06ff609e9d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c752ba9a-f6f8-4530-91a9-c06ff609e9d8" (UID: "c752ba9a-f6f8-4530-91a9-c06ff609e9d8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.323842 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1acb2f9f-f650-4f19-965e-48ba5a1ddac2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.323876 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94tpx\" (UniqueName: \"kubernetes.io/projected/c752ba9a-f6f8-4530-91a9-c06ff609e9d8-kube-api-access-94tpx\") on node \"crc\" DevicePath \"\"" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.323912 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c752ba9a-f6f8-4530-91a9-c06ff609e9d8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.323924 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c752ba9a-f6f8-4530-91a9-c06ff609e9d8-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.349879 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q9d2n" event={"ID":"657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5","Type":"ContainerDied","Data":"99dffa22991c224f4f7f8f25447344f24fa08ff7dffd1e5b80d8352af2ce25ae"} Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.349943 4897 scope.go:117] "RemoveContainer" containerID="ce4cbc9b3c4faed6042191200756ad2b2cc18b61d8f9e03d09067535b16a9a92" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.350037 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q9d2n" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.354248 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-b4nxz" event={"ID":"b38ea4e8-edc9-4c30-8189-dbcc29bc677e","Type":"ContainerStarted","Data":"404b6596e4f790f3e4d591bc17a1ef00a27800db580da642902d1fa606b879af"} Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.354301 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-b4nxz" event={"ID":"b38ea4e8-edc9-4c30-8189-dbcc29bc677e","Type":"ContainerStarted","Data":"b2204942b84c17a8ba329eb3ceced80d0caa559869d0659099a98d8664da6318"} Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.355224 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-b4nxz" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.357506 4897 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-b4nxz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.77:8080/healthz\": dial tcp 10.217.0.77:8080: connect: connection refused" start-of-body= Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.357601 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-b4nxz" podUID="b38ea4e8-edc9-4c30-8189-dbcc29bc677e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.77:8080/healthz\": dial tcp 10.217.0.77:8080: connect: connection refused" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.360723 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j4slc" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.360816 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4slc" event={"ID":"34293634-5315-4dac-94b9-258b99c8a9c1","Type":"ContainerDied","Data":"b121a8136e77ff642b674473b8a4601a6b70cb3d60c62cde801c44823a9e16b9"} Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.366137 4897 generic.go:334] "Generic (PLEG): container finished" podID="1acb2f9f-f650-4f19-965e-48ba5a1ddac2" containerID="e14dcce630b3a9a08a22b931547ea990096f8d21cb8429298e0a5c1ea1f89c1a" exitCode=0 Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.366185 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wj92z" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.366265 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wj92z" event={"ID":"1acb2f9f-f650-4f19-965e-48ba5a1ddac2","Type":"ContainerDied","Data":"e14dcce630b3a9a08a22b931547ea990096f8d21cb8429298e0a5c1ea1f89c1a"} Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.366296 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wj92z" event={"ID":"1acb2f9f-f650-4f19-965e-48ba5a1ddac2","Type":"ContainerDied","Data":"1b47d95db96a46929c8fdf1921bccd0d9804289caa51c990f3ef54460a7a7bbe"} Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.375825 4897 generic.go:334] "Generic (PLEG): container finished" podID="c752ba9a-f6f8-4530-91a9-c06ff609e9d8" containerID="cd39bb3d297b5b66d8aee57ad0322868fc625c121670b42703174b5ad292d675" exitCode=0 Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.375880 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bfpj4" 
event={"ID":"c752ba9a-f6f8-4530-91a9-c06ff609e9d8","Type":"ContainerDied","Data":"cd39bb3d297b5b66d8aee57ad0322868fc625c121670b42703174b5ad292d675"} Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.375905 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bfpj4" event={"ID":"c752ba9a-f6f8-4530-91a9-c06ff609e9d8","Type":"ContainerDied","Data":"a6e627d3c5553a6c72a551dab57427969f6d6bb056fba61f1414020ee2a972be"} Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.375961 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bfpj4" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.378219 4897 scope.go:117] "RemoveContainer" containerID="36bee71be4b1a87f45a58177cecb959e13c93e33604e5adc5308fed3e67f5415" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.380394 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-l7m8v" event={"ID":"53e254f6-444a-4fd6-8bda-5af18b9d347c","Type":"ContainerDied","Data":"1e149e6dcf11f9d15f52f1867523fb1bd5c6768b35e72c39614d2e3a86b1d1e6"} Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.380469 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-l7m8v" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.387673 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-b4nxz" podStartSLOduration=1.387654291 podStartE2EDuration="1.387654291s" podCreationTimestamp="2026-02-28 13:24:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:24:04.38726982 +0000 UTC m=+458.629590477" watchObservedRunningTime="2026-02-28 13:24:04.387654291 +0000 UTC m=+458.629974958" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.411103 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q9d2n"] Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.416129 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-q9d2n"] Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.430366 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wj92z"] Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.437927 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wj92z"] Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.442166 4897 scope.go:117] "RemoveContainer" containerID="edb89e8b7a19fbedcfd8a1ba8ccf4cff8d5b11db8f33a0abbff954a46c31e17f" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.466441 4897 scope.go:117] "RemoveContainer" containerID="5755f85ed2d7001e68d6c24610ce541ef32a2cb42accd6d50993cddf43d4b1b8" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.472550 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1acb2f9f-f650-4f19-965e-48ba5a1ddac2" path="/var/lib/kubelet/pods/1acb2f9f-f650-4f19-965e-48ba5a1ddac2/volumes" Feb 28 13:24:04 crc 
kubenswrapper[4897]: I0228 13:24:04.473118 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5" path="/var/lib/kubelet/pods/657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5/volumes" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.473604 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-l7m8v"] Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.473630 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-l7m8v"] Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.479226 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4slc"] Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.488673 4897 scope.go:117] "RemoveContainer" containerID="f3de6218310becb4e4ff8696eb60aa03364152b5e6c0cf43d9b7c7fde154684e" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.508255 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4slc"] Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.520394 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bfpj4"] Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.520446 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bfpj4"] Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.544035 4897 scope.go:117] "RemoveContainer" containerID="06f84c36443935ec3e67baf28833ecd925caf77ef75595dcde049cc0a869d4c1" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.574678 4897 scope.go:117] "RemoveContainer" containerID="e14dcce630b3a9a08a22b931547ea990096f8d21cb8429298e0a5c1ea1f89c1a" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.589490 4897 scope.go:117] "RemoveContainer" 
containerID="8d230ce58b0212a0b961aae6a2ceaff71f13d1181b800e359d527c8a79fd1583" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.616671 4897 scope.go:117] "RemoveContainer" containerID="62ac1d9deefbf9a5c18eb6d04406472f51295d31ddd9ded0535797c83ca081f6" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.636703 4897 scope.go:117] "RemoveContainer" containerID="e14dcce630b3a9a08a22b931547ea990096f8d21cb8429298e0a5c1ea1f89c1a" Feb 28 13:24:04 crc kubenswrapper[4897]: E0228 13:24:04.637179 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e14dcce630b3a9a08a22b931547ea990096f8d21cb8429298e0a5c1ea1f89c1a\": container with ID starting with e14dcce630b3a9a08a22b931547ea990096f8d21cb8429298e0a5c1ea1f89c1a not found: ID does not exist" containerID="e14dcce630b3a9a08a22b931547ea990096f8d21cb8429298e0a5c1ea1f89c1a" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.637202 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e14dcce630b3a9a08a22b931547ea990096f8d21cb8429298e0a5c1ea1f89c1a"} err="failed to get container status \"e14dcce630b3a9a08a22b931547ea990096f8d21cb8429298e0a5c1ea1f89c1a\": rpc error: code = NotFound desc = could not find container \"e14dcce630b3a9a08a22b931547ea990096f8d21cb8429298e0a5c1ea1f89c1a\": container with ID starting with e14dcce630b3a9a08a22b931547ea990096f8d21cb8429298e0a5c1ea1f89c1a not found: ID does not exist" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.637221 4897 scope.go:117] "RemoveContainer" containerID="8d230ce58b0212a0b961aae6a2ceaff71f13d1181b800e359d527c8a79fd1583" Feb 28 13:24:04 crc kubenswrapper[4897]: E0228 13:24:04.637787 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d230ce58b0212a0b961aae6a2ceaff71f13d1181b800e359d527c8a79fd1583\": container with ID starting with 
8d230ce58b0212a0b961aae6a2ceaff71f13d1181b800e359d527c8a79fd1583 not found: ID does not exist" containerID="8d230ce58b0212a0b961aae6a2ceaff71f13d1181b800e359d527c8a79fd1583" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.637803 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d230ce58b0212a0b961aae6a2ceaff71f13d1181b800e359d527c8a79fd1583"} err="failed to get container status \"8d230ce58b0212a0b961aae6a2ceaff71f13d1181b800e359d527c8a79fd1583\": rpc error: code = NotFound desc = could not find container \"8d230ce58b0212a0b961aae6a2ceaff71f13d1181b800e359d527c8a79fd1583\": container with ID starting with 8d230ce58b0212a0b961aae6a2ceaff71f13d1181b800e359d527c8a79fd1583 not found: ID does not exist" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.637814 4897 scope.go:117] "RemoveContainer" containerID="62ac1d9deefbf9a5c18eb6d04406472f51295d31ddd9ded0535797c83ca081f6" Feb 28 13:24:04 crc kubenswrapper[4897]: E0228 13:24:04.638079 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62ac1d9deefbf9a5c18eb6d04406472f51295d31ddd9ded0535797c83ca081f6\": container with ID starting with 62ac1d9deefbf9a5c18eb6d04406472f51295d31ddd9ded0535797c83ca081f6 not found: ID does not exist" containerID="62ac1d9deefbf9a5c18eb6d04406472f51295d31ddd9ded0535797c83ca081f6" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.638104 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62ac1d9deefbf9a5c18eb6d04406472f51295d31ddd9ded0535797c83ca081f6"} err="failed to get container status \"62ac1d9deefbf9a5c18eb6d04406472f51295d31ddd9ded0535797c83ca081f6\": rpc error: code = NotFound desc = could not find container \"62ac1d9deefbf9a5c18eb6d04406472f51295d31ddd9ded0535797c83ca081f6\": container with ID starting with 62ac1d9deefbf9a5c18eb6d04406472f51295d31ddd9ded0535797c83ca081f6 not found: ID does not 
exist" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.638119 4897 scope.go:117] "RemoveContainer" containerID="cd39bb3d297b5b66d8aee57ad0322868fc625c121670b42703174b5ad292d675" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.652274 4897 scope.go:117] "RemoveContainer" containerID="6d7e1bb4c48da64e1d0dfd324933f6e569728f0db044d3e66dd1e06b0ad661db" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.668181 4897 scope.go:117] "RemoveContainer" containerID="397ca9b24fac78e06d28aab8816987851cdbbbd8bcd94bd6a4d47eda114f87fa" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.683323 4897 scope.go:117] "RemoveContainer" containerID="cd39bb3d297b5b66d8aee57ad0322868fc625c121670b42703174b5ad292d675" Feb 28 13:24:04 crc kubenswrapper[4897]: E0228 13:24:04.684567 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd39bb3d297b5b66d8aee57ad0322868fc625c121670b42703174b5ad292d675\": container with ID starting with cd39bb3d297b5b66d8aee57ad0322868fc625c121670b42703174b5ad292d675 not found: ID does not exist" containerID="cd39bb3d297b5b66d8aee57ad0322868fc625c121670b42703174b5ad292d675" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.684599 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd39bb3d297b5b66d8aee57ad0322868fc625c121670b42703174b5ad292d675"} err="failed to get container status \"cd39bb3d297b5b66d8aee57ad0322868fc625c121670b42703174b5ad292d675\": rpc error: code = NotFound desc = could not find container \"cd39bb3d297b5b66d8aee57ad0322868fc625c121670b42703174b5ad292d675\": container with ID starting with cd39bb3d297b5b66d8aee57ad0322868fc625c121670b42703174b5ad292d675 not found: ID does not exist" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.684619 4897 scope.go:117] "RemoveContainer" containerID="6d7e1bb4c48da64e1d0dfd324933f6e569728f0db044d3e66dd1e06b0ad661db" Feb 28 13:24:04 crc 
kubenswrapper[4897]: E0228 13:24:04.684975 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d7e1bb4c48da64e1d0dfd324933f6e569728f0db044d3e66dd1e06b0ad661db\": container with ID starting with 6d7e1bb4c48da64e1d0dfd324933f6e569728f0db044d3e66dd1e06b0ad661db not found: ID does not exist" containerID="6d7e1bb4c48da64e1d0dfd324933f6e569728f0db044d3e66dd1e06b0ad661db" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.684996 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d7e1bb4c48da64e1d0dfd324933f6e569728f0db044d3e66dd1e06b0ad661db"} err="failed to get container status \"6d7e1bb4c48da64e1d0dfd324933f6e569728f0db044d3e66dd1e06b0ad661db\": rpc error: code = NotFound desc = could not find container \"6d7e1bb4c48da64e1d0dfd324933f6e569728f0db044d3e66dd1e06b0ad661db\": container with ID starting with 6d7e1bb4c48da64e1d0dfd324933f6e569728f0db044d3e66dd1e06b0ad661db not found: ID does not exist" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.685008 4897 scope.go:117] "RemoveContainer" containerID="397ca9b24fac78e06d28aab8816987851cdbbbd8bcd94bd6a4d47eda114f87fa" Feb 28 13:24:04 crc kubenswrapper[4897]: E0228 13:24:04.685234 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"397ca9b24fac78e06d28aab8816987851cdbbbd8bcd94bd6a4d47eda114f87fa\": container with ID starting with 397ca9b24fac78e06d28aab8816987851cdbbbd8bcd94bd6a4d47eda114f87fa not found: ID does not exist" containerID="397ca9b24fac78e06d28aab8816987851cdbbbd8bcd94bd6a4d47eda114f87fa" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.685253 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"397ca9b24fac78e06d28aab8816987851cdbbbd8bcd94bd6a4d47eda114f87fa"} err="failed to get container status 
\"397ca9b24fac78e06d28aab8816987851cdbbbd8bcd94bd6a4d47eda114f87fa\": rpc error: code = NotFound desc = could not find container \"397ca9b24fac78e06d28aab8816987851cdbbbd8bcd94bd6a4d47eda114f87fa\": container with ID starting with 397ca9b24fac78e06d28aab8816987851cdbbbd8bcd94bd6a4d47eda114f87fa not found: ID does not exist" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.685265 4897 scope.go:117] "RemoveContainer" containerID="4994dd60d143fcfaa264c8c734e55e4436e67e3a3cbc0385b08f9b693918f7dd" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.759604 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538084-bdglw" Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.931371 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cx99\" (UniqueName: \"kubernetes.io/projected/96fa1520-75f6-47bb-bb62-92efc314da9c-kube-api-access-8cx99\") pod \"96fa1520-75f6-47bb-bb62-92efc314da9c\" (UID: \"96fa1520-75f6-47bb-bb62-92efc314da9c\") " Feb 28 13:24:04 crc kubenswrapper[4897]: I0228 13:24:04.938057 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96fa1520-75f6-47bb-bb62-92efc314da9c-kube-api-access-8cx99" (OuterVolumeSpecName: "kube-api-access-8cx99") pod "96fa1520-75f6-47bb-bb62-92efc314da9c" (UID: "96fa1520-75f6-47bb-bb62-92efc314da9c"). InnerVolumeSpecName "kube-api-access-8cx99". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.032876 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cx99\" (UniqueName: \"kubernetes.io/projected/96fa1520-75f6-47bb-bb62-92efc314da9c-kube-api-access-8cx99\") on node \"crc\" DevicePath \"\"" Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.310172 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wjmzz"] Feb 28 13:24:05 crc kubenswrapper[4897]: E0228 13:24:05.310365 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53e254f6-444a-4fd6-8bda-5af18b9d347c" containerName="marketplace-operator" Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.310380 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="53e254f6-444a-4fd6-8bda-5af18b9d347c" containerName="marketplace-operator" Feb 28 13:24:05 crc kubenswrapper[4897]: E0228 13:24:05.310390 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5" containerName="extract-utilities" Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.310396 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5" containerName="extract-utilities" Feb 28 13:24:05 crc kubenswrapper[4897]: E0228 13:24:05.310406 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1acb2f9f-f650-4f19-965e-48ba5a1ddac2" containerName="extract-utilities" Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.310412 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="1acb2f9f-f650-4f19-965e-48ba5a1ddac2" containerName="extract-utilities" Feb 28 13:24:05 crc kubenswrapper[4897]: E0228 13:24:05.310420 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34293634-5315-4dac-94b9-258b99c8a9c1" containerName="registry-server" Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.310427 4897 
state_mem.go:107] "Deleted CPUSet assignment" podUID="34293634-5315-4dac-94b9-258b99c8a9c1" containerName="registry-server" Feb 28 13:24:05 crc kubenswrapper[4897]: E0228 13:24:05.310433 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5" containerName="registry-server" Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.310439 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5" containerName="registry-server" Feb 28 13:24:05 crc kubenswrapper[4897]: E0228 13:24:05.310448 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5" containerName="extract-content" Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.310453 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5" containerName="extract-content" Feb 28 13:24:05 crc kubenswrapper[4897]: E0228 13:24:05.310463 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1acb2f9f-f650-4f19-965e-48ba5a1ddac2" containerName="registry-server" Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.310468 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="1acb2f9f-f650-4f19-965e-48ba5a1ddac2" containerName="registry-server" Feb 28 13:24:05 crc kubenswrapper[4897]: E0228 13:24:05.310475 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34293634-5315-4dac-94b9-258b99c8a9c1" containerName="extract-content" Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.310480 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="34293634-5315-4dac-94b9-258b99c8a9c1" containerName="extract-content" Feb 28 13:24:05 crc kubenswrapper[4897]: E0228 13:24:05.310490 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1acb2f9f-f650-4f19-965e-48ba5a1ddac2" containerName="extract-content" Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.310496 4897 
state_mem.go:107] "Deleted CPUSet assignment" podUID="1acb2f9f-f650-4f19-965e-48ba5a1ddac2" containerName="extract-content" Feb 28 13:24:05 crc kubenswrapper[4897]: E0228 13:24:05.310504 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34293634-5315-4dac-94b9-258b99c8a9c1" containerName="extract-utilities" Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.310510 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="34293634-5315-4dac-94b9-258b99c8a9c1" containerName="extract-utilities" Feb 28 13:24:05 crc kubenswrapper[4897]: E0228 13:24:05.310515 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c752ba9a-f6f8-4530-91a9-c06ff609e9d8" containerName="extract-content" Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.310520 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="c752ba9a-f6f8-4530-91a9-c06ff609e9d8" containerName="extract-content" Feb 28 13:24:05 crc kubenswrapper[4897]: E0228 13:24:05.310528 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96fa1520-75f6-47bb-bb62-92efc314da9c" containerName="oc" Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.310533 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="96fa1520-75f6-47bb-bb62-92efc314da9c" containerName="oc" Feb 28 13:24:05 crc kubenswrapper[4897]: E0228 13:24:05.310541 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c752ba9a-f6f8-4530-91a9-c06ff609e9d8" containerName="registry-server" Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.310547 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="c752ba9a-f6f8-4530-91a9-c06ff609e9d8" containerName="registry-server" Feb 28 13:24:05 crc kubenswrapper[4897]: E0228 13:24:05.310558 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c752ba9a-f6f8-4530-91a9-c06ff609e9d8" containerName="extract-utilities" Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.310563 4897 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="c752ba9a-f6f8-4530-91a9-c06ff609e9d8" containerName="extract-utilities" Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.310643 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="34293634-5315-4dac-94b9-258b99c8a9c1" containerName="registry-server" Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.310652 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="657eb4a5-6cf3-4d0e-bc8a-49d5eddda1a5" containerName="registry-server" Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.310662 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="96fa1520-75f6-47bb-bb62-92efc314da9c" containerName="oc" Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.310670 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="c752ba9a-f6f8-4530-91a9-c06ff609e9d8" containerName="registry-server" Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.310678 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="53e254f6-444a-4fd6-8bda-5af18b9d347c" containerName="marketplace-operator" Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.310685 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="53e254f6-444a-4fd6-8bda-5af18b9d347c" containerName="marketplace-operator" Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.310695 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="1acb2f9f-f650-4f19-965e-48ba5a1ddac2" containerName="registry-server" Feb 28 13:24:05 crc kubenswrapper[4897]: E0228 13:24:05.310771 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53e254f6-444a-4fd6-8bda-5af18b9d347c" containerName="marketplace-operator" Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.310778 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="53e254f6-444a-4fd6-8bda-5af18b9d347c" containerName="marketplace-operator" Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.312442 4897 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wjmzz"
Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.315647 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.364018 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wjmzz"]
Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.364850 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gms2c\" (UniqueName: \"kubernetes.io/projected/f60bfd3b-75e8-49ec-bc18-32660c88045d-kube-api-access-gms2c\") pod \"certified-operators-wjmzz\" (UID: \"f60bfd3b-75e8-49ec-bc18-32660c88045d\") " pod="openshift-marketplace/certified-operators-wjmzz"
Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.364880 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f60bfd3b-75e8-49ec-bc18-32660c88045d-catalog-content\") pod \"certified-operators-wjmzz\" (UID: \"f60bfd3b-75e8-49ec-bc18-32660c88045d\") " pod="openshift-marketplace/certified-operators-wjmzz"
Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.364944 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f60bfd3b-75e8-49ec-bc18-32660c88045d-utilities\") pod \"certified-operators-wjmzz\" (UID: \"f60bfd3b-75e8-49ec-bc18-32660c88045d\") " pod="openshift-marketplace/certified-operators-wjmzz"
Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.392820 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538084-bdglw" event={"ID":"96fa1520-75f6-47bb-bb62-92efc314da9c","Type":"ContainerDied","Data":"a514b150aa222b182dd52ac36af8bb6e51c3518b7742d94ed0c5568edfcd67f5"}
Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.392863 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a514b150aa222b182dd52ac36af8bb6e51c3518b7742d94ed0c5568edfcd67f5"
Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.393340 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538084-bdglw"
Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.397972 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-b4nxz"
Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.465697 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f60bfd3b-75e8-49ec-bc18-32660c88045d-utilities\") pod \"certified-operators-wjmzz\" (UID: \"f60bfd3b-75e8-49ec-bc18-32660c88045d\") " pod="openshift-marketplace/certified-operators-wjmzz"
Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.466040 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gms2c\" (UniqueName: \"kubernetes.io/projected/f60bfd3b-75e8-49ec-bc18-32660c88045d-kube-api-access-gms2c\") pod \"certified-operators-wjmzz\" (UID: \"f60bfd3b-75e8-49ec-bc18-32660c88045d\") " pod="openshift-marketplace/certified-operators-wjmzz"
Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.466096 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f60bfd3b-75e8-49ec-bc18-32660c88045d-catalog-content\") pod \"certified-operators-wjmzz\" (UID: \"f60bfd3b-75e8-49ec-bc18-32660c88045d\") " pod="openshift-marketplace/certified-operators-wjmzz"
Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.466427 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f60bfd3b-75e8-49ec-bc18-32660c88045d-utilities\") pod \"certified-operators-wjmzz\" (UID: \"f60bfd3b-75e8-49ec-bc18-32660c88045d\") " pod="openshift-marketplace/certified-operators-wjmzz"
Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.466530 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f60bfd3b-75e8-49ec-bc18-32660c88045d-catalog-content\") pod \"certified-operators-wjmzz\" (UID: \"f60bfd3b-75e8-49ec-bc18-32660c88045d\") " pod="openshift-marketplace/certified-operators-wjmzz"
Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.493039 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gms2c\" (UniqueName: \"kubernetes.io/projected/f60bfd3b-75e8-49ec-bc18-32660c88045d-kube-api-access-gms2c\") pod \"certified-operators-wjmzz\" (UID: \"f60bfd3b-75e8-49ec-bc18-32660c88045d\") " pod="openshift-marketplace/certified-operators-wjmzz"
Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.508493 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vrtf6"]
Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.509443 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vrtf6"
Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.511833 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.520570 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vrtf6"]
Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.567650 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkchr\" (UniqueName: \"kubernetes.io/projected/35856bb5-8436-497d-a4c1-2dac4df4a552-kube-api-access-mkchr\") pod \"community-operators-vrtf6\" (UID: \"35856bb5-8436-497d-a4c1-2dac4df4a552\") " pod="openshift-marketplace/community-operators-vrtf6"
Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.567718 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35856bb5-8436-497d-a4c1-2dac4df4a552-utilities\") pod \"community-operators-vrtf6\" (UID: \"35856bb5-8436-497d-a4c1-2dac4df4a552\") " pod="openshift-marketplace/community-operators-vrtf6"
Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.567776 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35856bb5-8436-497d-a4c1-2dac4df4a552-catalog-content\") pod \"community-operators-vrtf6\" (UID: \"35856bb5-8436-497d-a4c1-2dac4df4a552\") " pod="openshift-marketplace/community-operators-vrtf6"
Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.631505 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wjmzz"
Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.669165 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkchr\" (UniqueName: \"kubernetes.io/projected/35856bb5-8436-497d-a4c1-2dac4df4a552-kube-api-access-mkchr\") pod \"community-operators-vrtf6\" (UID: \"35856bb5-8436-497d-a4c1-2dac4df4a552\") " pod="openshift-marketplace/community-operators-vrtf6"
Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.669499 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35856bb5-8436-497d-a4c1-2dac4df4a552-utilities\") pod \"community-operators-vrtf6\" (UID: \"35856bb5-8436-497d-a4c1-2dac4df4a552\") " pod="openshift-marketplace/community-operators-vrtf6"
Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.669571 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35856bb5-8436-497d-a4c1-2dac4df4a552-catalog-content\") pod \"community-operators-vrtf6\" (UID: \"35856bb5-8436-497d-a4c1-2dac4df4a552\") " pod="openshift-marketplace/community-operators-vrtf6"
Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.670408 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35856bb5-8436-497d-a4c1-2dac4df4a552-utilities\") pod \"community-operators-vrtf6\" (UID: \"35856bb5-8436-497d-a4c1-2dac4df4a552\") " pod="openshift-marketplace/community-operators-vrtf6"
Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.670585 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35856bb5-8436-497d-a4c1-2dac4df4a552-catalog-content\") pod \"community-operators-vrtf6\" (UID: \"35856bb5-8436-497d-a4c1-2dac4df4a552\") " pod="openshift-marketplace/community-operators-vrtf6"
Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.686824 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkchr\" (UniqueName: \"kubernetes.io/projected/35856bb5-8436-497d-a4c1-2dac4df4a552-kube-api-access-mkchr\") pod \"community-operators-vrtf6\" (UID: \"35856bb5-8436-497d-a4c1-2dac4df4a552\") " pod="openshift-marketplace/community-operators-vrtf6"
Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.816630 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538078-hj8mj"]
Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.817800 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538078-hj8mj"]
Feb 28 13:24:05 crc kubenswrapper[4897]: I0228 13:24:05.840190 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vrtf6"
Feb 28 13:24:06 crc kubenswrapper[4897]: I0228 13:24:06.046868 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wjmzz"]
Feb 28 13:24:06 crc kubenswrapper[4897]: W0228 13:24:06.051066 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf60bfd3b_75e8_49ec_bc18_32660c88045d.slice/crio-e261ab0d4c36f94110ad839d9117929540ffe1df69e799b9beaf828a017ced38 WatchSource:0}: Error finding container e261ab0d4c36f94110ad839d9117929540ffe1df69e799b9beaf828a017ced38: Status 404 returned error can't find the container with id e261ab0d4c36f94110ad839d9117929540ffe1df69e799b9beaf828a017ced38
Feb 28 13:24:06 crc kubenswrapper[4897]: I0228 13:24:06.248140 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vrtf6"]
Feb 28 13:24:06 crc kubenswrapper[4897]: W0228 13:24:06.309299 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35856bb5_8436_497d_a4c1_2dac4df4a552.slice/crio-93a4b46d29442b0323cf4f026dff9218868c3da16ec961d0e44468ee542e0620 WatchSource:0}: Error finding container 93a4b46d29442b0323cf4f026dff9218868c3da16ec961d0e44468ee542e0620: Status 404 returned error can't find the container with id 93a4b46d29442b0323cf4f026dff9218868c3da16ec961d0e44468ee542e0620
Feb 28 13:24:06 crc kubenswrapper[4897]: I0228 13:24:06.405579 4897 generic.go:334] "Generic (PLEG): container finished" podID="f60bfd3b-75e8-49ec-bc18-32660c88045d" containerID="940899525fd6557eaeceec356db203feded43af98062c55632447e2420605f8f" exitCode=0
Feb 28 13:24:07 crc kubenswrapper[4897]: I0228 13:24:06.406713 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wjmzz" event={"ID":"f60bfd3b-75e8-49ec-bc18-32660c88045d","Type":"ContainerDied","Data":"940899525fd6557eaeceec356db203feded43af98062c55632447e2420605f8f"}
Feb 28 13:24:07 crc kubenswrapper[4897]: I0228 13:24:06.406750 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wjmzz" event={"ID":"f60bfd3b-75e8-49ec-bc18-32660c88045d","Type":"ContainerStarted","Data":"e261ab0d4c36f94110ad839d9117929540ffe1df69e799b9beaf828a017ced38"}
Feb 28 13:24:07 crc kubenswrapper[4897]: I0228 13:24:06.409152 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vrtf6" event={"ID":"35856bb5-8436-497d-a4c1-2dac4df4a552","Type":"ContainerStarted","Data":"93a4b46d29442b0323cf4f026dff9218868c3da16ec961d0e44468ee542e0620"}
Feb 28 13:24:07 crc kubenswrapper[4897]: I0228 13:24:06.463509 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34293634-5315-4dac-94b9-258b99c8a9c1" path="/var/lib/kubelet/pods/34293634-5315-4dac-94b9-258b99c8a9c1/volumes"
Feb 28 13:24:07 crc kubenswrapper[4897]: I0228 13:24:06.464413 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53e254f6-444a-4fd6-8bda-5af18b9d347c" path="/var/lib/kubelet/pods/53e254f6-444a-4fd6-8bda-5af18b9d347c/volumes"
Feb 28 13:24:07 crc kubenswrapper[4897]: I0228 13:24:06.465546 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79743a51-c0b2-45b2-99d3-385e0b2f2c6f" path="/var/lib/kubelet/pods/79743a51-c0b2-45b2-99d3-385e0b2f2c6f/volumes"
Feb 28 13:24:07 crc kubenswrapper[4897]: I0228 13:24:06.468360 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c752ba9a-f6f8-4530-91a9-c06ff609e9d8" path="/var/lib/kubelet/pods/c752ba9a-f6f8-4530-91a9-c06ff609e9d8/volumes"
Feb 28 13:24:07 crc kubenswrapper[4897]: I0228 13:24:07.418464 4897 generic.go:334] "Generic (PLEG): container finished" podID="f60bfd3b-75e8-49ec-bc18-32660c88045d" containerID="1da0e73aa3cdc0cef8e0ca99e410fc1e1dee50db645840a5e74b6a9b8e7d4d14" exitCode=0
Feb 28 13:24:07 crc kubenswrapper[4897]: I0228 13:24:07.418521 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wjmzz" event={"ID":"f60bfd3b-75e8-49ec-bc18-32660c88045d","Type":"ContainerDied","Data":"1da0e73aa3cdc0cef8e0ca99e410fc1e1dee50db645840a5e74b6a9b8e7d4d14"}
Feb 28 13:24:07 crc kubenswrapper[4897]: I0228 13:24:07.421207 4897 generic.go:334] "Generic (PLEG): container finished" podID="35856bb5-8436-497d-a4c1-2dac4df4a552" containerID="a381934f8273dce6530f441436e2fd9d16b222174d10c1e1bde23b491570508d" exitCode=0
Feb 28 13:24:07 crc kubenswrapper[4897]: I0228 13:24:07.421261 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vrtf6" event={"ID":"35856bb5-8436-497d-a4c1-2dac4df4a552","Type":"ContainerDied","Data":"a381934f8273dce6530f441436e2fd9d16b222174d10c1e1bde23b491570508d"}
Feb 28 13:24:07 crc kubenswrapper[4897]: I0228 13:24:07.717637 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fhsw4"]
Feb 28 13:24:07 crc kubenswrapper[4897]: I0228 13:24:07.719728 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fhsw4"
Feb 28 13:24:07 crc kubenswrapper[4897]: I0228 13:24:07.722377 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 28 13:24:07 crc kubenswrapper[4897]: I0228 13:24:07.735258 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fhsw4"]
Feb 28 13:24:07 crc kubenswrapper[4897]: I0228 13:24:07.910361 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f72e233a-6e31-4ca5-b12e-3c4213a80ad6-catalog-content\") pod \"redhat-marketplace-fhsw4\" (UID: \"f72e233a-6e31-4ca5-b12e-3c4213a80ad6\") " pod="openshift-marketplace/redhat-marketplace-fhsw4"
Feb 28 13:24:07 crc kubenswrapper[4897]: I0228 13:24:07.910450 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f72e233a-6e31-4ca5-b12e-3c4213a80ad6-utilities\") pod \"redhat-marketplace-fhsw4\" (UID: \"f72e233a-6e31-4ca5-b12e-3c4213a80ad6\") " pod="openshift-marketplace/redhat-marketplace-fhsw4"
Feb 28 13:24:07 crc kubenswrapper[4897]: I0228 13:24:07.910564 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4d5g\" (UniqueName: \"kubernetes.io/projected/f72e233a-6e31-4ca5-b12e-3c4213a80ad6-kube-api-access-k4d5g\") pod \"redhat-marketplace-fhsw4\" (UID: \"f72e233a-6e31-4ca5-b12e-3c4213a80ad6\") " pod="openshift-marketplace/redhat-marketplace-fhsw4"
Feb 28 13:24:07 crc kubenswrapper[4897]: I0228 13:24:07.917743 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tr46p"]
Feb 28 13:24:07 crc kubenswrapper[4897]: I0228 13:24:07.918763 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tr46p"
Feb 28 13:24:07 crc kubenswrapper[4897]: I0228 13:24:07.920825 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 28 13:24:07 crc kubenswrapper[4897]: I0228 13:24:07.931883 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tr46p"]
Feb 28 13:24:08 crc kubenswrapper[4897]: I0228 13:24:08.011807 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4d5g\" (UniqueName: \"kubernetes.io/projected/f72e233a-6e31-4ca5-b12e-3c4213a80ad6-kube-api-access-k4d5g\") pod \"redhat-marketplace-fhsw4\" (UID: \"f72e233a-6e31-4ca5-b12e-3c4213a80ad6\") " pod="openshift-marketplace/redhat-marketplace-fhsw4"
Feb 28 13:24:08 crc kubenswrapper[4897]: I0228 13:24:08.011860 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f72e233a-6e31-4ca5-b12e-3c4213a80ad6-catalog-content\") pod \"redhat-marketplace-fhsw4\" (UID: \"f72e233a-6e31-4ca5-b12e-3c4213a80ad6\") " pod="openshift-marketplace/redhat-marketplace-fhsw4"
Feb 28 13:24:08 crc kubenswrapper[4897]: I0228 13:24:08.011883 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f72e233a-6e31-4ca5-b12e-3c4213a80ad6-utilities\") pod \"redhat-marketplace-fhsw4\" (UID: \"f72e233a-6e31-4ca5-b12e-3c4213a80ad6\") " pod="openshift-marketplace/redhat-marketplace-fhsw4"
Feb 28 13:24:08 crc kubenswrapper[4897]: I0228 13:24:08.012242 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f72e233a-6e31-4ca5-b12e-3c4213a80ad6-utilities\") pod \"redhat-marketplace-fhsw4\" (UID: \"f72e233a-6e31-4ca5-b12e-3c4213a80ad6\") " pod="openshift-marketplace/redhat-marketplace-fhsw4"
Feb 28 13:24:08 crc kubenswrapper[4897]: I0228 13:24:08.012304 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f72e233a-6e31-4ca5-b12e-3c4213a80ad6-catalog-content\") pod \"redhat-marketplace-fhsw4\" (UID: \"f72e233a-6e31-4ca5-b12e-3c4213a80ad6\") " pod="openshift-marketplace/redhat-marketplace-fhsw4"
Feb 28 13:24:08 crc kubenswrapper[4897]: I0228 13:24:08.038139 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4d5g\" (UniqueName: \"kubernetes.io/projected/f72e233a-6e31-4ca5-b12e-3c4213a80ad6-kube-api-access-k4d5g\") pod \"redhat-marketplace-fhsw4\" (UID: \"f72e233a-6e31-4ca5-b12e-3c4213a80ad6\") " pod="openshift-marketplace/redhat-marketplace-fhsw4"
Feb 28 13:24:08 crc kubenswrapper[4897]: I0228 13:24:08.075987 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fhsw4"
Feb 28 13:24:08 crc kubenswrapper[4897]: I0228 13:24:08.113681 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5fzx\" (UniqueName: \"kubernetes.io/projected/54378322-d915-43c1-a3d9-837fd5b9121d-kube-api-access-h5fzx\") pod \"redhat-operators-tr46p\" (UID: \"54378322-d915-43c1-a3d9-837fd5b9121d\") " pod="openshift-marketplace/redhat-operators-tr46p"
Feb 28 13:24:08 crc kubenswrapper[4897]: I0228 13:24:08.113971 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54378322-d915-43c1-a3d9-837fd5b9121d-catalog-content\") pod \"redhat-operators-tr46p\" (UID: \"54378322-d915-43c1-a3d9-837fd5b9121d\") " pod="openshift-marketplace/redhat-operators-tr46p"
Feb 28 13:24:08 crc kubenswrapper[4897]: I0228 13:24:08.114133 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54378322-d915-43c1-a3d9-837fd5b9121d-utilities\") pod \"redhat-operators-tr46p\" (UID: \"54378322-d915-43c1-a3d9-837fd5b9121d\") " pod="openshift-marketplace/redhat-operators-tr46p"
Feb 28 13:24:08 crc kubenswrapper[4897]: I0228 13:24:08.215687 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54378322-d915-43c1-a3d9-837fd5b9121d-catalog-content\") pod \"redhat-operators-tr46p\" (UID: \"54378322-d915-43c1-a3d9-837fd5b9121d\") " pod="openshift-marketplace/redhat-operators-tr46p"
Feb 28 13:24:08 crc kubenswrapper[4897]: I0228 13:24:08.216057 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54378322-d915-43c1-a3d9-837fd5b9121d-utilities\") pod \"redhat-operators-tr46p\" (UID: \"54378322-d915-43c1-a3d9-837fd5b9121d\") " pod="openshift-marketplace/redhat-operators-tr46p"
Feb 28 13:24:08 crc kubenswrapper[4897]: I0228 13:24:08.216099 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5fzx\" (UniqueName: \"kubernetes.io/projected/54378322-d915-43c1-a3d9-837fd5b9121d-kube-api-access-h5fzx\") pod \"redhat-operators-tr46p\" (UID: \"54378322-d915-43c1-a3d9-837fd5b9121d\") " pod="openshift-marketplace/redhat-operators-tr46p"
Feb 28 13:24:08 crc kubenswrapper[4897]: I0228 13:24:08.216909 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54378322-d915-43c1-a3d9-837fd5b9121d-catalog-content\") pod \"redhat-operators-tr46p\" (UID: \"54378322-d915-43c1-a3d9-837fd5b9121d\") " pod="openshift-marketplace/redhat-operators-tr46p"
Feb 28 13:24:08 crc kubenswrapper[4897]: I0228 13:24:08.217325 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54378322-d915-43c1-a3d9-837fd5b9121d-utilities\") pod \"redhat-operators-tr46p\" (UID: \"54378322-d915-43c1-a3d9-837fd5b9121d\") " pod="openshift-marketplace/redhat-operators-tr46p"
Feb 28 13:24:08 crc kubenswrapper[4897]: I0228 13:24:08.241779 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5fzx\" (UniqueName: \"kubernetes.io/projected/54378322-d915-43c1-a3d9-837fd5b9121d-kube-api-access-h5fzx\") pod \"redhat-operators-tr46p\" (UID: \"54378322-d915-43c1-a3d9-837fd5b9121d\") " pod="openshift-marketplace/redhat-operators-tr46p"
Feb 28 13:24:08 crc kubenswrapper[4897]: I0228 13:24:08.289446 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tr46p"
Feb 28 13:24:08 crc kubenswrapper[4897]: I0228 13:24:08.429231 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vrtf6" event={"ID":"35856bb5-8436-497d-a4c1-2dac4df4a552","Type":"ContainerStarted","Data":"df90c181436914cde4d8ffd11412a457547ae65d955ad1c015758d242bc65980"}
Feb 28 13:24:08 crc kubenswrapper[4897]: I0228 13:24:08.432472 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wjmzz" event={"ID":"f60bfd3b-75e8-49ec-bc18-32660c88045d","Type":"ContainerStarted","Data":"b118e3ba62e058db8d0dd6789fef1071fd55b7a44b61f73dd55414292bca7eea"}
Feb 28 13:24:08 crc kubenswrapper[4897]: I0228 13:24:08.472150 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wjmzz" podStartSLOduration=2.043700245 podStartE2EDuration="3.472127599s" podCreationTimestamp="2026-02-28 13:24:05 +0000 UTC" firstStartedPulling="2026-02-28 13:24:06.408293146 +0000 UTC m=+460.650613803" lastFinishedPulling="2026-02-28 13:24:07.83672047 +0000 UTC m=+462.079041157" observedRunningTime="2026-02-28 13:24:08.469448321 +0000 UTC m=+462.711768978" watchObservedRunningTime="2026-02-28 13:24:08.472127599 +0000 UTC m=+462.714448256"
Feb 28 13:24:08 crc kubenswrapper[4897]: I0228 13:24:08.496690 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fhsw4"]
Feb 28 13:24:08 crc kubenswrapper[4897]: W0228 13:24:08.533659 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf72e233a_6e31_4ca5_b12e_3c4213a80ad6.slice/crio-b04c383bbbea0d857f98609a7a382bdfb7e2eef9ca4b76ed600b0777533b7f7d WatchSource:0}: Error finding container b04c383bbbea0d857f98609a7a382bdfb7e2eef9ca4b76ed600b0777533b7f7d: Status 404 returned error can't find the container with id b04c383bbbea0d857f98609a7a382bdfb7e2eef9ca4b76ed600b0777533b7f7d
Feb 28 13:24:08 crc kubenswrapper[4897]: I0228 13:24:08.690460 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tr46p"]
Feb 28 13:24:08 crc kubenswrapper[4897]: W0228 13:24:08.725962 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54378322_d915_43c1_a3d9_837fd5b9121d.slice/crio-2b483065d172988861dc11aec2545e422eee004557d4242020b4a8b3e8651667 WatchSource:0}: Error finding container 2b483065d172988861dc11aec2545e422eee004557d4242020b4a8b3e8651667: Status 404 returned error can't find the container with id 2b483065d172988861dc11aec2545e422eee004557d4242020b4a8b3e8651667
Feb 28 13:24:09 crc kubenswrapper[4897]: I0228 13:24:09.440897 4897 generic.go:334] "Generic (PLEG): container finished" podID="54378322-d915-43c1-a3d9-837fd5b9121d" containerID="eee0311235f7b84647419b674222d0a9d096ba228c1ce5abb592b52f06890a47" exitCode=0
Feb 28 13:24:09 crc kubenswrapper[4897]: I0228 13:24:09.440984 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tr46p" event={"ID":"54378322-d915-43c1-a3d9-837fd5b9121d","Type":"ContainerDied","Data":"eee0311235f7b84647419b674222d0a9d096ba228c1ce5abb592b52f06890a47"}
Feb 28 13:24:09 crc kubenswrapper[4897]: I0228 13:24:09.441064 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tr46p" event={"ID":"54378322-d915-43c1-a3d9-837fd5b9121d","Type":"ContainerStarted","Data":"2b483065d172988861dc11aec2545e422eee004557d4242020b4a8b3e8651667"}
Feb 28 13:24:09 crc kubenswrapper[4897]: I0228 13:24:09.446516 4897 generic.go:334] "Generic (PLEG): container finished" podID="f72e233a-6e31-4ca5-b12e-3c4213a80ad6" containerID="7bcef2ed4fd92c402a4ed9fd0a2843a21e765ee67a01a4412393ef4a38fedd7a" exitCode=0
Feb 28 13:24:09 crc kubenswrapper[4897]: I0228 13:24:09.446649 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fhsw4" event={"ID":"f72e233a-6e31-4ca5-b12e-3c4213a80ad6","Type":"ContainerDied","Data":"7bcef2ed4fd92c402a4ed9fd0a2843a21e765ee67a01a4412393ef4a38fedd7a"}
Feb 28 13:24:09 crc kubenswrapper[4897]: I0228 13:24:09.446696 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fhsw4" event={"ID":"f72e233a-6e31-4ca5-b12e-3c4213a80ad6","Type":"ContainerStarted","Data":"b04c383bbbea0d857f98609a7a382bdfb7e2eef9ca4b76ed600b0777533b7f7d"}
Feb 28 13:24:09 crc kubenswrapper[4897]: I0228 13:24:09.453689 4897 generic.go:334] "Generic (PLEG): container finished" podID="35856bb5-8436-497d-a4c1-2dac4df4a552" containerID="df90c181436914cde4d8ffd11412a457547ae65d955ad1c015758d242bc65980" exitCode=0
Feb 28 13:24:09 crc kubenswrapper[4897]: I0228 13:24:09.453848 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vrtf6" event={"ID":"35856bb5-8436-497d-a4c1-2dac4df4a552","Type":"ContainerDied","Data":"df90c181436914cde4d8ffd11412a457547ae65d955ad1c015758d242bc65980"}
Feb 28 13:24:10 crc kubenswrapper[4897]: I0228 13:24:10.479853 4897 generic.go:334] "Generic (PLEG): container finished" podID="f72e233a-6e31-4ca5-b12e-3c4213a80ad6" containerID="3ea9a7a2824aee3e82742a5497f337474256ca18a9016c5aba868f869dbf7f6a" exitCode=0
Feb 28 13:24:10 crc kubenswrapper[4897]: I0228 13:24:10.495494 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vrtf6" event={"ID":"35856bb5-8436-497d-a4c1-2dac4df4a552","Type":"ContainerStarted","Data":"1ab13790db0a041145489716aae83fb8f09ecb5e2ac5394c1112791491b298bc"}
Feb 28 13:24:10 crc kubenswrapper[4897]: I0228 13:24:10.495625 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fhsw4" event={"ID":"f72e233a-6e31-4ca5-b12e-3c4213a80ad6","Type":"ContainerDied","Data":"3ea9a7a2824aee3e82742a5497f337474256ca18a9016c5aba868f869dbf7f6a"}
Feb 28 13:24:10 crc kubenswrapper[4897]: I0228 13:24:10.503782 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vrtf6" podStartSLOduration=3.070066728 podStartE2EDuration="5.503756673s" podCreationTimestamp="2026-02-28 13:24:05 +0000 UTC" firstStartedPulling="2026-02-28 13:24:07.423457655 +0000 UTC m=+461.665778342" lastFinishedPulling="2026-02-28 13:24:09.8571476 +0000 UTC m=+464.099468287" observedRunningTime="2026-02-28 13:24:10.498071238 +0000 UTC m=+464.740391925" watchObservedRunningTime="2026-02-28 13:24:10.503756673 +0000 UTC m=+464.746077370"
Feb 28 13:24:11 crc kubenswrapper[4897]: I0228 13:24:11.488974 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fhsw4" event={"ID":"f72e233a-6e31-4ca5-b12e-3c4213a80ad6","Type":"ContainerStarted","Data":"13a54f65dd760bde9f753eb4d1cc97f14472a770ff96c07ee2167efc23d86d09"}
Feb 28 13:24:11 crc kubenswrapper[4897]: I0228 13:24:11.491365 4897 generic.go:334] "Generic (PLEG): container finished" podID="54378322-d915-43c1-a3d9-837fd5b9121d" containerID="e3f6e7cb4004957ad8099a177ba56ffd3fc14833ab7ebe53006d498710d8a837" exitCode=0
Feb 28 13:24:11 crc kubenswrapper[4897]: I0228 13:24:11.491455 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tr46p" event={"ID":"54378322-d915-43c1-a3d9-837fd5b9121d","Type":"ContainerDied","Data":"e3f6e7cb4004957ad8099a177ba56ffd3fc14833ab7ebe53006d498710d8a837"}
Feb 28 13:24:11 crc kubenswrapper[4897]: I0228 13:24:11.507725 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fhsw4" podStartSLOduration=3.035684274 podStartE2EDuration="4.507706736s" podCreationTimestamp="2026-02-28 13:24:07 +0000 UTC" firstStartedPulling="2026-02-28 13:24:09.447927442 +0000 UTC m=+463.690248099" lastFinishedPulling="2026-02-28 13:24:10.919949894 +0000 UTC m=+465.162270561" observedRunningTime="2026-02-28 13:24:11.506364997 +0000 UTC m=+465.748685664" watchObservedRunningTime="2026-02-28 13:24:11.507706736 +0000 UTC m=+465.750027393"
Feb 28 13:24:12 crc kubenswrapper[4897]: I0228 13:24:12.498423 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tr46p" event={"ID":"54378322-d915-43c1-a3d9-837fd5b9121d","Type":"ContainerStarted","Data":"2aced8c555467dc387bb4af424f1013774709edd511fc76152e46747ceb1ecc7"}
Feb 28 13:24:15 crc kubenswrapper[4897]: I0228 13:24:15.632494 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wjmzz"
Feb 28 13:24:15 crc kubenswrapper[4897]: I0228 13:24:15.632867 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wjmzz"
Feb 28 13:24:15 crc kubenswrapper[4897]: I0228 13:24:15.705128 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wjmzz"
Feb 28 13:24:15 crc kubenswrapper[4897]: I0228 13:24:15.736244 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tr46p" podStartSLOduration=5.957904892 podStartE2EDuration="8.736227004s" podCreationTimestamp="2026-02-28 13:24:07 +0000 UTC" firstStartedPulling="2026-02-28 13:24:09.443148943 +0000 UTC m=+463.685469630" lastFinishedPulling="2026-02-28 13:24:12.221471075 +0000 UTC m=+466.463791742" observedRunningTime="2026-02-28 13:24:12.518679923 +0000 UTC m=+466.761000580" watchObservedRunningTime="2026-02-28 13:24:15.736227004 +0000 UTC m=+469.978547671"
Feb 28 13:24:15 crc kubenswrapper[4897]: I0228 13:24:15.841755 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vrtf6"
Feb 28 13:24:15 crc kubenswrapper[4897]: I0228 13:24:15.841791 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vrtf6"
Feb 28 13:24:15 crc kubenswrapper[4897]: I0228 13:24:15.888954 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vrtf6"
Feb 28 13:24:16 crc kubenswrapper[4897]: I0228 13:24:16.564763 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wjmzz"
Feb 28 13:24:16 crc kubenswrapper[4897]: I0228 13:24:16.579491 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vrtf6"
Feb 28 13:24:18 crc kubenswrapper[4897]: I0228 13:24:18.076600 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fhsw4"
Feb 28 13:24:18 crc kubenswrapper[4897]: I0228 13:24:18.076676 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fhsw4"
Feb 28 13:24:18 crc kubenswrapper[4897]: I0228 13:24:18.142372 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fhsw4"
Feb 28 13:24:18 crc kubenswrapper[4897]: I0228 13:24:18.290186 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tr46p"
Feb 28 13:24:18 crc kubenswrapper[4897]: I0228 13:24:18.290297 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tr46p"
Feb 28 13:24:18 crc kubenswrapper[4897]: I0228 13:24:18.597730 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fhsw4"
Feb 28 13:24:19 crc kubenswrapper[4897]: I0228 13:24:19.344116 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tr46p" podUID="54378322-d915-43c1-a3d9-837fd5b9121d" containerName="registry-server" probeResult="failure" output=<
Feb 28 13:24:19 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s
Feb 28 13:24:19 crc kubenswrapper[4897]: >
Feb 28 13:24:20 crc kubenswrapper[4897]: I0228 13:24:20.114023 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-w2ltv"
Feb 28 13:24:20 crc kubenswrapper[4897]: I0228 13:24:20.205383 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-k72ms"]
Feb 28 13:24:28 crc kubenswrapper[4897]: I0228 13:24:28.354839 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tr46p"
Feb 28 13:24:28 crc kubenswrapper[4897]: I0228 13:24:28.428512 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tr46p"
Feb 28 13:24:33 crc kubenswrapper[4897]: I0228 13:24:33.371193 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 28 13:24:33 crc kubenswrapper[4897]: I0228 13:24:33.372054 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 28 13:24:33 crc kubenswrapper[4897]: I0228 13:24:33.372137 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brq22"
Feb 28 13:24:33 crc kubenswrapper[4897]: I0228 13:24:33.373227 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2066137a00e095b0ce2896f3008520e157182f7fcabc5b0857bfc026f772801b"} pod="openshift-machine-config-operator/machine-config-daemon-brq22" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 28 13:24:33 crc kubenswrapper[4897]: I0228 13:24:33.373408 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" containerID="cri-o://2066137a00e095b0ce2896f3008520e157182f7fcabc5b0857bfc026f772801b" gracePeriod=600
Feb 28 13:24:33 crc kubenswrapper[4897]: I0228 13:24:33.627394 4897 generic.go:334] "Generic (PLEG): container finished" podID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerID="2066137a00e095b0ce2896f3008520e157182f7fcabc5b0857bfc026f772801b" exitCode=0
Feb 28 13:24:33 crc kubenswrapper[4897]: I0228 13:24:33.627532 4897 kubelet.go:2453] "SyncLoop
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerDied","Data":"2066137a00e095b0ce2896f3008520e157182f7fcabc5b0857bfc026f772801b"} Feb 28 13:24:33 crc kubenswrapper[4897]: I0228 13:24:33.627691 4897 scope.go:117] "RemoveContainer" containerID="da8311a9c42869f106fa19f0c2268e4146c9732ea75e1282d537114b54d8c20d" Feb 28 13:24:34 crc kubenswrapper[4897]: I0228 13:24:34.635125 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerStarted","Data":"f290e5ce6a8f9eb0ed11d10d65558a545341abdde26a9a87dc391672358e93e3"} Feb 28 13:24:45 crc kubenswrapper[4897]: I0228 13:24:45.255390 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" podUID="5a017c06-8f6f-4638-ae70-2715eb539d7c" containerName="registry" containerID="cri-o://b27bf6c1cececaecd48cd1b5cc3c3ec40cfbca32b0f0eae0d9a95944ccbadee8" gracePeriod=30 Feb 28 13:24:45 crc kubenswrapper[4897]: I0228 13:24:45.705031 4897 generic.go:334] "Generic (PLEG): container finished" podID="5a017c06-8f6f-4638-ae70-2715eb539d7c" containerID="b27bf6c1cececaecd48cd1b5cc3c3ec40cfbca32b0f0eae0d9a95944ccbadee8" exitCode=0 Feb 28 13:24:45 crc kubenswrapper[4897]: I0228 13:24:45.705293 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" event={"ID":"5a017c06-8f6f-4638-ae70-2715eb539d7c","Type":"ContainerDied","Data":"b27bf6c1cececaecd48cd1b5cc3c3ec40cfbca32b0f0eae0d9a95944ccbadee8"} Feb 28 13:24:45 crc kubenswrapper[4897]: I0228 13:24:45.761577 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:24:45 crc kubenswrapper[4897]: I0228 13:24:45.843084 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5a017c06-8f6f-4638-ae70-2715eb539d7c-registry-tls\") pod \"5a017c06-8f6f-4638-ae70-2715eb539d7c\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " Feb 28 13:24:45 crc kubenswrapper[4897]: I0228 13:24:45.843120 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5a017c06-8f6f-4638-ae70-2715eb539d7c-installation-pull-secrets\") pod \"5a017c06-8f6f-4638-ae70-2715eb539d7c\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " Feb 28 13:24:45 crc kubenswrapper[4897]: I0228 13:24:45.843147 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5a017c06-8f6f-4638-ae70-2715eb539d7c-trusted-ca\") pod \"5a017c06-8f6f-4638-ae70-2715eb539d7c\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " Feb 28 13:24:45 crc kubenswrapper[4897]: I0228 13:24:45.843180 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5a017c06-8f6f-4638-ae70-2715eb539d7c-registry-certificates\") pod \"5a017c06-8f6f-4638-ae70-2715eb539d7c\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " Feb 28 13:24:45 crc kubenswrapper[4897]: I0228 13:24:45.843199 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmsrw\" (UniqueName: \"kubernetes.io/projected/5a017c06-8f6f-4638-ae70-2715eb539d7c-kube-api-access-wmsrw\") pod \"5a017c06-8f6f-4638-ae70-2715eb539d7c\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " Feb 28 13:24:45 crc kubenswrapper[4897]: I0228 13:24:45.843219 4897 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5a017c06-8f6f-4638-ae70-2715eb539d7c-ca-trust-extracted\") pod \"5a017c06-8f6f-4638-ae70-2715eb539d7c\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " Feb 28 13:24:45 crc kubenswrapper[4897]: I0228 13:24:45.843234 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5a017c06-8f6f-4638-ae70-2715eb539d7c-bound-sa-token\") pod \"5a017c06-8f6f-4638-ae70-2715eb539d7c\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " Feb 28 13:24:45 crc kubenswrapper[4897]: I0228 13:24:45.843354 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"5a017c06-8f6f-4638-ae70-2715eb539d7c\" (UID: \"5a017c06-8f6f-4638-ae70-2715eb539d7c\") " Feb 28 13:24:45 crc kubenswrapper[4897]: I0228 13:24:45.844119 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a017c06-8f6f-4638-ae70-2715eb539d7c-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "5a017c06-8f6f-4638-ae70-2715eb539d7c" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:24:45 crc kubenswrapper[4897]: I0228 13:24:45.844330 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a017c06-8f6f-4638-ae70-2715eb539d7c-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "5a017c06-8f6f-4638-ae70-2715eb539d7c" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:24:45 crc kubenswrapper[4897]: I0228 13:24:45.851570 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a017c06-8f6f-4638-ae70-2715eb539d7c-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "5a017c06-8f6f-4638-ae70-2715eb539d7c" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:24:45 crc kubenswrapper[4897]: I0228 13:24:45.852191 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a017c06-8f6f-4638-ae70-2715eb539d7c-kube-api-access-wmsrw" (OuterVolumeSpecName: "kube-api-access-wmsrw") pod "5a017c06-8f6f-4638-ae70-2715eb539d7c" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c"). InnerVolumeSpecName "kube-api-access-wmsrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:24:45 crc kubenswrapper[4897]: I0228 13:24:45.852690 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a017c06-8f6f-4638-ae70-2715eb539d7c-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "5a017c06-8f6f-4638-ae70-2715eb539d7c" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:24:45 crc kubenswrapper[4897]: I0228 13:24:45.857255 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "5a017c06-8f6f-4638-ae70-2715eb539d7c" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 28 13:24:45 crc kubenswrapper[4897]: I0228 13:24:45.863949 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a017c06-8f6f-4638-ae70-2715eb539d7c-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "5a017c06-8f6f-4638-ae70-2715eb539d7c" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:24:45 crc kubenswrapper[4897]: I0228 13:24:45.864967 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a017c06-8f6f-4638-ae70-2715eb539d7c-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "5a017c06-8f6f-4638-ae70-2715eb539d7c" (UID: "5a017c06-8f6f-4638-ae70-2715eb539d7c"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:24:45 crc kubenswrapper[4897]: I0228 13:24:45.944409 4897 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5a017c06-8f6f-4638-ae70-2715eb539d7c-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 28 13:24:45 crc kubenswrapper[4897]: I0228 13:24:45.944494 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmsrw\" (UniqueName: \"kubernetes.io/projected/5a017c06-8f6f-4638-ae70-2715eb539d7c-kube-api-access-wmsrw\") on node \"crc\" DevicePath \"\"" Feb 28 13:24:45 crc kubenswrapper[4897]: I0228 13:24:45.944515 4897 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5a017c06-8f6f-4638-ae70-2715eb539d7c-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 28 13:24:45 crc kubenswrapper[4897]: I0228 13:24:45.944615 4897 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/5a017c06-8f6f-4638-ae70-2715eb539d7c-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 28 13:24:45 crc kubenswrapper[4897]: I0228 13:24:45.944632 4897 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5a017c06-8f6f-4638-ae70-2715eb539d7c-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 28 13:24:45 crc kubenswrapper[4897]: I0228 13:24:45.944686 4897 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5a017c06-8f6f-4638-ae70-2715eb539d7c-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 28 13:24:45 crc kubenswrapper[4897]: I0228 13:24:45.944704 4897 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5a017c06-8f6f-4638-ae70-2715eb539d7c-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 28 13:24:46 crc kubenswrapper[4897]: I0228 13:24:46.715472 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" event={"ID":"5a017c06-8f6f-4638-ae70-2715eb539d7c","Type":"ContainerDied","Data":"6a3aeb07bfe9f8d9907db70a3428fcda8f0d4ea8de442aa93865bd22c176d8d0"} Feb 28 13:24:46 crc kubenswrapper[4897]: I0228 13:24:46.715524 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-k72ms" Feb 28 13:24:46 crc kubenswrapper[4897]: I0228 13:24:46.715546 4897 scope.go:117] "RemoveContainer" containerID="b27bf6c1cececaecd48cd1b5cc3c3ec40cfbca32b0f0eae0d9a95944ccbadee8" Feb 28 13:24:46 crc kubenswrapper[4897]: I0228 13:24:46.732285 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-k72ms"] Feb 28 13:24:46 crc kubenswrapper[4897]: I0228 13:24:46.748014 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-k72ms"] Feb 28 13:24:48 crc kubenswrapper[4897]: I0228 13:24:48.464419 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a017c06-8f6f-4638-ae70-2715eb539d7c" path="/var/lib/kubelet/pods/5a017c06-8f6f-4638-ae70-2715eb539d7c/volumes" Feb 28 13:25:53 crc kubenswrapper[4897]: E0228 13:25:53.622174 4897 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.167s" Feb 28 13:26:00 crc kubenswrapper[4897]: I0228 13:26:00.139023 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538086-qkgqc"] Feb 28 13:26:00 crc kubenswrapper[4897]: E0228 13:26:00.140024 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a017c06-8f6f-4638-ae70-2715eb539d7c" containerName="registry" Feb 28 13:26:00 crc kubenswrapper[4897]: I0228 13:26:00.140039 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a017c06-8f6f-4638-ae70-2715eb539d7c" containerName="registry" Feb 28 13:26:00 crc kubenswrapper[4897]: I0228 13:26:00.140162 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a017c06-8f6f-4638-ae70-2715eb539d7c" containerName="registry" Feb 28 13:26:00 crc kubenswrapper[4897]: I0228 13:26:00.140789 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538086-qkgqc" Feb 28 13:26:00 crc kubenswrapper[4897]: I0228 13:26:00.142690 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 13:26:00 crc kubenswrapper[4897]: I0228 13:26:00.142929 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 13:26:00 crc kubenswrapper[4897]: I0228 13:26:00.143121 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 13:26:00 crc kubenswrapper[4897]: I0228 13:26:00.151686 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538086-qkgqc"] Feb 28 13:26:00 crc kubenswrapper[4897]: I0228 13:26:00.227875 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mb5q\" (UniqueName: \"kubernetes.io/projected/480b2ad8-c8f7-479c-850b-c49aae2ed568-kube-api-access-5mb5q\") pod \"auto-csr-approver-29538086-qkgqc\" (UID: \"480b2ad8-c8f7-479c-850b-c49aae2ed568\") " pod="openshift-infra/auto-csr-approver-29538086-qkgqc" Feb 28 13:26:00 crc kubenswrapper[4897]: I0228 13:26:00.328949 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mb5q\" (UniqueName: \"kubernetes.io/projected/480b2ad8-c8f7-479c-850b-c49aae2ed568-kube-api-access-5mb5q\") pod \"auto-csr-approver-29538086-qkgqc\" (UID: \"480b2ad8-c8f7-479c-850b-c49aae2ed568\") " pod="openshift-infra/auto-csr-approver-29538086-qkgqc" Feb 28 13:26:00 crc kubenswrapper[4897]: I0228 13:26:00.351296 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mb5q\" (UniqueName: \"kubernetes.io/projected/480b2ad8-c8f7-479c-850b-c49aae2ed568-kube-api-access-5mb5q\") pod \"auto-csr-approver-29538086-qkgqc\" (UID: \"480b2ad8-c8f7-479c-850b-c49aae2ed568\") " 
pod="openshift-infra/auto-csr-approver-29538086-qkgqc" Feb 28 13:26:00 crc kubenswrapper[4897]: I0228 13:26:00.462499 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538086-qkgqc" Feb 28 13:26:00 crc kubenswrapper[4897]: I0228 13:26:00.936210 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538086-qkgqc"] Feb 28 13:26:00 crc kubenswrapper[4897]: I0228 13:26:00.950701 4897 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 13:26:01 crc kubenswrapper[4897]: I0228 13:26:01.680357 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538086-qkgqc" event={"ID":"480b2ad8-c8f7-479c-850b-c49aae2ed568","Type":"ContainerStarted","Data":"cdc9c44d8f727a1acf7e6e710cca19d063711037bf2fd7967424c535a2a1b6aa"} Feb 28 13:26:02 crc kubenswrapper[4897]: I0228 13:26:02.688719 4897 generic.go:334] "Generic (PLEG): container finished" podID="480b2ad8-c8f7-479c-850b-c49aae2ed568" containerID="9318decd20936c1121212d230c92e977ffa0c5aa0bffb21002d843fac853b8bb" exitCode=0 Feb 28 13:26:02 crc kubenswrapper[4897]: I0228 13:26:02.688767 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538086-qkgqc" event={"ID":"480b2ad8-c8f7-479c-850b-c49aae2ed568","Type":"ContainerDied","Data":"9318decd20936c1121212d230c92e977ffa0c5aa0bffb21002d843fac853b8bb"} Feb 28 13:26:03 crc kubenswrapper[4897]: I0228 13:26:03.964599 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538086-qkgqc" Feb 28 13:26:04 crc kubenswrapper[4897]: I0228 13:26:04.080960 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mb5q\" (UniqueName: \"kubernetes.io/projected/480b2ad8-c8f7-479c-850b-c49aae2ed568-kube-api-access-5mb5q\") pod \"480b2ad8-c8f7-479c-850b-c49aae2ed568\" (UID: \"480b2ad8-c8f7-479c-850b-c49aae2ed568\") " Feb 28 13:26:04 crc kubenswrapper[4897]: I0228 13:26:04.087519 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/480b2ad8-c8f7-479c-850b-c49aae2ed568-kube-api-access-5mb5q" (OuterVolumeSpecName: "kube-api-access-5mb5q") pod "480b2ad8-c8f7-479c-850b-c49aae2ed568" (UID: "480b2ad8-c8f7-479c-850b-c49aae2ed568"). InnerVolumeSpecName "kube-api-access-5mb5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:26:04 crc kubenswrapper[4897]: I0228 13:26:04.182754 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mb5q\" (UniqueName: \"kubernetes.io/projected/480b2ad8-c8f7-479c-850b-c49aae2ed568-kube-api-access-5mb5q\") on node \"crc\" DevicePath \"\"" Feb 28 13:26:04 crc kubenswrapper[4897]: I0228 13:26:04.705650 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538086-qkgqc" event={"ID":"480b2ad8-c8f7-479c-850b-c49aae2ed568","Type":"ContainerDied","Data":"cdc9c44d8f727a1acf7e6e710cca19d063711037bf2fd7967424c535a2a1b6aa"} Feb 28 13:26:04 crc kubenswrapper[4897]: I0228 13:26:04.706000 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdc9c44d8f727a1acf7e6e710cca19d063711037bf2fd7967424c535a2a1b6aa" Feb 28 13:26:04 crc kubenswrapper[4897]: I0228 13:26:04.705702 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538086-qkgqc" Feb 28 13:26:05 crc kubenswrapper[4897]: I0228 13:26:05.039545 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538080-qcrrw"] Feb 28 13:26:05 crc kubenswrapper[4897]: I0228 13:26:05.045744 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538080-qcrrw"] Feb 28 13:26:06 crc kubenswrapper[4897]: I0228 13:26:06.467744 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52c7385-4178-4038-93b0-5cd758958e80" path="/var/lib/kubelet/pods/a52c7385-4178-4038-93b0-5cd758958e80/volumes" Feb 28 13:26:33 crc kubenswrapper[4897]: I0228 13:26:33.371073 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 13:26:33 crc kubenswrapper[4897]: I0228 13:26:33.371730 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 13:27:03 crc kubenswrapper[4897]: I0228 13:27:03.371447 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 13:27:03 crc kubenswrapper[4897]: I0228 13:27:03.372101 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" 
podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 13:27:26 crc kubenswrapper[4897]: I0228 13:27:26.992185 4897 scope.go:117] "RemoveContainer" containerID="81f88a37fe90da7973932a2d58c459ef49b3e4d51447e7d2ceb262c276716b5a" Feb 28 13:27:27 crc kubenswrapper[4897]: I0228 13:27:27.045354 4897 scope.go:117] "RemoveContainer" containerID="2f2df56771d11b40b5190befed758d1b56777f1de09a0260ea556b12913cb5ba" Feb 28 13:27:33 crc kubenswrapper[4897]: I0228 13:27:33.372106 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 13:27:33 crc kubenswrapper[4897]: I0228 13:27:33.372947 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 13:27:33 crc kubenswrapper[4897]: I0228 13:27:33.373025 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brq22" Feb 28 13:27:33 crc kubenswrapper[4897]: I0228 13:27:33.373974 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f290e5ce6a8f9eb0ed11d10d65558a545341abdde26a9a87dc391672358e93e3"} pod="openshift-machine-config-operator/machine-config-daemon-brq22" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 13:27:33 crc kubenswrapper[4897]: I0228 
13:27:33.374040 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" containerID="cri-o://f290e5ce6a8f9eb0ed11d10d65558a545341abdde26a9a87dc391672358e93e3" gracePeriod=600 Feb 28 13:27:34 crc kubenswrapper[4897]: I0228 13:27:34.356677 4897 generic.go:334] "Generic (PLEG): container finished" podID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerID="f290e5ce6a8f9eb0ed11d10d65558a545341abdde26a9a87dc391672358e93e3" exitCode=0 Feb 28 13:27:34 crc kubenswrapper[4897]: I0228 13:27:34.356795 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerDied","Data":"f290e5ce6a8f9eb0ed11d10d65558a545341abdde26a9a87dc391672358e93e3"} Feb 28 13:27:34 crc kubenswrapper[4897]: I0228 13:27:34.357120 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerStarted","Data":"cfa26661db45aebf66711b46c418e18106a8f8b0c44a8fe4fe4cb2094fde5cf6"} Feb 28 13:27:34 crc kubenswrapper[4897]: I0228 13:27:34.357164 4897 scope.go:117] "RemoveContainer" containerID="2066137a00e095b0ce2896f3008520e157182f7fcabc5b0857bfc026f772801b" Feb 28 13:28:00 crc kubenswrapper[4897]: I0228 13:28:00.157856 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538088-4c9j2"] Feb 28 13:28:00 crc kubenswrapper[4897]: E0228 13:28:00.159998 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="480b2ad8-c8f7-479c-850b-c49aae2ed568" containerName="oc" Feb 28 13:28:00 crc kubenswrapper[4897]: I0228 13:28:00.160026 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="480b2ad8-c8f7-479c-850b-c49aae2ed568" containerName="oc" Feb 
28 13:28:00 crc kubenswrapper[4897]: I0228 13:28:00.160292 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="480b2ad8-c8f7-479c-850b-c49aae2ed568" containerName="oc" Feb 28 13:28:00 crc kubenswrapper[4897]: I0228 13:28:00.160995 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538088-4c9j2" Feb 28 13:28:00 crc kubenswrapper[4897]: I0228 13:28:00.164215 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 13:28:00 crc kubenswrapper[4897]: I0228 13:28:00.164230 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 13:28:00 crc kubenswrapper[4897]: I0228 13:28:00.164339 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 13:28:00 crc kubenswrapper[4897]: I0228 13:28:00.171776 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538088-4c9j2"] Feb 28 13:28:00 crc kubenswrapper[4897]: I0228 13:28:00.281112 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txd9d\" (UniqueName: \"kubernetes.io/projected/43e65966-94bd-4c6f-9e02-1d3f10577480-kube-api-access-txd9d\") pod \"auto-csr-approver-29538088-4c9j2\" (UID: \"43e65966-94bd-4c6f-9e02-1d3f10577480\") " pod="openshift-infra/auto-csr-approver-29538088-4c9j2" Feb 28 13:28:00 crc kubenswrapper[4897]: I0228 13:28:00.382812 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txd9d\" (UniqueName: \"kubernetes.io/projected/43e65966-94bd-4c6f-9e02-1d3f10577480-kube-api-access-txd9d\") pod \"auto-csr-approver-29538088-4c9j2\" (UID: \"43e65966-94bd-4c6f-9e02-1d3f10577480\") " pod="openshift-infra/auto-csr-approver-29538088-4c9j2" Feb 28 13:28:00 crc kubenswrapper[4897]: I0228 13:28:00.417551 
4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txd9d\" (UniqueName: \"kubernetes.io/projected/43e65966-94bd-4c6f-9e02-1d3f10577480-kube-api-access-txd9d\") pod \"auto-csr-approver-29538088-4c9j2\" (UID: \"43e65966-94bd-4c6f-9e02-1d3f10577480\") " pod="openshift-infra/auto-csr-approver-29538088-4c9j2" Feb 28 13:28:00 crc kubenswrapper[4897]: I0228 13:28:00.488853 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538088-4c9j2" Feb 28 13:28:00 crc kubenswrapper[4897]: I0228 13:28:00.722704 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538088-4c9j2"] Feb 28 13:28:01 crc kubenswrapper[4897]: I0228 13:28:01.541783 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538088-4c9j2" event={"ID":"43e65966-94bd-4c6f-9e02-1d3f10577480","Type":"ContainerStarted","Data":"b7c2f1e8cdeabdd2acc7acfaf4c7595ffaa6a7290217c86b79ec632515c2af4a"} Feb 28 13:28:02 crc kubenswrapper[4897]: I0228 13:28:02.552960 4897 generic.go:334] "Generic (PLEG): container finished" podID="43e65966-94bd-4c6f-9e02-1d3f10577480" containerID="725b8a96bd051a1221ce1b763a307d804053924ba7541f1c192d338920f8a395" exitCode=0 Feb 28 13:28:02 crc kubenswrapper[4897]: I0228 13:28:02.553046 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538088-4c9j2" event={"ID":"43e65966-94bd-4c6f-9e02-1d3f10577480","Type":"ContainerDied","Data":"725b8a96bd051a1221ce1b763a307d804053924ba7541f1c192d338920f8a395"} Feb 28 13:28:03 crc kubenswrapper[4897]: I0228 13:28:03.866128 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538088-4c9j2" Feb 28 13:28:03 crc kubenswrapper[4897]: I0228 13:28:03.931100 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txd9d\" (UniqueName: \"kubernetes.io/projected/43e65966-94bd-4c6f-9e02-1d3f10577480-kube-api-access-txd9d\") pod \"43e65966-94bd-4c6f-9e02-1d3f10577480\" (UID: \"43e65966-94bd-4c6f-9e02-1d3f10577480\") " Feb 28 13:28:03 crc kubenswrapper[4897]: I0228 13:28:03.939585 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43e65966-94bd-4c6f-9e02-1d3f10577480-kube-api-access-txd9d" (OuterVolumeSpecName: "kube-api-access-txd9d") pod "43e65966-94bd-4c6f-9e02-1d3f10577480" (UID: "43e65966-94bd-4c6f-9e02-1d3f10577480"). InnerVolumeSpecName "kube-api-access-txd9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:28:04 crc kubenswrapper[4897]: I0228 13:28:04.032703 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txd9d\" (UniqueName: \"kubernetes.io/projected/43e65966-94bd-4c6f-9e02-1d3f10577480-kube-api-access-txd9d\") on node \"crc\" DevicePath \"\"" Feb 28 13:28:04 crc kubenswrapper[4897]: I0228 13:28:04.567815 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538088-4c9j2" event={"ID":"43e65966-94bd-4c6f-9e02-1d3f10577480","Type":"ContainerDied","Data":"b7c2f1e8cdeabdd2acc7acfaf4c7595ffaa6a7290217c86b79ec632515c2af4a"} Feb 28 13:28:04 crc kubenswrapper[4897]: I0228 13:28:04.567860 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538088-4c9j2" Feb 28 13:28:04 crc kubenswrapper[4897]: I0228 13:28:04.567876 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7c2f1e8cdeabdd2acc7acfaf4c7595ffaa6a7290217c86b79ec632515c2af4a" Feb 28 13:28:04 crc kubenswrapper[4897]: I0228 13:28:04.941146 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538082-lxmsk"] Feb 28 13:28:04 crc kubenswrapper[4897]: I0228 13:28:04.943555 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538082-lxmsk"] Feb 28 13:28:06 crc kubenswrapper[4897]: I0228 13:28:06.483472 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3c2c910-1291-4a68-9fb2-85055cd61b9f" path="/var/lib/kubelet/pods/a3c2c910-1291-4a68-9fb2-85055cd61b9f/volumes" Feb 28 13:29:27 crc kubenswrapper[4897]: I0228 13:29:27.144548 4897 scope.go:117] "RemoveContainer" containerID="e596477865a8cbb823e491318c64006a0e6362865e601e66cf338c01e46f7613" Feb 28 13:29:32 crc kubenswrapper[4897]: I0228 13:29:32.136020 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-f4grq"] Feb 28 13:29:32 crc kubenswrapper[4897]: E0228 13:29:32.136920 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43e65966-94bd-4c6f-9e02-1d3f10577480" containerName="oc" Feb 28 13:29:32 crc kubenswrapper[4897]: I0228 13:29:32.136936 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="43e65966-94bd-4c6f-9e02-1d3f10577480" containerName="oc" Feb 28 13:29:32 crc kubenswrapper[4897]: I0228 13:29:32.137059 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="43e65966-94bd-4c6f-9e02-1d3f10577480" containerName="oc" Feb 28 13:29:32 crc kubenswrapper[4897]: I0228 13:29:32.137536 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-f4grq" Feb 28 13:29:32 crc kubenswrapper[4897]: I0228 13:29:32.140085 4897 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-pg46t" Feb 28 13:29:32 crc kubenswrapper[4897]: I0228 13:29:32.140232 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 28 13:29:32 crc kubenswrapper[4897]: I0228 13:29:32.152190 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-f4grq"] Feb 28 13:29:32 crc kubenswrapper[4897]: I0228 13:29:32.161644 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-ld6gk"] Feb 28 13:29:32 crc kubenswrapper[4897]: I0228 13:29:32.162709 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-ld6gk" Feb 28 13:29:32 crc kubenswrapper[4897]: I0228 13:29:32.170331 4897 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-txzkg" Feb 28 13:29:32 crc kubenswrapper[4897]: I0228 13:29:32.176012 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 28 13:29:32 crc kubenswrapper[4897]: I0228 13:29:32.186847 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-5vvcp"] Feb 28 13:29:32 crc kubenswrapper[4897]: I0228 13:29:32.189542 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-5vvcp" Feb 28 13:29:32 crc kubenswrapper[4897]: I0228 13:29:32.191648 4897 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-dmqkr" Feb 28 13:29:32 crc kubenswrapper[4897]: I0228 13:29:32.194766 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-ld6gk"] Feb 28 13:29:32 crc kubenswrapper[4897]: I0228 13:29:32.202925 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-5vvcp"] Feb 28 13:29:32 crc kubenswrapper[4897]: I0228 13:29:32.220074 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sltzk\" (UniqueName: \"kubernetes.io/projected/80a798fa-b6e2-4063-95a5-56c55dec24b0-kube-api-access-sltzk\") pod \"cert-manager-webhook-687f57d79b-5vvcp\" (UID: \"80a798fa-b6e2-4063-95a5-56c55dec24b0\") " pod="cert-manager/cert-manager-webhook-687f57d79b-5vvcp" Feb 28 13:29:32 crc kubenswrapper[4897]: I0228 13:29:32.220190 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vwr8\" (UniqueName: \"kubernetes.io/projected/575a7b09-2bc9-458a-bdbc-169241a67869-kube-api-access-5vwr8\") pod \"cert-manager-858654f9db-ld6gk\" (UID: \"575a7b09-2bc9-458a-bdbc-169241a67869\") " pod="cert-manager/cert-manager-858654f9db-ld6gk" Feb 28 13:29:32 crc kubenswrapper[4897]: I0228 13:29:32.220224 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knszb\" (UniqueName: \"kubernetes.io/projected/b868e69f-c259-4f0e-9f12-7b0be2e26d03-kube-api-access-knszb\") pod \"cert-manager-cainjector-cf98fcc89-f4grq\" (UID: \"b868e69f-c259-4f0e-9f12-7b0be2e26d03\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-f4grq" Feb 28 13:29:32 crc kubenswrapper[4897]: I0228 13:29:32.321905 4897 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vwr8\" (UniqueName: \"kubernetes.io/projected/575a7b09-2bc9-458a-bdbc-169241a67869-kube-api-access-5vwr8\") pod \"cert-manager-858654f9db-ld6gk\" (UID: \"575a7b09-2bc9-458a-bdbc-169241a67869\") " pod="cert-manager/cert-manager-858654f9db-ld6gk" Feb 28 13:29:32 crc kubenswrapper[4897]: I0228 13:29:32.321968 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knszb\" (UniqueName: \"kubernetes.io/projected/b868e69f-c259-4f0e-9f12-7b0be2e26d03-kube-api-access-knszb\") pod \"cert-manager-cainjector-cf98fcc89-f4grq\" (UID: \"b868e69f-c259-4f0e-9f12-7b0be2e26d03\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-f4grq" Feb 28 13:29:32 crc kubenswrapper[4897]: I0228 13:29:32.322010 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sltzk\" (UniqueName: \"kubernetes.io/projected/80a798fa-b6e2-4063-95a5-56c55dec24b0-kube-api-access-sltzk\") pod \"cert-manager-webhook-687f57d79b-5vvcp\" (UID: \"80a798fa-b6e2-4063-95a5-56c55dec24b0\") " pod="cert-manager/cert-manager-webhook-687f57d79b-5vvcp" Feb 28 13:29:32 crc kubenswrapper[4897]: I0228 13:29:32.341247 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knszb\" (UniqueName: \"kubernetes.io/projected/b868e69f-c259-4f0e-9f12-7b0be2e26d03-kube-api-access-knszb\") pod \"cert-manager-cainjector-cf98fcc89-f4grq\" (UID: \"b868e69f-c259-4f0e-9f12-7b0be2e26d03\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-f4grq" Feb 28 13:29:32 crc kubenswrapper[4897]: I0228 13:29:32.342272 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sltzk\" (UniqueName: \"kubernetes.io/projected/80a798fa-b6e2-4063-95a5-56c55dec24b0-kube-api-access-sltzk\") pod \"cert-manager-webhook-687f57d79b-5vvcp\" (UID: \"80a798fa-b6e2-4063-95a5-56c55dec24b0\") " 
pod="cert-manager/cert-manager-webhook-687f57d79b-5vvcp" Feb 28 13:29:32 crc kubenswrapper[4897]: I0228 13:29:32.346745 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vwr8\" (UniqueName: \"kubernetes.io/projected/575a7b09-2bc9-458a-bdbc-169241a67869-kube-api-access-5vwr8\") pod \"cert-manager-858654f9db-ld6gk\" (UID: \"575a7b09-2bc9-458a-bdbc-169241a67869\") " pod="cert-manager/cert-manager-858654f9db-ld6gk" Feb 28 13:29:32 crc kubenswrapper[4897]: I0228 13:29:32.477468 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-f4grq" Feb 28 13:29:32 crc kubenswrapper[4897]: I0228 13:29:32.485385 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-ld6gk" Feb 28 13:29:32 crc kubenswrapper[4897]: I0228 13:29:32.502816 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-5vvcp" Feb 28 13:29:32 crc kubenswrapper[4897]: I0228 13:29:32.844340 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-5vvcp"] Feb 28 13:29:32 crc kubenswrapper[4897]: I0228 13:29:32.968408 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-ld6gk"] Feb 28 13:29:32 crc kubenswrapper[4897]: W0228 13:29:32.973034 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod575a7b09_2bc9_458a_bdbc_169241a67869.slice/crio-0f28ccb57acb5e7516cf567e5a3b91e9cf053ce72e899de9c72ac844fda5ac4f WatchSource:0}: Error finding container 0f28ccb57acb5e7516cf567e5a3b91e9cf053ce72e899de9c72ac844fda5ac4f: Status 404 returned error can't find the container with id 0f28ccb57acb5e7516cf567e5a3b91e9cf053ce72e899de9c72ac844fda5ac4f Feb 28 13:29:33 crc kubenswrapper[4897]: I0228 13:29:33.009945 4897 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-f4grq"] Feb 28 13:29:33 crc kubenswrapper[4897]: W0228 13:29:33.017929 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb868e69f_c259_4f0e_9f12_7b0be2e26d03.slice/crio-9a45dcbbe44b51ec819deb4ee0c1f14437b081b8d8f30461141f998830cac0d6 WatchSource:0}: Error finding container 9a45dcbbe44b51ec819deb4ee0c1f14437b081b8d8f30461141f998830cac0d6: Status 404 returned error can't find the container with id 9a45dcbbe44b51ec819deb4ee0c1f14437b081b8d8f30461141f998830cac0d6 Feb 28 13:29:33 crc kubenswrapper[4897]: I0228 13:29:33.209000 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-ld6gk" event={"ID":"575a7b09-2bc9-458a-bdbc-169241a67869","Type":"ContainerStarted","Data":"0f28ccb57acb5e7516cf567e5a3b91e9cf053ce72e899de9c72ac844fda5ac4f"} Feb 28 13:29:33 crc kubenswrapper[4897]: I0228 13:29:33.210597 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-5vvcp" event={"ID":"80a798fa-b6e2-4063-95a5-56c55dec24b0","Type":"ContainerStarted","Data":"f6645714ff7f0003f4163f6458aed679208f598e5e01583fac62f35b163696a4"} Feb 28 13:29:33 crc kubenswrapper[4897]: I0228 13:29:33.212553 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-f4grq" event={"ID":"b868e69f-c259-4f0e-9f12-7b0be2e26d03","Type":"ContainerStarted","Data":"9a45dcbbe44b51ec819deb4ee0c1f14437b081b8d8f30461141f998830cac0d6"} Feb 28 13:29:33 crc kubenswrapper[4897]: I0228 13:29:33.371541 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 13:29:33 crc 
kubenswrapper[4897]: I0228 13:29:33.371589 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 13:29:38 crc kubenswrapper[4897]: I0228 13:29:38.247765 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-ld6gk" event={"ID":"575a7b09-2bc9-458a-bdbc-169241a67869","Type":"ContainerStarted","Data":"224a4f4cb95f9692b427719c81c897285baaeab19bd5b5ccbcc01eba44437dfc"} Feb 28 13:29:38 crc kubenswrapper[4897]: I0228 13:29:38.253788 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-5vvcp" event={"ID":"80a798fa-b6e2-4063-95a5-56c55dec24b0","Type":"ContainerStarted","Data":"dabf8b250c777df3d8db4fba3ac2f240dc5b5176e09c26d76501dc7961e36cf3"} Feb 28 13:29:38 crc kubenswrapper[4897]: I0228 13:29:38.253946 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-5vvcp" Feb 28 13:29:38 crc kubenswrapper[4897]: I0228 13:29:38.256136 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-f4grq" event={"ID":"b868e69f-c259-4f0e-9f12-7b0be2e26d03","Type":"ContainerStarted","Data":"c5b8c11aaaa2bb9b96cc7f714937dd57ae3baf658991fc95c31e0fc1f1aaefab"} Feb 28 13:29:38 crc kubenswrapper[4897]: I0228 13:29:38.272578 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-ld6gk" podStartSLOduration=2.074160932 podStartE2EDuration="6.272547355s" podCreationTimestamp="2026-02-28 13:29:32 +0000 UTC" firstStartedPulling="2026-02-28 13:29:32.975114044 +0000 UTC m=+787.217434741" lastFinishedPulling="2026-02-28 13:29:37.173500507 +0000 UTC m=+791.415821164" 
observedRunningTime="2026-02-28 13:29:38.271749373 +0000 UTC m=+792.514070070" watchObservedRunningTime="2026-02-28 13:29:38.272547355 +0000 UTC m=+792.514868062" Feb 28 13:29:38 crc kubenswrapper[4897]: I0228 13:29:38.308062 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-f4grq" podStartSLOduration=2.080708863 podStartE2EDuration="6.308026951s" podCreationTimestamp="2026-02-28 13:29:32 +0000 UTC" firstStartedPulling="2026-02-28 13:29:33.021756117 +0000 UTC m=+787.264076814" lastFinishedPulling="2026-02-28 13:29:37.249074215 +0000 UTC m=+791.491394902" observedRunningTime="2026-02-28 13:29:38.301988595 +0000 UTC m=+792.544309332" watchObservedRunningTime="2026-02-28 13:29:38.308026951 +0000 UTC m=+792.550347648" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.315507 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-5vvcp" podStartSLOduration=5.995028642 podStartE2EDuration="10.315455491s" podCreationTimestamp="2026-02-28 13:29:32 +0000 UTC" firstStartedPulling="2026-02-28 13:29:32.851934826 +0000 UTC m=+787.094255493" lastFinishedPulling="2026-02-28 13:29:37.172361685 +0000 UTC m=+791.414682342" observedRunningTime="2026-02-28 13:29:38.339233999 +0000 UTC m=+792.581554666" watchObservedRunningTime="2026-02-28 13:29:42.315455491 +0000 UTC m=+796.557776178" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.317077 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rjlcm"] Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.317763 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="ovn-controller" containerID="cri-o://b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256" gracePeriod=30 Feb 28 13:29:42 crc 
kubenswrapper[4897]: I0228 13:29:42.317817 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="sbdb" containerID="cri-o://e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b" gracePeriod=30 Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.317855 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420" gracePeriod=30 Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.317925 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="nbdb" containerID="cri-o://2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf" gracePeriod=30 Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.318016 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="kube-rbac-proxy-node" containerID="cri-o://417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71" gracePeriod=30 Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.318034 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="northd" containerID="cri-o://31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb" gracePeriod=30 Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.318042 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" 
podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="ovn-acl-logging" containerID="cri-o://5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48" gracePeriod=30 Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.390645 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="ovnkube-controller" containerID="cri-o://05719f06786b6a9df5858de554fd63fbabbe5dd22bc9d5dafc4bacd47f0eb0de" gracePeriod=30 Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.505828 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-5vvcp" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.708920 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rjlcm_0e63af1c-1b83-44b6-9872-2dfefa37d433/ovnkube-controller/3.log" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.713738 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rjlcm_0e63af1c-1b83-44b6-9872-2dfefa37d433/ovn-acl-logging/0.log" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.714548 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rjlcm_0e63af1c-1b83-44b6-9872-2dfefa37d433/ovn-controller/0.log" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.715476 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.788771 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-run-openvswitch\") pod \"0e63af1c-1b83-44b6-9872-2dfefa37d433\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.788864 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-node-log\") pod \"0e63af1c-1b83-44b6-9872-2dfefa37d433\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.788915 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-host-run-ovn-kubernetes\") pod \"0e63af1c-1b83-44b6-9872-2dfefa37d433\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.788969 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-run-ovn\") pod \"0e63af1c-1b83-44b6-9872-2dfefa37d433\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.789040 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-etc-openvswitch\") pod \"0e63af1c-1b83-44b6-9872-2dfefa37d433\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.789114 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0e63af1c-1b83-44b6-9872-2dfefa37d433-env-overrides\") pod \"0e63af1c-1b83-44b6-9872-2dfefa37d433\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.789163 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-host-kubelet\") pod \"0e63af1c-1b83-44b6-9872-2dfefa37d433\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.789231 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-systemd-units\") pod \"0e63af1c-1b83-44b6-9872-2dfefa37d433\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.789278 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-run-systemd\") pod \"0e63af1c-1b83-44b6-9872-2dfefa37d433\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.789375 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-host-cni-bin\") pod \"0e63af1c-1b83-44b6-9872-2dfefa37d433\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.789458 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-host-run-netns\") pod \"0e63af1c-1b83-44b6-9872-2dfefa37d433\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " Feb 28 13:29:42 crc 
kubenswrapper[4897]: I0228 13:29:42.789532 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0e63af1c-1b83-44b6-9872-2dfefa37d433-ovn-node-metrics-cert\") pod \"0e63af1c-1b83-44b6-9872-2dfefa37d433\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.789589 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-log-socket\") pod \"0e63af1c-1b83-44b6-9872-2dfefa37d433\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.789647 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-var-lib-openvswitch\") pod \"0e63af1c-1b83-44b6-9872-2dfefa37d433\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.789698 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-host-var-lib-cni-networks-ovn-kubernetes\") pod \"0e63af1c-1b83-44b6-9872-2dfefa37d433\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.789763 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0e63af1c-1b83-44b6-9872-2dfefa37d433-ovnkube-script-lib\") pod \"0e63af1c-1b83-44b6-9872-2dfefa37d433\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.789806 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-host-slash\") pod \"0e63af1c-1b83-44b6-9872-2dfefa37d433\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.789892 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwbfw\" (UniqueName: \"kubernetes.io/projected/0e63af1c-1b83-44b6-9872-2dfefa37d433-kube-api-access-gwbfw\") pod \"0e63af1c-1b83-44b6-9872-2dfefa37d433\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.789959 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0e63af1c-1b83-44b6-9872-2dfefa37d433-ovnkube-config\") pod \"0e63af1c-1b83-44b6-9872-2dfefa37d433\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.790010 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-host-cni-netd\") pod \"0e63af1c-1b83-44b6-9872-2dfefa37d433\" (UID: \"0e63af1c-1b83-44b6-9872-2dfefa37d433\") " Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.790475 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "0e63af1c-1b83-44b6-9872-2dfefa37d433" (UID: "0e63af1c-1b83-44b6-9872-2dfefa37d433"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.790534 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "0e63af1c-1b83-44b6-9872-2dfefa37d433" (UID: "0e63af1c-1b83-44b6-9872-2dfefa37d433"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.790566 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-node-log" (OuterVolumeSpecName: "node-log") pod "0e63af1c-1b83-44b6-9872-2dfefa37d433" (UID: "0e63af1c-1b83-44b6-9872-2dfefa37d433"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.790545 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "0e63af1c-1b83-44b6-9872-2dfefa37d433" (UID: "0e63af1c-1b83-44b6-9872-2dfefa37d433"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.790600 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "0e63af1c-1b83-44b6-9872-2dfefa37d433" (UID: "0e63af1c-1b83-44b6-9872-2dfefa37d433"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.790603 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "0e63af1c-1b83-44b6-9872-2dfefa37d433" (UID: "0e63af1c-1b83-44b6-9872-2dfefa37d433"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.790622 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "0e63af1c-1b83-44b6-9872-2dfefa37d433" (UID: "0e63af1c-1b83-44b6-9872-2dfefa37d433"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.790649 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "0e63af1c-1b83-44b6-9872-2dfefa37d433" (UID: "0e63af1c-1b83-44b6-9872-2dfefa37d433"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.791152 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "0e63af1c-1b83-44b6-9872-2dfefa37d433" (UID: "0e63af1c-1b83-44b6-9872-2dfefa37d433"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.791238 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-host-slash" (OuterVolumeSpecName: "host-slash") pod "0e63af1c-1b83-44b6-9872-2dfefa37d433" (UID: "0e63af1c-1b83-44b6-9872-2dfefa37d433"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.791247 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "0e63af1c-1b83-44b6-9872-2dfefa37d433" (UID: "0e63af1c-1b83-44b6-9872-2dfefa37d433"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.791279 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-log-socket" (OuterVolumeSpecName: "log-socket") pod "0e63af1c-1b83-44b6-9872-2dfefa37d433" (UID: "0e63af1c-1b83-44b6-9872-2dfefa37d433"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.791348 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "0e63af1c-1b83-44b6-9872-2dfefa37d433" (UID: "0e63af1c-1b83-44b6-9872-2dfefa37d433"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.791387 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "0e63af1c-1b83-44b6-9872-2dfefa37d433" (UID: "0e63af1c-1b83-44b6-9872-2dfefa37d433"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.791691 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e63af1c-1b83-44b6-9872-2dfefa37d433-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "0e63af1c-1b83-44b6-9872-2dfefa37d433" (UID: "0e63af1c-1b83-44b6-9872-2dfefa37d433"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.791726 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e63af1c-1b83-44b6-9872-2dfefa37d433-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "0e63af1c-1b83-44b6-9872-2dfefa37d433" (UID: "0e63af1c-1b83-44b6-9872-2dfefa37d433"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.791850 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e63af1c-1b83-44b6-9872-2dfefa37d433-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "0e63af1c-1b83-44b6-9872-2dfefa37d433" (UID: "0e63af1c-1b83-44b6-9872-2dfefa37d433"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.792904 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-h2jck"] Feb 28 13:29:42 crc kubenswrapper[4897]: E0228 13:29:42.793149 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="kubecfg-setup" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.793167 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="kubecfg-setup" Feb 28 13:29:42 crc kubenswrapper[4897]: E0228 13:29:42.793179 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="ovnkube-controller" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.793187 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="ovnkube-controller" Feb 28 13:29:42 crc kubenswrapper[4897]: E0228 13:29:42.793198 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="ovnkube-controller" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.793206 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="ovnkube-controller" Feb 28 13:29:42 crc kubenswrapper[4897]: E0228 13:29:42.793217 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="sbdb" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.793225 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="sbdb" Feb 28 13:29:42 crc kubenswrapper[4897]: E0228 13:29:42.793234 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="nbdb" Feb 28 
13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.793241 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="nbdb" Feb 28 13:29:42 crc kubenswrapper[4897]: E0228 13:29:42.793260 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="kube-rbac-proxy-ovn-metrics" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.793268 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="kube-rbac-proxy-ovn-metrics" Feb 28 13:29:42 crc kubenswrapper[4897]: E0228 13:29:42.793278 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="ovn-acl-logging" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.793285 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="ovn-acl-logging" Feb 28 13:29:42 crc kubenswrapper[4897]: E0228 13:29:42.793297 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="kube-rbac-proxy-node" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.793362 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="kube-rbac-proxy-node" Feb 28 13:29:42 crc kubenswrapper[4897]: E0228 13:29:42.793376 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="northd" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.793385 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="northd" Feb 28 13:29:42 crc kubenswrapper[4897]: E0228 13:29:42.793392 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="ovnkube-controller" Feb 28 
13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.793400 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="ovnkube-controller" Feb 28 13:29:42 crc kubenswrapper[4897]: E0228 13:29:42.793412 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="ovn-controller" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.793421 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="ovn-controller" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.793549 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="kube-rbac-proxy-ovn-metrics" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.793562 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="sbdb" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.793580 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="northd" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.793588 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="ovn-controller" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.793598 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="ovn-acl-logging" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.793608 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="ovnkube-controller" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.793619 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" 
containerName="kube-rbac-proxy-node" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.793626 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="nbdb" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.793636 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="ovnkube-controller" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.793643 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="ovnkube-controller" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.793651 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="ovnkube-controller" Feb 28 13:29:42 crc kubenswrapper[4897]: E0228 13:29:42.793767 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="ovnkube-controller" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.793775 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="ovnkube-controller" Feb 28 13:29:42 crc kubenswrapper[4897]: E0228 13:29:42.793784 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="ovnkube-controller" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.793791 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="ovnkube-controller" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.793908 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerName="ovnkube-controller" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.799270 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.799460 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e63af1c-1b83-44b6-9872-2dfefa37d433-kube-api-access-gwbfw" (OuterVolumeSpecName: "kube-api-access-gwbfw") pod "0e63af1c-1b83-44b6-9872-2dfefa37d433" (UID: "0e63af1c-1b83-44b6-9872-2dfefa37d433"). InnerVolumeSpecName "kube-api-access-gwbfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.808280 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e63af1c-1b83-44b6-9872-2dfefa37d433-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "0e63af1c-1b83-44b6-9872-2dfefa37d433" (UID: "0e63af1c-1b83-44b6-9872-2dfefa37d433"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.816672 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "0e63af1c-1b83-44b6-9872-2dfefa37d433" (UID: "0e63af1c-1b83-44b6-9872-2dfefa37d433"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.892609 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.892696 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/929fc231-393a-4a5e-ab08-1777c6879256-ovnkube-script-lib\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.892750 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-node-log\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.892807 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-host-cni-bin\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.892851 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-host-run-ovn-kubernetes\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.892887 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-host-run-netns\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.893000 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-log-socket\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.893060 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-host-slash\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.893229 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/929fc231-393a-4a5e-ab08-1777c6879256-env-overrides\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.893290 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-host-cni-netd\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.893395 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qsqm\" (UniqueName: \"kubernetes.io/projected/929fc231-393a-4a5e-ab08-1777c6879256-kube-api-access-5qsqm\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.893423 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-systemd-units\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.893483 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/929fc231-393a-4a5e-ab08-1777c6879256-ovn-node-metrics-cert\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.893680 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-run-openvswitch\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.893796 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/929fc231-393a-4a5e-ab08-1777c6879256-ovnkube-config\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.893879 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-run-ovn\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.893957 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-run-systemd\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.894019 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-etc-openvswitch\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.894060 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-var-lib-openvswitch\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.894116 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-host-kubelet\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.894273 4897 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.894301 4897 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0e63af1c-1b83-44b6-9872-2dfefa37d433-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.894350 4897 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.894369 4897 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.894386 4897 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.894407 4897 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.894424 4897 
reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.894446 4897 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0e63af1c-1b83-44b6-9872-2dfefa37d433-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.894471 4897 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-log-socket\") on node \"crc\" DevicePath \"\"" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.894489 4897 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.894509 4897 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.894528 4897 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0e63af1c-1b83-44b6-9872-2dfefa37d433-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.894548 4897 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-host-slash\") on node \"crc\" DevicePath \"\"" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.894568 4897 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-gwbfw\" (UniqueName: \"kubernetes.io/projected/0e63af1c-1b83-44b6-9872-2dfefa37d433-kube-api-access-gwbfw\") on node \"crc\" DevicePath \"\"" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.894586 4897 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0e63af1c-1b83-44b6-9872-2dfefa37d433-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.894604 4897 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.894621 4897 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.894639 4897 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-node-log\") on node \"crc\" DevicePath \"\"" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.894658 4897 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.894676 4897 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0e63af1c-1b83-44b6-9872-2dfefa37d433-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.996104 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-var-lib-openvswitch\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.996234 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-host-kubelet\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.996281 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.996375 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/929fc231-393a-4a5e-ab08-1777c6879256-ovnkube-script-lib\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.996366 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-var-lib-openvswitch\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.996422 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-node-log\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.996430 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-host-kubelet\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.996490 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-node-log\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.996508 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-host-cni-bin\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.996561 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-host-run-ovn-kubernetes\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.996600 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-host-run-netns\") pod 
\"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.996654 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-log-socket\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.996690 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-host-slash\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.996661 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.996760 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-host-cni-bin\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.996702 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-host-run-ovn-kubernetes\") pod \"ovnkube-node-h2jck\" (UID: 
\"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.996827 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-host-slash\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.996803 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-host-run-netns\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.996730 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/929fc231-393a-4a5e-ab08-1777c6879256-env-overrides\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.996974 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-host-cni-netd\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.996744 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-log-socket\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc 
kubenswrapper[4897]: I0228 13:29:42.997048 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qsqm\" (UniqueName: \"kubernetes.io/projected/929fc231-393a-4a5e-ab08-1777c6879256-kube-api-access-5qsqm\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.997080 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-host-cni-netd\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.997085 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-systemd-units\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.997140 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-systemd-units\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.997169 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/929fc231-393a-4a5e-ab08-1777c6879256-ovn-node-metrics-cert\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.997214 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-run-openvswitch\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.997252 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/929fc231-393a-4a5e-ab08-1777c6879256-ovnkube-config\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.997337 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-run-ovn\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.997392 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-run-systemd\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.997453 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-etc-openvswitch\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.997557 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-etc-openvswitch\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.998089 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/929fc231-393a-4a5e-ab08-1777c6879256-ovnkube-script-lib\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.998177 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-run-ovn\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.998228 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-run-systemd\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.998276 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/929fc231-393a-4a5e-ab08-1777c6879256-run-openvswitch\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.998843 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/929fc231-393a-4a5e-ab08-1777c6879256-env-overrides\") pod \"ovnkube-node-h2jck\" (UID: 
\"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:42 crc kubenswrapper[4897]: I0228 13:29:42.998867 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/929fc231-393a-4a5e-ab08-1777c6879256-ovnkube-config\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.002405 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/929fc231-393a-4a5e-ab08-1777c6879256-ovn-node-metrics-cert\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.022234 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qsqm\" (UniqueName: \"kubernetes.io/projected/929fc231-393a-4a5e-ab08-1777c6879256-kube-api-access-5qsqm\") pod \"ovnkube-node-h2jck\" (UID: \"929fc231-393a-4a5e-ab08-1777c6879256\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.143898 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.298836 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rjlcm_0e63af1c-1b83-44b6-9872-2dfefa37d433/ovnkube-controller/3.log" Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.305372 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rjlcm_0e63af1c-1b83-44b6-9872-2dfefa37d433/ovn-acl-logging/0.log" Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.306133 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rjlcm_0e63af1c-1b83-44b6-9872-2dfefa37d433/ovn-controller/0.log" Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.306757 4897 generic.go:334] "Generic (PLEG): container finished" podID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerID="05719f06786b6a9df5858de554fd63fbabbe5dd22bc9d5dafc4bacd47f0eb0de" exitCode=0 Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.306792 4897 generic.go:334] "Generic (PLEG): container finished" podID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerID="e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b" exitCode=0 Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.306807 4897 generic.go:334] "Generic (PLEG): container finished" podID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerID="2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf" exitCode=0 Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.306820 4897 generic.go:334] "Generic (PLEG): container finished" podID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerID="31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb" exitCode=0 Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.306831 4897 generic.go:334] "Generic (PLEG): container finished" podID="0e63af1c-1b83-44b6-9872-2dfefa37d433" 
containerID="e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420" exitCode=0 Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.306842 4897 generic.go:334] "Generic (PLEG): container finished" podID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerID="417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71" exitCode=0 Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.306855 4897 generic.go:334] "Generic (PLEG): container finished" podID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerID="5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48" exitCode=143 Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.306870 4897 generic.go:334] "Generic (PLEG): container finished" podID="0e63af1c-1b83-44b6-9872-2dfefa37d433" containerID="b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256" exitCode=143 Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.306864 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" event={"ID":"0e63af1c-1b83-44b6-9872-2dfefa37d433","Type":"ContainerDied","Data":"05719f06786b6a9df5858de554fd63fbabbe5dd22bc9d5dafc4bacd47f0eb0de"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.306920 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" event={"ID":"0e63af1c-1b83-44b6-9872-2dfefa37d433","Type":"ContainerDied","Data":"e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.306934 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.306954 4897 scope.go:117] "RemoveContainer" containerID="05719f06786b6a9df5858de554fd63fbabbe5dd22bc9d5dafc4bacd47f0eb0de" Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.306942 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" event={"ID":"0e63af1c-1b83-44b6-9872-2dfefa37d433","Type":"ContainerDied","Data":"2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.307081 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" event={"ID":"0e63af1c-1b83-44b6-9872-2dfefa37d433","Type":"ContainerDied","Data":"31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.307106 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" event={"ID":"0e63af1c-1b83-44b6-9872-2dfefa37d433","Type":"ContainerDied","Data":"e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.307127 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" event={"ID":"0e63af1c-1b83-44b6-9872-2dfefa37d433","Type":"ContainerDied","Data":"417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.307144 4897 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.307159 4897 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.307170 4897 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.307180 4897 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.307190 4897 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.307536 4897 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.307668 4897 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.307685 4897 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.307706 4897 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.307747 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" event={"ID":"0e63af1c-1b83-44b6-9872-2dfefa37d433","Type":"ContainerDied","Data":"5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.307796 4897 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"05719f06786b6a9df5858de554fd63fbabbe5dd22bc9d5dafc4bacd47f0eb0de"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.307809 4897 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.307816 4897 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.307824 4897 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.307830 4897 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.307837 4897 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.307844 4897 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 
13:29:43.307853 4897 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.307860 4897 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.307866 4897 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.307879 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" event={"ID":"0e63af1c-1b83-44b6-9872-2dfefa37d433","Type":"ContainerDied","Data":"b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.307946 4897 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"05719f06786b6a9df5858de554fd63fbabbe5dd22bc9d5dafc4bacd47f0eb0de"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.307955 4897 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.307961 4897 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.307967 4897 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.307973 4897 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.307979 4897 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.307984 4897 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.307990 4897 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.307996 4897 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.308002 4897 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.308014 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjlcm" event={"ID":"0e63af1c-1b83-44b6-9872-2dfefa37d433","Type":"ContainerDied","Data":"b069924cc749e31828179ef715e6bebc810118832df5c22416be266834d1b77c"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.308027 4897 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"05719f06786b6a9df5858de554fd63fbabbe5dd22bc9d5dafc4bacd47f0eb0de"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.308035 4897 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.308041 4897 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.308047 4897 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.308053 4897 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.308059 4897 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.308066 4897 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.308073 4897 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.308080 4897 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.308085 4897 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.314667 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-k4m7f_cd164967-b99b-47d0-a691-7d8118fa81ce/kube-multus/2.log" Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.315406 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-k4m7f_cd164967-b99b-47d0-a691-7d8118fa81ce/kube-multus/1.log" Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.315458 4897 generic.go:334] "Generic (PLEG): container finished" podID="cd164967-b99b-47d0-a691-7d8118fa81ce" containerID="3f09bce6157f789ce56ef9ba541b09f9f3f4564b8294d903a1065eaee6b33c56" exitCode=2 Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.315534 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-k4m7f" event={"ID":"cd164967-b99b-47d0-a691-7d8118fa81ce","Type":"ContainerDied","Data":"3f09bce6157f789ce56ef9ba541b09f9f3f4564b8294d903a1065eaee6b33c56"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.315563 4897 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"56214f971268dd96636e53f8cc401bfde201673331066f07d1235c5bd4fef3e5"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.316182 4897 scope.go:117] "RemoveContainer" containerID="3f09bce6157f789ce56ef9ba541b09f9f3f4564b8294d903a1065eaee6b33c56" Feb 28 13:29:43 crc kubenswrapper[4897]: E0228 13:29:43.316422 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-k4m7f_openshift-multus(cd164967-b99b-47d0-a691-7d8118fa81ce)\"" pod="openshift-multus/multus-k4m7f" podUID="cd164967-b99b-47d0-a691-7d8118fa81ce" Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.317619 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" event={"ID":"929fc231-393a-4a5e-ab08-1777c6879256","Type":"ContainerStarted","Data":"f99149765fb30097f4963d223baa90891e3fc59e1a337e9d5511ffff505ff23a"} Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.340692 4897 scope.go:117] "RemoveContainer" containerID="bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa" Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.364879 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rjlcm"] Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.388059 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rjlcm"] Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.404512 4897 scope.go:117] "RemoveContainer" containerID="e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b" Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.417349 4897 scope.go:117] "RemoveContainer" containerID="2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf" Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.433415 4897 scope.go:117] "RemoveContainer" containerID="31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb" Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.448767 4897 scope.go:117] "RemoveContainer" containerID="e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420" Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.551479 4897 scope.go:117] "RemoveContainer" containerID="417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71" Feb 28 13:29:43 
crc kubenswrapper[4897]: I0228 13:29:43.568753 4897 scope.go:117] "RemoveContainer" containerID="5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48" Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.584211 4897 scope.go:117] "RemoveContainer" containerID="b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256" Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.598889 4897 scope.go:117] "RemoveContainer" containerID="1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500" Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.616346 4897 scope.go:117] "RemoveContainer" containerID="05719f06786b6a9df5858de554fd63fbabbe5dd22bc9d5dafc4bacd47f0eb0de" Feb 28 13:29:43 crc kubenswrapper[4897]: E0228 13:29:43.618081 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05719f06786b6a9df5858de554fd63fbabbe5dd22bc9d5dafc4bacd47f0eb0de\": container with ID starting with 05719f06786b6a9df5858de554fd63fbabbe5dd22bc9d5dafc4bacd47f0eb0de not found: ID does not exist" containerID="05719f06786b6a9df5858de554fd63fbabbe5dd22bc9d5dafc4bacd47f0eb0de" Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.618150 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05719f06786b6a9df5858de554fd63fbabbe5dd22bc9d5dafc4bacd47f0eb0de"} err="failed to get container status \"05719f06786b6a9df5858de554fd63fbabbe5dd22bc9d5dafc4bacd47f0eb0de\": rpc error: code = NotFound desc = could not find container \"05719f06786b6a9df5858de554fd63fbabbe5dd22bc9d5dafc4bacd47f0eb0de\": container with ID starting with 05719f06786b6a9df5858de554fd63fbabbe5dd22bc9d5dafc4bacd47f0eb0de not found: ID does not exist" Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.618216 4897 scope.go:117] "RemoveContainer" containerID="bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa" Feb 28 13:29:43 crc kubenswrapper[4897]: E0228 13:29:43.618730 
4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa\": container with ID starting with bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa not found: ID does not exist" containerID="bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa" Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.618794 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa"} err="failed to get container status \"bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa\": rpc error: code = NotFound desc = could not find container \"bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa\": container with ID starting with bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa not found: ID does not exist" Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.618840 4897 scope.go:117] "RemoveContainer" containerID="e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b" Feb 28 13:29:43 crc kubenswrapper[4897]: E0228 13:29:43.619212 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b\": container with ID starting with e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b not found: ID does not exist" containerID="e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b" Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.619297 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b"} err="failed to get container status \"e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b\": rpc error: code = 
NotFound desc = could not find container \"e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b\": container with ID starting with e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b not found: ID does not exist" Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.619357 4897 scope.go:117] "RemoveContainer" containerID="2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf" Feb 28 13:29:43 crc kubenswrapper[4897]: E0228 13:29:43.619647 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf\": container with ID starting with 2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf not found: ID does not exist" containerID="2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf" Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.619685 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf"} err="failed to get container status \"2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf\": rpc error: code = NotFound desc = could not find container \"2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf\": container with ID starting with 2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf not found: ID does not exist" Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.619710 4897 scope.go:117] "RemoveContainer" containerID="31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb" Feb 28 13:29:43 crc kubenswrapper[4897]: E0228 13:29:43.619978 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb\": container with ID starting with 
31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb not found: ID does not exist" containerID="31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.620015 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb"} err="failed to get container status \"31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb\": rpc error: code = NotFound desc = could not find container \"31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb\": container with ID starting with 31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb not found: ID does not exist"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.620040 4897 scope.go:117] "RemoveContainer" containerID="e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420"
Feb 28 13:29:43 crc kubenswrapper[4897]: E0228 13:29:43.620281 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420\": container with ID starting with e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420 not found: ID does not exist" containerID="e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.620356 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420"} err="failed to get container status \"e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420\": rpc error: code = NotFound desc = could not find container \"e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420\": container with ID starting with e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420 not found: ID does not exist"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.620382 4897 scope.go:117] "RemoveContainer" containerID="417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71"
Feb 28 13:29:43 crc kubenswrapper[4897]: E0228 13:29:43.620620 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71\": container with ID starting with 417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71 not found: ID does not exist" containerID="417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.620658 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71"} err="failed to get container status \"417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71\": rpc error: code = NotFound desc = could not find container \"417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71\": container with ID starting with 417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71 not found: ID does not exist"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.620684 4897 scope.go:117] "RemoveContainer" containerID="5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48"
Feb 28 13:29:43 crc kubenswrapper[4897]: E0228 13:29:43.620913 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48\": container with ID starting with 5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48 not found: ID does not exist" containerID="5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.620950 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48"} err="failed to get container status \"5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48\": rpc error: code = NotFound desc = could not find container \"5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48\": container with ID starting with 5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48 not found: ID does not exist"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.620977 4897 scope.go:117] "RemoveContainer" containerID="b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256"
Feb 28 13:29:43 crc kubenswrapper[4897]: E0228 13:29:43.621238 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256\": container with ID starting with b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256 not found: ID does not exist" containerID="b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.621277 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256"} err="failed to get container status \"b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256\": rpc error: code = NotFound desc = could not find container \"b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256\": container with ID starting with b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256 not found: ID does not exist"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.621304 4897 scope.go:117] "RemoveContainer" containerID="1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500"
Feb 28 13:29:43 crc kubenswrapper[4897]: E0228 13:29:43.621587 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\": container with ID starting with 1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500 not found: ID does not exist" containerID="1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.621625 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500"} err="failed to get container status \"1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\": rpc error: code = NotFound desc = could not find container \"1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\": container with ID starting with 1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500 not found: ID does not exist"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.621650 4897 scope.go:117] "RemoveContainer" containerID="05719f06786b6a9df5858de554fd63fbabbe5dd22bc9d5dafc4bacd47f0eb0de"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.621873 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05719f06786b6a9df5858de554fd63fbabbe5dd22bc9d5dafc4bacd47f0eb0de"} err="failed to get container status \"05719f06786b6a9df5858de554fd63fbabbe5dd22bc9d5dafc4bacd47f0eb0de\": rpc error: code = NotFound desc = could not find container \"05719f06786b6a9df5858de554fd63fbabbe5dd22bc9d5dafc4bacd47f0eb0de\": container with ID starting with 05719f06786b6a9df5858de554fd63fbabbe5dd22bc9d5dafc4bacd47f0eb0de not found: ID does not exist"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.621907 4897 scope.go:117] "RemoveContainer" containerID="bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.622136 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa"} err="failed to get container status \"bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa\": rpc error: code = NotFound desc = could not find container \"bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa\": container with ID starting with bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa not found: ID does not exist"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.622168 4897 scope.go:117] "RemoveContainer" containerID="e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.622693 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b"} err="failed to get container status \"e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b\": rpc error: code = NotFound desc = could not find container \"e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b\": container with ID starting with e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b not found: ID does not exist"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.622735 4897 scope.go:117] "RemoveContainer" containerID="2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.622979 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf"} err="failed to get container status \"2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf\": rpc error: code = NotFound desc = could not find container \"2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf\": container with ID starting with 2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf not found: ID does not exist"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.623016 4897 scope.go:117] "RemoveContainer" containerID="31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.623282 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb"} err="failed to get container status \"31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb\": rpc error: code = NotFound desc = could not find container \"31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb\": container with ID starting with 31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb not found: ID does not exist"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.623326 4897 scope.go:117] "RemoveContainer" containerID="e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.623672 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420"} err="failed to get container status \"e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420\": rpc error: code = NotFound desc = could not find container \"e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420\": container with ID starting with e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420 not found: ID does not exist"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.623714 4897 scope.go:117] "RemoveContainer" containerID="417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.624009 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71"} err="failed to get container status \"417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71\": rpc error: code = NotFound desc = could not find container \"417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71\": container with ID starting with 417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71 not found: ID does not exist"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.624046 4897 scope.go:117] "RemoveContainer" containerID="5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.624330 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48"} err="failed to get container status \"5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48\": rpc error: code = NotFound desc = could not find container \"5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48\": container with ID starting with 5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48 not found: ID does not exist"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.624365 4897 scope.go:117] "RemoveContainer" containerID="b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.624622 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256"} err="failed to get container status \"b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256\": rpc error: code = NotFound desc = could not find container \"b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256\": container with ID starting with b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256 not found: ID does not exist"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.624657 4897 scope.go:117] "RemoveContainer" containerID="1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.624890 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500"} err="failed to get container status \"1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\": rpc error: code = NotFound desc = could not find container \"1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\": container with ID starting with 1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500 not found: ID does not exist"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.624921 4897 scope.go:117] "RemoveContainer" containerID="05719f06786b6a9df5858de554fd63fbabbe5dd22bc9d5dafc4bacd47f0eb0de"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.625410 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05719f06786b6a9df5858de554fd63fbabbe5dd22bc9d5dafc4bacd47f0eb0de"} err="failed to get container status \"05719f06786b6a9df5858de554fd63fbabbe5dd22bc9d5dafc4bacd47f0eb0de\": rpc error: code = NotFound desc = could not find container \"05719f06786b6a9df5858de554fd63fbabbe5dd22bc9d5dafc4bacd47f0eb0de\": container with ID starting with 05719f06786b6a9df5858de554fd63fbabbe5dd22bc9d5dafc4bacd47f0eb0de not found: ID does not exist"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.625445 4897 scope.go:117] "RemoveContainer" containerID="bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.625732 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa"} err="failed to get container status \"bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa\": rpc error: code = NotFound desc = could not find container \"bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa\": container with ID starting with bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa not found: ID does not exist"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.625764 4897 scope.go:117] "RemoveContainer" containerID="e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.626024 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b"} err="failed to get container status \"e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b\": rpc error: code = NotFound desc = could not find container \"e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b\": container with ID starting with e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b not found: ID does not exist"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.626061 4897 scope.go:117] "RemoveContainer" containerID="2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.626381 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf"} err="failed to get container status \"2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf\": rpc error: code = NotFound desc = could not find container \"2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf\": container with ID starting with 2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf not found: ID does not exist"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.626412 4897 scope.go:117] "RemoveContainer" containerID="31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.626805 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb"} err="failed to get container status \"31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb\": rpc error: code = NotFound desc = could not find container \"31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb\": container with ID starting with 31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb not found: ID does not exist"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.626842 4897 scope.go:117] "RemoveContainer" containerID="e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.627170 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420"} err="failed to get container status \"e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420\": rpc error: code = NotFound desc = could not find container \"e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420\": container with ID starting with e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420 not found: ID does not exist"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.627204 4897 scope.go:117] "RemoveContainer" containerID="417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.627531 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71"} err="failed to get container status \"417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71\": rpc error: code = NotFound desc = could not find container \"417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71\": container with ID starting with 417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71 not found: ID does not exist"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.627564 4897 scope.go:117] "RemoveContainer" containerID="5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.627823 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48"} err="failed to get container status \"5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48\": rpc error: code = NotFound desc = could not find container \"5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48\": container with ID starting with 5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48 not found: ID does not exist"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.627855 4897 scope.go:117] "RemoveContainer" containerID="b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.628121 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256"} err="failed to get container status \"b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256\": rpc error: code = NotFound desc = could not find container \"b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256\": container with ID starting with b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256 not found: ID does not exist"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.628154 4897 scope.go:117] "RemoveContainer" containerID="1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.628549 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500"} err="failed to get container status \"1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\": rpc error: code = NotFound desc = could not find container \"1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\": container with ID starting with 1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500 not found: ID does not exist"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.628583 4897 scope.go:117] "RemoveContainer" containerID="05719f06786b6a9df5858de554fd63fbabbe5dd22bc9d5dafc4bacd47f0eb0de"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.628818 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05719f06786b6a9df5858de554fd63fbabbe5dd22bc9d5dafc4bacd47f0eb0de"} err="failed to get container status \"05719f06786b6a9df5858de554fd63fbabbe5dd22bc9d5dafc4bacd47f0eb0de\": rpc error: code = NotFound desc = could not find container \"05719f06786b6a9df5858de554fd63fbabbe5dd22bc9d5dafc4bacd47f0eb0de\": container with ID starting with 05719f06786b6a9df5858de554fd63fbabbe5dd22bc9d5dafc4bacd47f0eb0de not found: ID does not exist"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.628849 4897 scope.go:117] "RemoveContainer" containerID="bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.629133 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa"} err="failed to get container status \"bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa\": rpc error: code = NotFound desc = could not find container \"bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa\": container with ID starting with bdacd5bb283ff1cf90453e5aa35445a1410cc68736a2ec6fd4749907707dbbaa not found: ID does not exist"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.629165 4897 scope.go:117] "RemoveContainer" containerID="e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.629475 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b"} err="failed to get container status \"e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b\": rpc error: code = NotFound desc = could not find container \"e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b\": container with ID starting with e8db7fc4d5e2adc2b67a3d666808ecbb4b6aaf3560aeab9df669d0fdb6135a9b not found: ID does not exist"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.629514 4897 scope.go:117] "RemoveContainer" containerID="2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.629757 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf"} err="failed to get container status \"2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf\": rpc error: code = NotFound desc = could not find container \"2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf\": container with ID starting with 2f4649992ec762c265fe42bec4a57389f0c6d2e122d6ec2186353323771e8caf not found: ID does not exist"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.629788 4897 scope.go:117] "RemoveContainer" containerID="31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.630040 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb"} err="failed to get container status \"31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb\": rpc error: code = NotFound desc = could not find container \"31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb\": container with ID starting with 31551f5de648c9b6cffd3b632d2d8a46270197d2095291a6d0d9458eafa19ebb not found: ID does not exist"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.630076 4897 scope.go:117] "RemoveContainer" containerID="e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.630378 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420"} err="failed to get container status \"e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420\": rpc error: code = NotFound desc = could not find container \"e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420\": container with ID starting with e9be0f4e45fdde5415d0cbb95f43d014119dd4f18ce419e2ca9e2d4ff7f43420 not found: ID does not exist"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.630415 4897 scope.go:117] "RemoveContainer" containerID="417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.630728 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71"} err="failed to get container status \"417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71\": rpc error: code = NotFound desc = could not find container \"417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71\": container with ID starting with 417104a376f3736eecbdf2048505e695409a555b588217fa82dbaff08e7b0d71 not found: ID does not exist"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.630763 4897 scope.go:117] "RemoveContainer" containerID="5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.631041 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48"} err="failed to get container status \"5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48\": rpc error: code = NotFound desc = could not find container \"5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48\": container with ID starting with 5df25c4e01d03882faba90cec7325e10f9372bfeaa738bc56ba5d67eaaac4b48 not found: ID does not exist"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.631073 4897 scope.go:117] "RemoveContainer" containerID="b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.631401 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256"} err="failed to get container status \"b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256\": rpc error: code = NotFound desc = could not find container \"b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256\": container with ID starting with b945d0bbb152b70dd6734be125c1206ae031148f41cd581dbaef2ba603678256 not found: ID does not exist"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.631451 4897 scope.go:117] "RemoveContainer" containerID="1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.631712 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500"} err="failed to get container status \"1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\": rpc error: code = NotFound desc = could not find container \"1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500\": container with ID starting with 1e68e85e2bcb3664fbbc29448631f64ec873ad39c36ca770750c3123e7f15500 not found: ID does not exist"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.631747 4897 scope.go:117] "RemoveContainer" containerID="05719f06786b6a9df5858de554fd63fbabbe5dd22bc9d5dafc4bacd47f0eb0de"
Feb 28 13:29:43 crc kubenswrapper[4897]: I0228 13:29:43.632004 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05719f06786b6a9df5858de554fd63fbabbe5dd22bc9d5dafc4bacd47f0eb0de"} err="failed to get container status \"05719f06786b6a9df5858de554fd63fbabbe5dd22bc9d5dafc4bacd47f0eb0de\": rpc error: code = NotFound desc = could not find container \"05719f06786b6a9df5858de554fd63fbabbe5dd22bc9d5dafc4bacd47f0eb0de\": container with ID starting with 05719f06786b6a9df5858de554fd63fbabbe5dd22bc9d5dafc4bacd47f0eb0de not found: ID does not exist"
Feb 28 13:29:44 crc kubenswrapper[4897]: I0228 13:29:44.327588 4897 generic.go:334] "Generic (PLEG): container finished" podID="929fc231-393a-4a5e-ab08-1777c6879256" containerID="19418fd18b464dd46ebd2afbc2c8f193611eff7db69a841b86f8eb35c3817f2f" exitCode=0
Feb 28 13:29:44 crc kubenswrapper[4897]: I0228 13:29:44.327636 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" event={"ID":"929fc231-393a-4a5e-ab08-1777c6879256","Type":"ContainerDied","Data":"19418fd18b464dd46ebd2afbc2c8f193611eff7db69a841b86f8eb35c3817f2f"}
Feb 28 13:29:44 crc kubenswrapper[4897]: I0228 13:29:44.475208 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e63af1c-1b83-44b6-9872-2dfefa37d433" path="/var/lib/kubelet/pods/0e63af1c-1b83-44b6-9872-2dfefa37d433/volumes"
Feb 28 13:29:45 crc kubenswrapper[4897]: I0228 13:29:45.339424 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" event={"ID":"929fc231-393a-4a5e-ab08-1777c6879256","Type":"ContainerStarted","Data":"5bc75edeb54ff5f45b689f84e758b89fa799297f6a9c56c627b5b501853c7f0c"}
Feb 28 13:29:45 crc kubenswrapper[4897]: I0228 13:29:45.339479 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" event={"ID":"929fc231-393a-4a5e-ab08-1777c6879256","Type":"ContainerStarted","Data":"c30cc2b1dbcc9149731a59aa8cb57fed2962a5f63c77a50f84de63d04e77b0a4"}
Feb 28 13:29:45 crc kubenswrapper[4897]: I0228 13:29:45.339493 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" event={"ID":"929fc231-393a-4a5e-ab08-1777c6879256","Type":"ContainerStarted","Data":"6598ad6e69b6a7ef774bcbd47dd5d4d96284e8e9c36afe83a562f47c73a8f01c"}
Feb 28 13:29:45 crc kubenswrapper[4897]: I0228 13:29:45.339507 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" event={"ID":"929fc231-393a-4a5e-ab08-1777c6879256","Type":"ContainerStarted","Data":"1e5f8cda7d2f5d08b6db728f45725a5f861078307421ee1f316abbea551f0761"}
Feb 28 13:29:45 crc kubenswrapper[4897]: I0228 13:29:45.339518 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" event={"ID":"929fc231-393a-4a5e-ab08-1777c6879256","Type":"ContainerStarted","Data":"647982b188bfc83d79554ca30a8272bff87fd4a47fab914ab6429ed1c576cb8e"}
Feb 28 13:29:45 crc kubenswrapper[4897]: I0228 13:29:45.339530 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" event={"ID":"929fc231-393a-4a5e-ab08-1777c6879256","Type":"ContainerStarted","Data":"11aa5953411ec6f1c5be40eb671375c628201d3de604a2ccbf0be26044c853a6"}
Feb 28 13:29:48 crc kubenswrapper[4897]: I0228 13:29:48.359394 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" event={"ID":"929fc231-393a-4a5e-ab08-1777c6879256","Type":"ContainerStarted","Data":"a219ec016f2a22c1a9d7b405618eb3c558e4bcf16c1cfe3d5e8d6928d3dc4f77"}
Feb 28 13:29:50 crc kubenswrapper[4897]: I0228 13:29:50.375540 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" event={"ID":"929fc231-393a-4a5e-ab08-1777c6879256","Type":"ContainerStarted","Data":"17111762cf5ab8e9f4fd5790ba39edcd9acaf1c10142d55cd5b78d9599fc35dc"}
Feb 28 13:29:50 crc kubenswrapper[4897]: I0228 13:29:50.376941 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h2jck"
Feb 28 13:29:50 crc kubenswrapper[4897]: I0228 13:29:50.377021 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h2jck"
Feb 28 13:29:50 crc kubenswrapper[4897]: I0228 13:29:50.377077 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h2jck"
Feb 28 13:29:50 crc kubenswrapper[4897]: I0228 13:29:50.409189 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h2jck"
Feb 28 13:29:50 crc kubenswrapper[4897]: I0228 13:29:50.411482 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h2jck"
Feb 28 13:29:50 crc kubenswrapper[4897]: I0228 13:29:50.427067 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" podStartSLOduration=8.427029382 podStartE2EDuration="8.427029382s" podCreationTimestamp="2026-02-28 13:29:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:29:50.411800964 +0000 UTC m=+804.654121651" watchObservedRunningTime="2026-02-28 13:29:50.427029382 +0000 UTC m=+804.669350059"
Feb 28 13:29:58 crc kubenswrapper[4897]: I0228 13:29:58.457400 4897 scope.go:117] "RemoveContainer" containerID="3f09bce6157f789ce56ef9ba541b09f9f3f4564b8294d903a1065eaee6b33c56"
Feb 28 13:29:58 crc kubenswrapper[4897]: E0228 13:29:58.458292 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-k4m7f_openshift-multus(cd164967-b99b-47d0-a691-7d8118fa81ce)\"" pod="openshift-multus/multus-k4m7f" podUID="cd164967-b99b-47d0-a691-7d8118fa81ce"
Feb 28 13:30:00 crc kubenswrapper[4897]: I0228 13:30:00.153041 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538090-677fq"]
Feb 28 13:30:00 crc kubenswrapper[4897]: I0228 13:30:00.154494 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538090-677fq"
Feb 28 13:30:00 crc kubenswrapper[4897]: I0228 13:30:00.159193 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 28 13:30:00 crc kubenswrapper[4897]: I0228 13:30:00.159264 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 28 13:30:00 crc kubenswrapper[4897]: I0228 13:30:00.159961 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw"
Feb 28 13:30:00 crc kubenswrapper[4897]: I0228 13:30:00.169130 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538090-677fq"]
Feb 28 13:30:00 crc kubenswrapper[4897]: I0228 13:30:00.234707 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtjbh\" (UniqueName: \"kubernetes.io/projected/292167e9-1fa2-4fda-b4da-f112d69333b9-kube-api-access-vtjbh\") pod \"auto-csr-approver-29538090-677fq\" (UID: \"292167e9-1fa2-4fda-b4da-f112d69333b9\") " pod="openshift-infra/auto-csr-approver-29538090-677fq"
Feb 28 13:30:00 crc kubenswrapper[4897]: I0228 13:30:00.237851 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29538090-d5qhv"]
Feb 28 13:30:00 crc kubenswrapper[4897]: I0228 13:30:00.238492 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29538090-d5qhv"
Feb 28 13:30:00 crc kubenswrapper[4897]: I0228 13:30:00.242336 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 28 13:30:00 crc kubenswrapper[4897]: I0228 13:30:00.242346 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 28 13:30:00 crc kubenswrapper[4897]: I0228 13:30:00.246904 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29538090-d5qhv"]
Feb 28 13:30:00 crc kubenswrapper[4897]: I0228 13:30:00.336514 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4f02d7df-23c0-449f-91a8-29e7e2ee7775-secret-volume\") pod \"collect-profiles-29538090-d5qhv\" (UID: \"4f02d7df-23c0-449f-91a8-29e7e2ee7775\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538090-d5qhv"
Feb 28 13:30:00 crc kubenswrapper[4897]: I0228 13:30:00.336658 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f02d7df-23c0-449f-91a8-29e7e2ee7775-config-volume\") pod \"collect-profiles-29538090-d5qhv\" (UID: \"4f02d7df-23c0-449f-91a8-29e7e2ee7775\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538090-d5qhv"
Feb 28 13:30:00 crc kubenswrapper[4897]: I0228 13:30:00.336688 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbvmv\" (UniqueName: \"kubernetes.io/projected/4f02d7df-23c0-449f-91a8-29e7e2ee7775-kube-api-access-fbvmv\") pod \"collect-profiles-29538090-d5qhv\" (UID: \"4f02d7df-23c0-449f-91a8-29e7e2ee7775\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538090-d5qhv"
Feb 28 13:30:00 crc kubenswrapper[4897]: I0228 13:30:00.337018 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtjbh\" (UniqueName: \"kubernetes.io/projected/292167e9-1fa2-4fda-b4da-f112d69333b9-kube-api-access-vtjbh\") pod \"auto-csr-approver-29538090-677fq\" (UID: \"292167e9-1fa2-4fda-b4da-f112d69333b9\") " pod="openshift-infra/auto-csr-approver-29538090-677fq"
Feb 28 13:30:00 crc kubenswrapper[4897]: I0228 13:30:00.362268 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtjbh\" (UniqueName: \"kubernetes.io/projected/292167e9-1fa2-4fda-b4da-f112d69333b9-kube-api-access-vtjbh\") pod \"auto-csr-approver-29538090-677fq\" (UID: \"292167e9-1fa2-4fda-b4da-f112d69333b9\") " pod="openshift-infra/auto-csr-approver-29538090-677fq"
Feb 28 13:30:00 crc kubenswrapper[4897]: I0228 13:30:00.438814 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4f02d7df-23c0-449f-91a8-29e7e2ee7775-secret-volume\") pod \"collect-profiles-29538090-d5qhv\" (UID: \"4f02d7df-23c0-449f-91a8-29e7e2ee7775\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538090-d5qhv"
Feb 28 13:30:00 crc kubenswrapper[4897]: I0228 13:30:00.438939 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/4f02d7df-23c0-449f-91a8-29e7e2ee7775-config-volume\") pod \"collect-profiles-29538090-d5qhv\" (UID: \"4f02d7df-23c0-449f-91a8-29e7e2ee7775\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538090-d5qhv" Feb 28 13:30:00 crc kubenswrapper[4897]: I0228 13:30:00.439015 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbvmv\" (UniqueName: \"kubernetes.io/projected/4f02d7df-23c0-449f-91a8-29e7e2ee7775-kube-api-access-fbvmv\") pod \"collect-profiles-29538090-d5qhv\" (UID: \"4f02d7df-23c0-449f-91a8-29e7e2ee7775\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538090-d5qhv" Feb 28 13:30:00 crc kubenswrapper[4897]: I0228 13:30:00.440900 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f02d7df-23c0-449f-91a8-29e7e2ee7775-config-volume\") pod \"collect-profiles-29538090-d5qhv\" (UID: \"4f02d7df-23c0-449f-91a8-29e7e2ee7775\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538090-d5qhv" Feb 28 13:30:00 crc kubenswrapper[4897]: I0228 13:30:00.444713 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4f02d7df-23c0-449f-91a8-29e7e2ee7775-secret-volume\") pod \"collect-profiles-29538090-d5qhv\" (UID: \"4f02d7df-23c0-449f-91a8-29e7e2ee7775\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538090-d5qhv" Feb 28 13:30:00 crc kubenswrapper[4897]: I0228 13:30:00.474349 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbvmv\" (UniqueName: \"kubernetes.io/projected/4f02d7df-23c0-449f-91a8-29e7e2ee7775-kube-api-access-fbvmv\") pod \"collect-profiles-29538090-d5qhv\" (UID: \"4f02d7df-23c0-449f-91a8-29e7e2ee7775\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538090-d5qhv" Feb 28 13:30:00 crc kubenswrapper[4897]: I0228 13:30:00.476867 4897 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538090-677fq" Feb 28 13:30:00 crc kubenswrapper[4897]: E0228 13:30:00.508560 4897 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29538090-677fq_openshift-infra_292167e9-1fa2-4fda-b4da-f112d69333b9_0(c30a18adbf186c0eb61a98da99c775f53cced44f648c2b4f825395879d7555d3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 28 13:30:00 crc kubenswrapper[4897]: E0228 13:30:00.508855 4897 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29538090-677fq_openshift-infra_292167e9-1fa2-4fda-b4da-f112d69333b9_0(c30a18adbf186c0eb61a98da99c775f53cced44f648c2b4f825395879d7555d3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-infra/auto-csr-approver-29538090-677fq" Feb 28 13:30:00 crc kubenswrapper[4897]: E0228 13:30:00.509042 4897 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29538090-677fq_openshift-infra_292167e9-1fa2-4fda-b4da-f112d69333b9_0(c30a18adbf186c0eb61a98da99c775f53cced44f648c2b4f825395879d7555d3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-infra/auto-csr-approver-29538090-677fq" Feb 28 13:30:00 crc kubenswrapper[4897]: E0228 13:30:00.509260 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"auto-csr-approver-29538090-677fq_openshift-infra(292167e9-1fa2-4fda-b4da-f112d69333b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"auto-csr-approver-29538090-677fq_openshift-infra(292167e9-1fa2-4fda-b4da-f112d69333b9)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29538090-677fq_openshift-infra_292167e9-1fa2-4fda-b4da-f112d69333b9_0(c30a18adbf186c0eb61a98da99c775f53cced44f648c2b4f825395879d7555d3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-infra/auto-csr-approver-29538090-677fq" podUID="292167e9-1fa2-4fda-b4da-f112d69333b9" Feb 28 13:30:00 crc kubenswrapper[4897]: I0228 13:30:00.554124 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29538090-d5qhv" Feb 28 13:30:00 crc kubenswrapper[4897]: E0228 13:30:00.590876 4897 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29538090-d5qhv_openshift-operator-lifecycle-manager_4f02d7df-23c0-449f-91a8-29e7e2ee7775_0(3cc15c7bd295d667e21344e117b33e3406686efa379f7e474269ca42c4fa17cd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 28 13:30:00 crc kubenswrapper[4897]: E0228 13:30:00.590969 4897 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29538090-d5qhv_openshift-operator-lifecycle-manager_4f02d7df-23c0-449f-91a8-29e7e2ee7775_0(3cc15c7bd295d667e21344e117b33e3406686efa379f7e474269ca42c4fa17cd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29538090-d5qhv" Feb 28 13:30:00 crc kubenswrapper[4897]: E0228 13:30:00.591012 4897 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29538090-d5qhv_openshift-operator-lifecycle-manager_4f02d7df-23c0-449f-91a8-29e7e2ee7775_0(3cc15c7bd295d667e21344e117b33e3406686efa379f7e474269ca42c4fa17cd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29538090-d5qhv" Feb 28 13:30:00 crc kubenswrapper[4897]: E0228 13:30:00.591110 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29538090-d5qhv_openshift-operator-lifecycle-manager(4f02d7df-23c0-449f-91a8-29e7e2ee7775)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29538090-d5qhv_openshift-operator-lifecycle-manager(4f02d7df-23c0-449f-91a8-29e7e2ee7775)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29538090-d5qhv_openshift-operator-lifecycle-manager_4f02d7df-23c0-449f-91a8-29e7e2ee7775_0(3cc15c7bd295d667e21344e117b33e3406686efa379f7e474269ca42c4fa17cd): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29538090-d5qhv" podUID="4f02d7df-23c0-449f-91a8-29e7e2ee7775" Feb 28 13:30:01 crc kubenswrapper[4897]: I0228 13:30:01.450339 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29538090-d5qhv" Feb 28 13:30:01 crc kubenswrapper[4897]: I0228 13:30:01.450387 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538090-677fq" Feb 28 13:30:01 crc kubenswrapper[4897]: I0228 13:30:01.450890 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29538090-d5qhv" Feb 28 13:30:01 crc kubenswrapper[4897]: I0228 13:30:01.451272 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538090-677fq" Feb 28 13:30:01 crc kubenswrapper[4897]: E0228 13:30:01.493370 4897 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29538090-d5qhv_openshift-operator-lifecycle-manager_4f02d7df-23c0-449f-91a8-29e7e2ee7775_0(d4e9ad50ddab5397273cec32bf4478f5597f7fe565d2070e63e07d9f8a9ce948): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 28 13:30:01 crc kubenswrapper[4897]: E0228 13:30:01.493448 4897 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29538090-d5qhv_openshift-operator-lifecycle-manager_4f02d7df-23c0-449f-91a8-29e7e2ee7775_0(d4e9ad50ddab5397273cec32bf4478f5597f7fe565d2070e63e07d9f8a9ce948): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29538090-d5qhv" Feb 28 13:30:01 crc kubenswrapper[4897]: E0228 13:30:01.493483 4897 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29538090-d5qhv_openshift-operator-lifecycle-manager_4f02d7df-23c0-449f-91a8-29e7e2ee7775_0(d4e9ad50ddab5397273cec32bf4478f5597f7fe565d2070e63e07d9f8a9ce948): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29538090-d5qhv" Feb 28 13:30:01 crc kubenswrapper[4897]: E0228 13:30:01.493553 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29538090-d5qhv_openshift-operator-lifecycle-manager(4f02d7df-23c0-449f-91a8-29e7e2ee7775)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29538090-d5qhv_openshift-operator-lifecycle-manager(4f02d7df-23c0-449f-91a8-29e7e2ee7775)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29538090-d5qhv_openshift-operator-lifecycle-manager_4f02d7df-23c0-449f-91a8-29e7e2ee7775_0(d4e9ad50ddab5397273cec32bf4478f5597f7fe565d2070e63e07d9f8a9ce948): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29538090-d5qhv" podUID="4f02d7df-23c0-449f-91a8-29e7e2ee7775" Feb 28 13:30:01 crc kubenswrapper[4897]: E0228 13:30:01.507633 4897 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29538090-677fq_openshift-infra_292167e9-1fa2-4fda-b4da-f112d69333b9_0(6634cdb695055cc2274f8a9e513ffb1c88d58015f47f4eeeff6fab57b61e8419): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 28 13:30:01 crc kubenswrapper[4897]: E0228 13:30:01.507693 4897 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29538090-677fq_openshift-infra_292167e9-1fa2-4fda-b4da-f112d69333b9_0(6634cdb695055cc2274f8a9e513ffb1c88d58015f47f4eeeff6fab57b61e8419): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-infra/auto-csr-approver-29538090-677fq" Feb 28 13:30:01 crc kubenswrapper[4897]: E0228 13:30:01.507718 4897 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29538090-677fq_openshift-infra_292167e9-1fa2-4fda-b4da-f112d69333b9_0(6634cdb695055cc2274f8a9e513ffb1c88d58015f47f4eeeff6fab57b61e8419): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-infra/auto-csr-approver-29538090-677fq" Feb 28 13:30:01 crc kubenswrapper[4897]: E0228 13:30:01.507774 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"auto-csr-approver-29538090-677fq_openshift-infra(292167e9-1fa2-4fda-b4da-f112d69333b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"auto-csr-approver-29538090-677fq_openshift-infra(292167e9-1fa2-4fda-b4da-f112d69333b9)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29538090-677fq_openshift-infra_292167e9-1fa2-4fda-b4da-f112d69333b9_0(6634cdb695055cc2274f8a9e513ffb1c88d58015f47f4eeeff6fab57b61e8419): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-infra/auto-csr-approver-29538090-677fq" podUID="292167e9-1fa2-4fda-b4da-f112d69333b9" Feb 28 13:30:03 crc kubenswrapper[4897]: I0228 13:30:03.371545 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 13:30:03 crc kubenswrapper[4897]: I0228 13:30:03.371939 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 13:30:10 crc kubenswrapper[4897]: I0228 13:30:10.824566 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf"] Feb 28 13:30:10 crc kubenswrapper[4897]: I0228 13:30:10.826809 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf" Feb 28 13:30:10 crc kubenswrapper[4897]: I0228 13:30:10.829841 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 28 13:30:10 crc kubenswrapper[4897]: I0228 13:30:10.839955 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf"] Feb 28 13:30:10 crc kubenswrapper[4897]: I0228 13:30:10.891017 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tv2p\" (UniqueName: \"kubernetes.io/projected/10011d40-3da8-4a8f-b650-17d5bcbd7f8a-kube-api-access-4tv2p\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf\" (UID: \"10011d40-3da8-4a8f-b650-17d5bcbd7f8a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf" Feb 28 13:30:10 crc kubenswrapper[4897]: I0228 13:30:10.891079 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/10011d40-3da8-4a8f-b650-17d5bcbd7f8a-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf\" (UID: \"10011d40-3da8-4a8f-b650-17d5bcbd7f8a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf" Feb 28 13:30:10 crc kubenswrapper[4897]: I0228 13:30:10.891246 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/10011d40-3da8-4a8f-b650-17d5bcbd7f8a-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf\" (UID: \"10011d40-3da8-4a8f-b650-17d5bcbd7f8a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf" Feb 28 13:30:10 crc kubenswrapper[4897]: 
I0228 13:30:10.992935 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tv2p\" (UniqueName: \"kubernetes.io/projected/10011d40-3da8-4a8f-b650-17d5bcbd7f8a-kube-api-access-4tv2p\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf\" (UID: \"10011d40-3da8-4a8f-b650-17d5bcbd7f8a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf" Feb 28 13:30:10 crc kubenswrapper[4897]: I0228 13:30:10.993019 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/10011d40-3da8-4a8f-b650-17d5bcbd7f8a-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf\" (UID: \"10011d40-3da8-4a8f-b650-17d5bcbd7f8a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf" Feb 28 13:30:10 crc kubenswrapper[4897]: I0228 13:30:10.993102 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/10011d40-3da8-4a8f-b650-17d5bcbd7f8a-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf\" (UID: \"10011d40-3da8-4a8f-b650-17d5bcbd7f8a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf" Feb 28 13:30:10 crc kubenswrapper[4897]: I0228 13:30:10.993942 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/10011d40-3da8-4a8f-b650-17d5bcbd7f8a-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf\" (UID: \"10011d40-3da8-4a8f-b650-17d5bcbd7f8a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf" Feb 28 13:30:10 crc kubenswrapper[4897]: I0228 13:30:10.994046 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/10011d40-3da8-4a8f-b650-17d5bcbd7f8a-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf\" (UID: \"10011d40-3da8-4a8f-b650-17d5bcbd7f8a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf" Feb 28 13:30:11 crc kubenswrapper[4897]: I0228 13:30:11.013694 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tv2p\" (UniqueName: \"kubernetes.io/projected/10011d40-3da8-4a8f-b650-17d5bcbd7f8a-kube-api-access-4tv2p\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf\" (UID: \"10011d40-3da8-4a8f-b650-17d5bcbd7f8a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf" Feb 28 13:30:11 crc kubenswrapper[4897]: I0228 13:30:11.152282 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf" Feb 28 13:30:11 crc kubenswrapper[4897]: E0228 13:30:11.190520 4897 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf_openshift-marketplace_10011d40-3da8-4a8f-b650-17d5bcbd7f8a_0(11062bdc97d325ae05585d3d4af6dd52fe60a27fe33966f016b7c0d300dc3561): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 28 13:30:11 crc kubenswrapper[4897]: E0228 13:30:11.190603 4897 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf_openshift-marketplace_10011d40-3da8-4a8f-b650-17d5bcbd7f8a_0(11062bdc97d325ae05585d3d4af6dd52fe60a27fe33966f016b7c0d300dc3561): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf" Feb 28 13:30:11 crc kubenswrapper[4897]: E0228 13:30:11.190647 4897 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf_openshift-marketplace_10011d40-3da8-4a8f-b650-17d5bcbd7f8a_0(11062bdc97d325ae05585d3d4af6dd52fe60a27fe33966f016b7c0d300dc3561): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf" Feb 28 13:30:11 crc kubenswrapper[4897]: E0228 13:30:11.190761 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf_openshift-marketplace(10011d40-3da8-4a8f-b650-17d5bcbd7f8a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf_openshift-marketplace(10011d40-3da8-4a8f-b650-17d5bcbd7f8a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf_openshift-marketplace_10011d40-3da8-4a8f-b650-17d5bcbd7f8a_0(11062bdc97d325ae05585d3d4af6dd52fe60a27fe33966f016b7c0d300dc3561): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf" podUID="10011d40-3da8-4a8f-b650-17d5bcbd7f8a" Feb 28 13:30:11 crc kubenswrapper[4897]: I0228 13:30:11.456597 4897 scope.go:117] "RemoveContainer" containerID="3f09bce6157f789ce56ef9ba541b09f9f3f4564b8294d903a1065eaee6b33c56" Feb 28 13:30:11 crc kubenswrapper[4897]: I0228 13:30:11.519949 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf" Feb 28 13:30:11 crc kubenswrapper[4897]: I0228 13:30:11.520453 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf" Feb 28 13:30:11 crc kubenswrapper[4897]: E0228 13:30:11.544605 4897 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf_openshift-marketplace_10011d40-3da8-4a8f-b650-17d5bcbd7f8a_0(b460a7dfe2ed455fc3b932d0674cbab8280ed956963be75b30d71c8cb054f31f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 28 13:30:11 crc kubenswrapper[4897]: E0228 13:30:11.544692 4897 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf_openshift-marketplace_10011d40-3da8-4a8f-b650-17d5bcbd7f8a_0(b460a7dfe2ed455fc3b932d0674cbab8280ed956963be75b30d71c8cb054f31f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf" Feb 28 13:30:11 crc kubenswrapper[4897]: E0228 13:30:11.544738 4897 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf_openshift-marketplace_10011d40-3da8-4a8f-b650-17d5bcbd7f8a_0(b460a7dfe2ed455fc3b932d0674cbab8280ed956963be75b30d71c8cb054f31f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf" Feb 28 13:30:11 crc kubenswrapper[4897]: E0228 13:30:11.544811 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf_openshift-marketplace(10011d40-3da8-4a8f-b650-17d5bcbd7f8a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf_openshift-marketplace(10011d40-3da8-4a8f-b650-17d5bcbd7f8a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf_openshift-marketplace_10011d40-3da8-4a8f-b650-17d5bcbd7f8a_0(b460a7dfe2ed455fc3b932d0674cbab8280ed956963be75b30d71c8cb054f31f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf" podUID="10011d40-3da8-4a8f-b650-17d5bcbd7f8a" Feb 28 13:30:12 crc kubenswrapper[4897]: I0228 13:30:12.529562 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-k4m7f_cd164967-b99b-47d0-a691-7d8118fa81ce/kube-multus/2.log" Feb 28 13:30:12 crc kubenswrapper[4897]: I0228 13:30:12.532257 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-k4m7f_cd164967-b99b-47d0-a691-7d8118fa81ce/kube-multus/1.log" Feb 28 13:30:12 crc kubenswrapper[4897]: I0228 13:30:12.532366 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-k4m7f" event={"ID":"cd164967-b99b-47d0-a691-7d8118fa81ce","Type":"ContainerStarted","Data":"55e53e37efddcb71f118761879549376cc0ba4f7c31c60f20332735277b57649"} Feb 28 13:30:13 crc kubenswrapper[4897]: I0228 13:30:13.176062 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h2jck" Feb 28 
13:30:14 crc kubenswrapper[4897]: I0228 13:30:14.456007 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29538090-d5qhv" Feb 28 13:30:14 crc kubenswrapper[4897]: I0228 13:30:14.457205 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29538090-d5qhv" Feb 28 13:30:14 crc kubenswrapper[4897]: I0228 13:30:14.657556 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29538090-d5qhv"] Feb 28 13:30:14 crc kubenswrapper[4897]: W0228 13:30:14.660354 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f02d7df_23c0_449f_91a8_29e7e2ee7775.slice/crio-2f2ec1f2b6643d45a46885d7917b96ca04f3025855437de886ee123414aa05cd WatchSource:0}: Error finding container 2f2ec1f2b6643d45a46885d7917b96ca04f3025855437de886ee123414aa05cd: Status 404 returned error can't find the container with id 2f2ec1f2b6643d45a46885d7917b96ca04f3025855437de886ee123414aa05cd Feb 28 13:30:15 crc kubenswrapper[4897]: I0228 13:30:15.554549 4897 generic.go:334] "Generic (PLEG): container finished" podID="4f02d7df-23c0-449f-91a8-29e7e2ee7775" containerID="7a7c3911f2c74a15fcc8d8ab2a06e00ae0633ffeb5d89b6a7e29def3057aac4c" exitCode=0 Feb 28 13:30:15 crc kubenswrapper[4897]: I0228 13:30:15.554641 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29538090-d5qhv" event={"ID":"4f02d7df-23c0-449f-91a8-29e7e2ee7775","Type":"ContainerDied","Data":"7a7c3911f2c74a15fcc8d8ab2a06e00ae0633ffeb5d89b6a7e29def3057aac4c"} Feb 28 13:30:15 crc kubenswrapper[4897]: I0228 13:30:15.555101 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29538090-d5qhv" 
event={"ID":"4f02d7df-23c0-449f-91a8-29e7e2ee7775","Type":"ContainerStarted","Data":"2f2ec1f2b6643d45a46885d7917b96ca04f3025855437de886ee123414aa05cd"} Feb 28 13:30:16 crc kubenswrapper[4897]: I0228 13:30:16.456039 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538090-677fq" Feb 28 13:30:16 crc kubenswrapper[4897]: I0228 13:30:16.463108 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538090-677fq" Feb 28 13:30:16 crc kubenswrapper[4897]: I0228 13:30:16.855798 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29538090-d5qhv" Feb 28 13:30:16 crc kubenswrapper[4897]: I0228 13:30:16.881179 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538090-677fq"] Feb 28 13:30:16 crc kubenswrapper[4897]: I0228 13:30:16.985904 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f02d7df-23c0-449f-91a8-29e7e2ee7775-config-volume\") pod \"4f02d7df-23c0-449f-91a8-29e7e2ee7775\" (UID: \"4f02d7df-23c0-449f-91a8-29e7e2ee7775\") " Feb 28 13:30:16 crc kubenswrapper[4897]: I0228 13:30:16.986408 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4f02d7df-23c0-449f-91a8-29e7e2ee7775-secret-volume\") pod \"4f02d7df-23c0-449f-91a8-29e7e2ee7775\" (UID: \"4f02d7df-23c0-449f-91a8-29e7e2ee7775\") " Feb 28 13:30:16 crc kubenswrapper[4897]: I0228 13:30:16.986474 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbvmv\" (UniqueName: \"kubernetes.io/projected/4f02d7df-23c0-449f-91a8-29e7e2ee7775-kube-api-access-fbvmv\") pod \"4f02d7df-23c0-449f-91a8-29e7e2ee7775\" (UID: 
\"4f02d7df-23c0-449f-91a8-29e7e2ee7775\") " Feb 28 13:30:16 crc kubenswrapper[4897]: I0228 13:30:16.986741 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f02d7df-23c0-449f-91a8-29e7e2ee7775-config-volume" (OuterVolumeSpecName: "config-volume") pod "4f02d7df-23c0-449f-91a8-29e7e2ee7775" (UID: "4f02d7df-23c0-449f-91a8-29e7e2ee7775"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:30:16 crc kubenswrapper[4897]: I0228 13:30:16.986906 4897 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f02d7df-23c0-449f-91a8-29e7e2ee7775-config-volume\") on node \"crc\" DevicePath \"\"" Feb 28 13:30:16 crc kubenswrapper[4897]: I0228 13:30:16.994066 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f02d7df-23c0-449f-91a8-29e7e2ee7775-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4f02d7df-23c0-449f-91a8-29e7e2ee7775" (UID: "4f02d7df-23c0-449f-91a8-29e7e2ee7775"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:30:16 crc kubenswrapper[4897]: I0228 13:30:16.994203 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f02d7df-23c0-449f-91a8-29e7e2ee7775-kube-api-access-fbvmv" (OuterVolumeSpecName: "kube-api-access-fbvmv") pod "4f02d7df-23c0-449f-91a8-29e7e2ee7775" (UID: "4f02d7df-23c0-449f-91a8-29e7e2ee7775"). InnerVolumeSpecName "kube-api-access-fbvmv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:30:17 crc kubenswrapper[4897]: I0228 13:30:17.088293 4897 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4f02d7df-23c0-449f-91a8-29e7e2ee7775-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 28 13:30:17 crc kubenswrapper[4897]: I0228 13:30:17.088356 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbvmv\" (UniqueName: \"kubernetes.io/projected/4f02d7df-23c0-449f-91a8-29e7e2ee7775-kube-api-access-fbvmv\") on node \"crc\" DevicePath \"\"" Feb 28 13:30:17 crc kubenswrapper[4897]: I0228 13:30:17.572435 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29538090-d5qhv" Feb 28 13:30:17 crc kubenswrapper[4897]: I0228 13:30:17.572433 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29538090-d5qhv" event={"ID":"4f02d7df-23c0-449f-91a8-29e7e2ee7775","Type":"ContainerDied","Data":"2f2ec1f2b6643d45a46885d7917b96ca04f3025855437de886ee123414aa05cd"} Feb 28 13:30:17 crc kubenswrapper[4897]: I0228 13:30:17.572805 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f2ec1f2b6643d45a46885d7917b96ca04f3025855437de886ee123414aa05cd" Feb 28 13:30:17 crc kubenswrapper[4897]: I0228 13:30:17.574717 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538090-677fq" event={"ID":"292167e9-1fa2-4fda-b4da-f112d69333b9","Type":"ContainerStarted","Data":"6326c714aa0becce6cab0c4b4e7d26874e3251c2af46c1e91c15470dabda772e"} Feb 28 13:30:18 crc kubenswrapper[4897]: I0228 13:30:18.584006 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538090-677fq" 
event={"ID":"292167e9-1fa2-4fda-b4da-f112d69333b9","Type":"ContainerStarted","Data":"1483000b50b2264fb8ba8e9ee018cecbbb96c56fd84b4f0bda67a21b199841f1"} Feb 28 13:30:19 crc kubenswrapper[4897]: I0228 13:30:19.593905 4897 generic.go:334] "Generic (PLEG): container finished" podID="292167e9-1fa2-4fda-b4da-f112d69333b9" containerID="1483000b50b2264fb8ba8e9ee018cecbbb96c56fd84b4f0bda67a21b199841f1" exitCode=0 Feb 28 13:30:19 crc kubenswrapper[4897]: I0228 13:30:19.593962 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538090-677fq" event={"ID":"292167e9-1fa2-4fda-b4da-f112d69333b9","Type":"ContainerDied","Data":"1483000b50b2264fb8ba8e9ee018cecbbb96c56fd84b4f0bda67a21b199841f1"} Feb 28 13:30:20 crc kubenswrapper[4897]: I0228 13:30:20.880323 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538090-677fq" Feb 28 13:30:20 crc kubenswrapper[4897]: I0228 13:30:20.937903 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtjbh\" (UniqueName: \"kubernetes.io/projected/292167e9-1fa2-4fda-b4da-f112d69333b9-kube-api-access-vtjbh\") pod \"292167e9-1fa2-4fda-b4da-f112d69333b9\" (UID: \"292167e9-1fa2-4fda-b4da-f112d69333b9\") " Feb 28 13:30:20 crc kubenswrapper[4897]: I0228 13:30:20.962504 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/292167e9-1fa2-4fda-b4da-f112d69333b9-kube-api-access-vtjbh" (OuterVolumeSpecName: "kube-api-access-vtjbh") pod "292167e9-1fa2-4fda-b4da-f112d69333b9" (UID: "292167e9-1fa2-4fda-b4da-f112d69333b9"). InnerVolumeSpecName "kube-api-access-vtjbh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:30:21 crc kubenswrapper[4897]: I0228 13:30:21.039546 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtjbh\" (UniqueName: \"kubernetes.io/projected/292167e9-1fa2-4fda-b4da-f112d69333b9-kube-api-access-vtjbh\") on node \"crc\" DevicePath \"\"" Feb 28 13:30:21 crc kubenswrapper[4897]: I0228 13:30:21.609144 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538090-677fq" event={"ID":"292167e9-1fa2-4fda-b4da-f112d69333b9","Type":"ContainerDied","Data":"6326c714aa0becce6cab0c4b4e7d26874e3251c2af46c1e91c15470dabda772e"} Feb 28 13:30:21 crc kubenswrapper[4897]: I0228 13:30:21.609575 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6326c714aa0becce6cab0c4b4e7d26874e3251c2af46c1e91c15470dabda772e" Feb 28 13:30:21 crc kubenswrapper[4897]: I0228 13:30:21.609210 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538090-677fq" Feb 28 13:30:21 crc kubenswrapper[4897]: I0228 13:30:21.948881 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538084-bdglw"] Feb 28 13:30:21 crc kubenswrapper[4897]: I0228 13:30:21.952803 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538084-bdglw"] Feb 28 13:30:22 crc kubenswrapper[4897]: I0228 13:30:22.468603 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96fa1520-75f6-47bb-bb62-92efc314da9c" path="/var/lib/kubelet/pods/96fa1520-75f6-47bb-bb62-92efc314da9c/volumes" Feb 28 13:30:24 crc kubenswrapper[4897]: I0228 13:30:24.455833 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf" Feb 28 13:30:24 crc kubenswrapper[4897]: I0228 13:30:24.456770 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf" Feb 28 13:30:24 crc kubenswrapper[4897]: I0228 13:30:24.928789 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf"] Feb 28 13:30:25 crc kubenswrapper[4897]: I0228 13:30:25.640839 4897 generic.go:334] "Generic (PLEG): container finished" podID="10011d40-3da8-4a8f-b650-17d5bcbd7f8a" containerID="1787a7298ffdc2a5638d2cbfc76a66f2b6d1c099d3ebc1ad7ecd1a3a168591c1" exitCode=0 Feb 28 13:30:25 crc kubenswrapper[4897]: I0228 13:30:25.640947 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf" event={"ID":"10011d40-3da8-4a8f-b650-17d5bcbd7f8a","Type":"ContainerDied","Data":"1787a7298ffdc2a5638d2cbfc76a66f2b6d1c099d3ebc1ad7ecd1a3a168591c1"} Feb 28 13:30:25 crc kubenswrapper[4897]: I0228 13:30:25.641301 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf" event={"ID":"10011d40-3da8-4a8f-b650-17d5bcbd7f8a","Type":"ContainerStarted","Data":"e306d7ae307fe1c3c3c4a425c51eb9c30c836d7081785c2fabf83dae6a3cb94f"} Feb 28 13:30:27 crc kubenswrapper[4897]: I0228 13:30:27.242156 4897 scope.go:117] "RemoveContainer" containerID="f546b1f3c469568c2025454375130e8ce54e4baee9391f9123cca8b844a5aa9f" Feb 28 13:30:27 crc kubenswrapper[4897]: I0228 13:30:27.287965 4897 scope.go:117] "RemoveContainer" containerID="56214f971268dd96636e53f8cc401bfde201673331066f07d1235c5bd4fef3e5" Feb 28 13:30:27 crc kubenswrapper[4897]: I0228 13:30:27.670876 4897 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-multus_multus-k4m7f_cd164967-b99b-47d0-a691-7d8118fa81ce/kube-multus/2.log" Feb 28 13:30:27 crc kubenswrapper[4897]: I0228 13:30:27.674729 4897 generic.go:334] "Generic (PLEG): container finished" podID="10011d40-3da8-4a8f-b650-17d5bcbd7f8a" containerID="990e52081d290ecc2c414dd7998875da22d8237e5b19da8e6d1d60a9a6cf56e4" exitCode=0 Feb 28 13:30:27 crc kubenswrapper[4897]: I0228 13:30:27.674816 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf" event={"ID":"10011d40-3da8-4a8f-b650-17d5bcbd7f8a","Type":"ContainerDied","Data":"990e52081d290ecc2c414dd7998875da22d8237e5b19da8e6d1d60a9a6cf56e4"} Feb 28 13:30:28 crc kubenswrapper[4897]: I0228 13:30:28.277624 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-85zxz"] Feb 28 13:30:28 crc kubenswrapper[4897]: E0228 13:30:28.278177 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f02d7df-23c0-449f-91a8-29e7e2ee7775" containerName="collect-profiles" Feb 28 13:30:28 crc kubenswrapper[4897]: I0228 13:30:28.278194 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f02d7df-23c0-449f-91a8-29e7e2ee7775" containerName="collect-profiles" Feb 28 13:30:28 crc kubenswrapper[4897]: E0228 13:30:28.278219 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="292167e9-1fa2-4fda-b4da-f112d69333b9" containerName="oc" Feb 28 13:30:28 crc kubenswrapper[4897]: I0228 13:30:28.278226 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="292167e9-1fa2-4fda-b4da-f112d69333b9" containerName="oc" Feb 28 13:30:28 crc kubenswrapper[4897]: I0228 13:30:28.278369 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="292167e9-1fa2-4fda-b4da-f112d69333b9" containerName="oc" Feb 28 13:30:28 crc kubenswrapper[4897]: I0228 13:30:28.278383 4897 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4f02d7df-23c0-449f-91a8-29e7e2ee7775" containerName="collect-profiles" Feb 28 13:30:28 crc kubenswrapper[4897]: I0228 13:30:28.279238 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-85zxz" Feb 28 13:30:28 crc kubenswrapper[4897]: I0228 13:30:28.287718 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-85zxz"] Feb 28 13:30:28 crc kubenswrapper[4897]: I0228 13:30:28.340005 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae0da75b-2acb-4d09-8668-e25d86bfa55e-utilities\") pod \"redhat-operators-85zxz\" (UID: \"ae0da75b-2acb-4d09-8668-e25d86bfa55e\") " pod="openshift-marketplace/redhat-operators-85zxz" Feb 28 13:30:28 crc kubenswrapper[4897]: I0228 13:30:28.340186 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mjdb\" (UniqueName: \"kubernetes.io/projected/ae0da75b-2acb-4d09-8668-e25d86bfa55e-kube-api-access-8mjdb\") pod \"redhat-operators-85zxz\" (UID: \"ae0da75b-2acb-4d09-8668-e25d86bfa55e\") " pod="openshift-marketplace/redhat-operators-85zxz" Feb 28 13:30:28 crc kubenswrapper[4897]: I0228 13:30:28.340249 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae0da75b-2acb-4d09-8668-e25d86bfa55e-catalog-content\") pod \"redhat-operators-85zxz\" (UID: \"ae0da75b-2acb-4d09-8668-e25d86bfa55e\") " pod="openshift-marketplace/redhat-operators-85zxz" Feb 28 13:30:28 crc kubenswrapper[4897]: I0228 13:30:28.441436 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae0da75b-2acb-4d09-8668-e25d86bfa55e-utilities\") pod \"redhat-operators-85zxz\" (UID: \"ae0da75b-2acb-4d09-8668-e25d86bfa55e\") " 
pod="openshift-marketplace/redhat-operators-85zxz" Feb 28 13:30:28 crc kubenswrapper[4897]: I0228 13:30:28.441588 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mjdb\" (UniqueName: \"kubernetes.io/projected/ae0da75b-2acb-4d09-8668-e25d86bfa55e-kube-api-access-8mjdb\") pod \"redhat-operators-85zxz\" (UID: \"ae0da75b-2acb-4d09-8668-e25d86bfa55e\") " pod="openshift-marketplace/redhat-operators-85zxz" Feb 28 13:30:28 crc kubenswrapper[4897]: I0228 13:30:28.441640 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae0da75b-2acb-4d09-8668-e25d86bfa55e-catalog-content\") pod \"redhat-operators-85zxz\" (UID: \"ae0da75b-2acb-4d09-8668-e25d86bfa55e\") " pod="openshift-marketplace/redhat-operators-85zxz" Feb 28 13:30:28 crc kubenswrapper[4897]: I0228 13:30:28.442341 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae0da75b-2acb-4d09-8668-e25d86bfa55e-utilities\") pod \"redhat-operators-85zxz\" (UID: \"ae0da75b-2acb-4d09-8668-e25d86bfa55e\") " pod="openshift-marketplace/redhat-operators-85zxz" Feb 28 13:30:28 crc kubenswrapper[4897]: I0228 13:30:28.442673 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae0da75b-2acb-4d09-8668-e25d86bfa55e-catalog-content\") pod \"redhat-operators-85zxz\" (UID: \"ae0da75b-2acb-4d09-8668-e25d86bfa55e\") " pod="openshift-marketplace/redhat-operators-85zxz" Feb 28 13:30:28 crc kubenswrapper[4897]: I0228 13:30:28.473420 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mjdb\" (UniqueName: \"kubernetes.io/projected/ae0da75b-2acb-4d09-8668-e25d86bfa55e-kube-api-access-8mjdb\") pod \"redhat-operators-85zxz\" (UID: \"ae0da75b-2acb-4d09-8668-e25d86bfa55e\") " pod="openshift-marketplace/redhat-operators-85zxz" Feb 
28 13:30:28 crc kubenswrapper[4897]: I0228 13:30:28.630329 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-85zxz" Feb 28 13:30:28 crc kubenswrapper[4897]: I0228 13:30:28.682188 4897 generic.go:334] "Generic (PLEG): container finished" podID="10011d40-3da8-4a8f-b650-17d5bcbd7f8a" containerID="65543beb779ec2ce202eda3ca29247bdc30ba2b4f83babcef64401c24bd823e5" exitCode=0 Feb 28 13:30:28 crc kubenswrapper[4897]: I0228 13:30:28.682230 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf" event={"ID":"10011d40-3da8-4a8f-b650-17d5bcbd7f8a","Type":"ContainerDied","Data":"65543beb779ec2ce202eda3ca29247bdc30ba2b4f83babcef64401c24bd823e5"} Feb 28 13:30:28 crc kubenswrapper[4897]: I0228 13:30:28.829909 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-85zxz"] Feb 28 13:30:28 crc kubenswrapper[4897]: W0228 13:30:28.841573 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae0da75b_2acb_4d09_8668_e25d86bfa55e.slice/crio-689c6d78e010e6c9bff0c5cbeebb1e3bbfa94fa4010572eb6129ac37f510f2d2 WatchSource:0}: Error finding container 689c6d78e010e6c9bff0c5cbeebb1e3bbfa94fa4010572eb6129ac37f510f2d2: Status 404 returned error can't find the container with id 689c6d78e010e6c9bff0c5cbeebb1e3bbfa94fa4010572eb6129ac37f510f2d2 Feb 28 13:30:29 crc kubenswrapper[4897]: I0228 13:30:29.160550 4897 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 28 13:30:29 crc kubenswrapper[4897]: I0228 13:30:29.690296 4897 generic.go:334] "Generic (PLEG): container finished" podID="ae0da75b-2acb-4d09-8668-e25d86bfa55e" containerID="17a088d342d7ff7dc2e6e3177ebda496d0be357bbebbb1b443e868aa9742b695" exitCode=0 Feb 28 13:30:29 crc kubenswrapper[4897]: 
I0228 13:30:29.690456 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-85zxz" event={"ID":"ae0da75b-2acb-4d09-8668-e25d86bfa55e","Type":"ContainerDied","Data":"17a088d342d7ff7dc2e6e3177ebda496d0be357bbebbb1b443e868aa9742b695"} Feb 28 13:30:29 crc kubenswrapper[4897]: I0228 13:30:29.690540 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-85zxz" event={"ID":"ae0da75b-2acb-4d09-8668-e25d86bfa55e","Type":"ContainerStarted","Data":"689c6d78e010e6c9bff0c5cbeebb1e3bbfa94fa4010572eb6129ac37f510f2d2"} Feb 28 13:30:29 crc kubenswrapper[4897]: I0228 13:30:29.917796 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf" Feb 28 13:30:29 crc kubenswrapper[4897]: I0228 13:30:29.963157 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/10011d40-3da8-4a8f-b650-17d5bcbd7f8a-util\") pod \"10011d40-3da8-4a8f-b650-17d5bcbd7f8a\" (UID: \"10011d40-3da8-4a8f-b650-17d5bcbd7f8a\") " Feb 28 13:30:29 crc kubenswrapper[4897]: I0228 13:30:29.963340 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tv2p\" (UniqueName: \"kubernetes.io/projected/10011d40-3da8-4a8f-b650-17d5bcbd7f8a-kube-api-access-4tv2p\") pod \"10011d40-3da8-4a8f-b650-17d5bcbd7f8a\" (UID: \"10011d40-3da8-4a8f-b650-17d5bcbd7f8a\") " Feb 28 13:30:29 crc kubenswrapper[4897]: I0228 13:30:29.963365 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/10011d40-3da8-4a8f-b650-17d5bcbd7f8a-bundle\") pod \"10011d40-3da8-4a8f-b650-17d5bcbd7f8a\" (UID: \"10011d40-3da8-4a8f-b650-17d5bcbd7f8a\") " Feb 28 13:30:29 crc kubenswrapper[4897]: I0228 13:30:29.965347 4897 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/10011d40-3da8-4a8f-b650-17d5bcbd7f8a-bundle" (OuterVolumeSpecName: "bundle") pod "10011d40-3da8-4a8f-b650-17d5bcbd7f8a" (UID: "10011d40-3da8-4a8f-b650-17d5bcbd7f8a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:30:29 crc kubenswrapper[4897]: I0228 13:30:29.969388 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10011d40-3da8-4a8f-b650-17d5bcbd7f8a-kube-api-access-4tv2p" (OuterVolumeSpecName: "kube-api-access-4tv2p") pod "10011d40-3da8-4a8f-b650-17d5bcbd7f8a" (UID: "10011d40-3da8-4a8f-b650-17d5bcbd7f8a"). InnerVolumeSpecName "kube-api-access-4tv2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:30:29 crc kubenswrapper[4897]: I0228 13:30:29.980917 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10011d40-3da8-4a8f-b650-17d5bcbd7f8a-util" (OuterVolumeSpecName: "util") pod "10011d40-3da8-4a8f-b650-17d5bcbd7f8a" (UID: "10011d40-3da8-4a8f-b650-17d5bcbd7f8a"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:30:30 crc kubenswrapper[4897]: I0228 13:30:30.065192 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tv2p\" (UniqueName: \"kubernetes.io/projected/10011d40-3da8-4a8f-b650-17d5bcbd7f8a-kube-api-access-4tv2p\") on node \"crc\" DevicePath \"\"" Feb 28 13:30:30 crc kubenswrapper[4897]: I0228 13:30:30.065250 4897 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/10011d40-3da8-4a8f-b650-17d5bcbd7f8a-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:30:30 crc kubenswrapper[4897]: I0228 13:30:30.065268 4897 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/10011d40-3da8-4a8f-b650-17d5bcbd7f8a-util\") on node \"crc\" DevicePath \"\"" Feb 28 13:30:30 crc kubenswrapper[4897]: I0228 13:30:30.701190 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf" event={"ID":"10011d40-3da8-4a8f-b650-17d5bcbd7f8a","Type":"ContainerDied","Data":"e306d7ae307fe1c3c3c4a425c51eb9c30c836d7081785c2fabf83dae6a3cb94f"} Feb 28 13:30:30 crc kubenswrapper[4897]: I0228 13:30:30.701642 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e306d7ae307fe1c3c3c4a425c51eb9c30c836d7081785c2fabf83dae6a3cb94f" Feb 28 13:30:30 crc kubenswrapper[4897]: I0228 13:30:30.701518 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf" Feb 28 13:30:30 crc kubenswrapper[4897]: I0228 13:30:30.709458 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-85zxz" event={"ID":"ae0da75b-2acb-4d09-8668-e25d86bfa55e","Type":"ContainerStarted","Data":"4cf0e4b351f0465d88f5cba2e2b60c7d49048641546d7a7dee787854bf141838"} Feb 28 13:30:31 crc kubenswrapper[4897]: I0228 13:30:31.717457 4897 generic.go:334] "Generic (PLEG): container finished" podID="ae0da75b-2acb-4d09-8668-e25d86bfa55e" containerID="4cf0e4b351f0465d88f5cba2e2b60c7d49048641546d7a7dee787854bf141838" exitCode=0 Feb 28 13:30:31 crc kubenswrapper[4897]: I0228 13:30:31.717498 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-85zxz" event={"ID":"ae0da75b-2acb-4d09-8668-e25d86bfa55e","Type":"ContainerDied","Data":"4cf0e4b351f0465d88f5cba2e2b60c7d49048641546d7a7dee787854bf141838"} Feb 28 13:30:32 crc kubenswrapper[4897]: I0228 13:30:32.726346 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-85zxz" event={"ID":"ae0da75b-2acb-4d09-8668-e25d86bfa55e","Type":"ContainerStarted","Data":"32621c7dbb39c8f9e7436e83234049ba6fad1057ace25a2e860832537e955a6e"} Feb 28 13:30:32 crc kubenswrapper[4897]: I0228 13:30:32.750138 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-85zxz" podStartSLOduration=2.2089675 podStartE2EDuration="4.750116053s" podCreationTimestamp="2026-02-28 13:30:28 +0000 UTC" firstStartedPulling="2026-02-28 13:30:29.691840657 +0000 UTC m=+843.934161314" lastFinishedPulling="2026-02-28 13:30:32.2329892 +0000 UTC m=+846.475309867" observedRunningTime="2026-02-28 13:30:32.745943278 +0000 UTC m=+846.988263975" watchObservedRunningTime="2026-02-28 13:30:32.750116053 +0000 UTC m=+846.992436720" Feb 28 13:30:33 crc kubenswrapper[4897]: 
I0228 13:30:33.370811 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 13:30:33 crc kubenswrapper[4897]: I0228 13:30:33.370866 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 13:30:33 crc kubenswrapper[4897]: I0228 13:30:33.370905 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brq22" Feb 28 13:30:33 crc kubenswrapper[4897]: I0228 13:30:33.371379 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cfa26661db45aebf66711b46c418e18106a8f8b0c44a8fe4fe4cb2094fde5cf6"} pod="openshift-machine-config-operator/machine-config-daemon-brq22" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 13:30:33 crc kubenswrapper[4897]: I0228 13:30:33.371425 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" containerID="cri-o://cfa26661db45aebf66711b46c418e18106a8f8b0c44a8fe4fe4cb2094fde5cf6" gracePeriod=600 Feb 28 13:30:33 crc kubenswrapper[4897]: I0228 13:30:33.734236 4897 generic.go:334] "Generic (PLEG): container finished" podID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerID="cfa26661db45aebf66711b46c418e18106a8f8b0c44a8fe4fe4cb2094fde5cf6" exitCode=0 Feb 
28 13:30:33 crc kubenswrapper[4897]: I0228 13:30:33.734297 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerDied","Data":"cfa26661db45aebf66711b46c418e18106a8f8b0c44a8fe4fe4cb2094fde5cf6"} Feb 28 13:30:33 crc kubenswrapper[4897]: I0228 13:30:33.734626 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerStarted","Data":"ba683f1199708260a29f4bdafd88105c75a046d1fe9faa93c033d9e42ddff022"} Feb 28 13:30:33 crc kubenswrapper[4897]: I0228 13:30:33.734655 4897 scope.go:117] "RemoveContainer" containerID="f290e5ce6a8f9eb0ed11d10d65558a545341abdde26a9a87dc391672358e93e3" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.125893 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-w78wk"] Feb 28 13:30:38 crc kubenswrapper[4897]: E0228 13:30:38.126790 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10011d40-3da8-4a8f-b650-17d5bcbd7f8a" containerName="util" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.126807 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="10011d40-3da8-4a8f-b650-17d5bcbd7f8a" containerName="util" Feb 28 13:30:38 crc kubenswrapper[4897]: E0228 13:30:38.126823 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10011d40-3da8-4a8f-b650-17d5bcbd7f8a" containerName="pull" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.126830 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="10011d40-3da8-4a8f-b650-17d5bcbd7f8a" containerName="pull" Feb 28 13:30:38 crc kubenswrapper[4897]: E0228 13:30:38.126855 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10011d40-3da8-4a8f-b650-17d5bcbd7f8a" containerName="extract" Feb 28 13:30:38 crc 
kubenswrapper[4897]: I0228 13:30:38.126862 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="10011d40-3da8-4a8f-b650-17d5bcbd7f8a" containerName="extract" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.126976 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="10011d40-3da8-4a8f-b650-17d5bcbd7f8a" containerName="extract" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.127492 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-w78wk" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.131916 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.132181 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.132420 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-ljhf7" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.156888 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-w78wk"] Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.160871 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wwvr\" (UniqueName: \"kubernetes.io/projected/13c34d90-e126-4392-9f0d-31436773d681-kube-api-access-2wwvr\") pod \"obo-prometheus-operator-68bc856cb9-w78wk\" (UID: \"13c34d90-e126-4392-9f0d-31436773d681\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-w78wk" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.244639 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-767759c544-pwwvk"] Feb 28 13:30:38 crc kubenswrapper[4897]: 
I0228 13:30:38.245292 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-767759c544-pwwvk" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.247050 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-rlfkr" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.249605 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.256424 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-767759c544-sq8hd"] Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.257110 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-767759c544-sq8hd" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.261349 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-767759c544-pwwvk"] Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.261700 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wwvr\" (UniqueName: \"kubernetes.io/projected/13c34d90-e126-4392-9f0d-31436773d681-kube-api-access-2wwvr\") pod \"obo-prometheus-operator-68bc856cb9-w78wk\" (UID: \"13c34d90-e126-4392-9f0d-31436773d681\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-w78wk" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.279763 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wwvr\" (UniqueName: \"kubernetes.io/projected/13c34d90-e126-4392-9f0d-31436773d681-kube-api-access-2wwvr\") pod \"obo-prometheus-operator-68bc856cb9-w78wk\" (UID: 
\"13c34d90-e126-4392-9f0d-31436773d681\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-w78wk" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.311136 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-767759c544-sq8hd"] Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.363240 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c28960c4-dba8-4bc2-8695-13bc86523823-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-767759c544-sq8hd\" (UID: \"c28960c4-dba8-4bc2-8695-13bc86523823\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-767759c544-sq8hd" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.363290 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c28960c4-dba8-4bc2-8695-13bc86523823-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-767759c544-sq8hd\" (UID: \"c28960c4-dba8-4bc2-8695-13bc86523823\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-767759c544-sq8hd" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.363360 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/77de0da5-c400-4927-bd0f-15d2ba642291-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-767759c544-pwwvk\" (UID: \"77de0da5-c400-4927-bd0f-15d2ba642291\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-767759c544-pwwvk" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.363424 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/77de0da5-c400-4927-bd0f-15d2ba642291-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-767759c544-pwwvk\" (UID: \"77de0da5-c400-4927-bd0f-15d2ba642291\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-767759c544-pwwvk" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.449227 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-qkkz2"] Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.450059 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-qkkz2" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.452464 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.455572 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-w78wk" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.457180 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-2hxxf" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.464614 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/77de0da5-c400-4927-bd0f-15d2ba642291-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-767759c544-pwwvk\" (UID: \"77de0da5-c400-4927-bd0f-15d2ba642291\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-767759c544-pwwvk" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.464694 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c28960c4-dba8-4bc2-8695-13bc86523823-apiservice-cert\") pod 
\"obo-prometheus-operator-admission-webhook-767759c544-sq8hd\" (UID: \"c28960c4-dba8-4bc2-8695-13bc86523823\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-767759c544-sq8hd" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.464723 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c28960c4-dba8-4bc2-8695-13bc86523823-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-767759c544-sq8hd\" (UID: \"c28960c4-dba8-4bc2-8695-13bc86523823\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-767759c544-sq8hd" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.464765 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/77de0da5-c400-4927-bd0f-15d2ba642291-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-767759c544-pwwvk\" (UID: \"77de0da5-c400-4927-bd0f-15d2ba642291\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-767759c544-pwwvk" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.471935 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c28960c4-dba8-4bc2-8695-13bc86523823-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-767759c544-sq8hd\" (UID: \"c28960c4-dba8-4bc2-8695-13bc86523823\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-767759c544-sq8hd" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.474015 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c28960c4-dba8-4bc2-8695-13bc86523823-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-767759c544-sq8hd\" (UID: \"c28960c4-dba8-4bc2-8695-13bc86523823\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-767759c544-sq8hd" Feb 
28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.474518 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/77de0da5-c400-4927-bd0f-15d2ba642291-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-767759c544-pwwvk\" (UID: \"77de0da5-c400-4927-bd0f-15d2ba642291\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-767759c544-pwwvk" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.474655 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/77de0da5-c400-4927-bd0f-15d2ba642291-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-767759c544-pwwvk\" (UID: \"77de0da5-c400-4927-bd0f-15d2ba642291\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-767759c544-pwwvk" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.480971 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-qkkz2"] Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.558086 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-767759c544-pwwvk" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.561006 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-tr862"] Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.561705 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tr862" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.565832 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b1a1168a-8c63-4e9c-aefc-732c90395b55-observability-operator-tls\") pod \"observability-operator-59bdc8b94-qkkz2\" (UID: \"b1a1168a-8c63-4e9c-aefc-732c90395b55\") " pod="openshift-operators/observability-operator-59bdc8b94-qkkz2" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.565936 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8xd2\" (UniqueName: \"kubernetes.io/projected/b1a1168a-8c63-4e9c-aefc-732c90395b55-kube-api-access-c8xd2\") pod \"observability-operator-59bdc8b94-qkkz2\" (UID: \"b1a1168a-8c63-4e9c-aefc-732c90395b55\") " pod="openshift-operators/observability-operator-59bdc8b94-qkkz2" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.567701 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-5zscl" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.568487 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-767759c544-sq8hd" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.578252 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-tr862"] Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.634177 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-85zxz" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.634234 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-85zxz" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.668357 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/799fd3ea-6ae8-4568-a69b-3e8c2a706b76-openshift-service-ca\") pod \"perses-operator-5bf474d74f-tr862\" (UID: \"799fd3ea-6ae8-4568-a69b-3e8c2a706b76\") " pod="openshift-operators/perses-operator-5bf474d74f-tr862" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.668396 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8xd2\" (UniqueName: \"kubernetes.io/projected/b1a1168a-8c63-4e9c-aefc-732c90395b55-kube-api-access-c8xd2\") pod \"observability-operator-59bdc8b94-qkkz2\" (UID: \"b1a1168a-8c63-4e9c-aefc-732c90395b55\") " pod="openshift-operators/observability-operator-59bdc8b94-qkkz2" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.668436 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b1a1168a-8c63-4e9c-aefc-732c90395b55-observability-operator-tls\") pod \"observability-operator-59bdc8b94-qkkz2\" (UID: \"b1a1168a-8c63-4e9c-aefc-732c90395b55\") " pod="openshift-operators/observability-operator-59bdc8b94-qkkz2" Feb 28 13:30:38 crc 
kubenswrapper[4897]: I0228 13:30:38.668472 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4pts\" (UniqueName: \"kubernetes.io/projected/799fd3ea-6ae8-4568-a69b-3e8c2a706b76-kube-api-access-t4pts\") pod \"perses-operator-5bf474d74f-tr862\" (UID: \"799fd3ea-6ae8-4568-a69b-3e8c2a706b76\") " pod="openshift-operators/perses-operator-5bf474d74f-tr862" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.679120 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b1a1168a-8c63-4e9c-aefc-732c90395b55-observability-operator-tls\") pod \"observability-operator-59bdc8b94-qkkz2\" (UID: \"b1a1168a-8c63-4e9c-aefc-732c90395b55\") " pod="openshift-operators/observability-operator-59bdc8b94-qkkz2" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.700575 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8xd2\" (UniqueName: \"kubernetes.io/projected/b1a1168a-8c63-4e9c-aefc-732c90395b55-kube-api-access-c8xd2\") pod \"observability-operator-59bdc8b94-qkkz2\" (UID: \"b1a1168a-8c63-4e9c-aefc-732c90395b55\") " pod="openshift-operators/observability-operator-59bdc8b94-qkkz2" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.770600 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-qkkz2" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.770929 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/799fd3ea-6ae8-4568-a69b-3e8c2a706b76-openshift-service-ca\") pod \"perses-operator-5bf474d74f-tr862\" (UID: \"799fd3ea-6ae8-4568-a69b-3e8c2a706b76\") " pod="openshift-operators/perses-operator-5bf474d74f-tr862" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.771021 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4pts\" (UniqueName: \"kubernetes.io/projected/799fd3ea-6ae8-4568-a69b-3e8c2a706b76-kube-api-access-t4pts\") pod \"perses-operator-5bf474d74f-tr862\" (UID: \"799fd3ea-6ae8-4568-a69b-3e8c2a706b76\") " pod="openshift-operators/perses-operator-5bf474d74f-tr862" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.772188 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/799fd3ea-6ae8-4568-a69b-3e8c2a706b76-openshift-service-ca\") pod \"perses-operator-5bf474d74f-tr862\" (UID: \"799fd3ea-6ae8-4568-a69b-3e8c2a706b76\") " pod="openshift-operators/perses-operator-5bf474d74f-tr862" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.802011 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4pts\" (UniqueName: \"kubernetes.io/projected/799fd3ea-6ae8-4568-a69b-3e8c2a706b76-kube-api-access-t4pts\") pod \"perses-operator-5bf474d74f-tr862\" (UID: \"799fd3ea-6ae8-4568-a69b-3e8c2a706b76\") " pod="openshift-operators/perses-operator-5bf474d74f-tr862" Feb 28 13:30:38 crc kubenswrapper[4897]: I0228 13:30:38.951340 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tr862" Feb 28 13:30:39 crc kubenswrapper[4897]: I0228 13:30:39.052074 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-w78wk"] Feb 28 13:30:39 crc kubenswrapper[4897]: W0228 13:30:39.067520 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13c34d90_e126_4392_9f0d_31436773d681.slice/crio-a1e172f46b6396e288527fec1b01ef101005f8513a253429dc65e8612b09c699 WatchSource:0}: Error finding container a1e172f46b6396e288527fec1b01ef101005f8513a253429dc65e8612b09c699: Status 404 returned error can't find the container with id a1e172f46b6396e288527fec1b01ef101005f8513a253429dc65e8612b09c699 Feb 28 13:30:39 crc kubenswrapper[4897]: I0228 13:30:39.069995 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-767759c544-sq8hd"] Feb 28 13:30:39 crc kubenswrapper[4897]: I0228 13:30:39.102762 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-qkkz2"] Feb 28 13:30:39 crc kubenswrapper[4897]: I0228 13:30:39.143428 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-767759c544-pwwvk"] Feb 28 13:30:39 crc kubenswrapper[4897]: I0228 13:30:39.218955 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-tr862"] Feb 28 13:30:39 crc kubenswrapper[4897]: W0228 13:30:39.220476 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod799fd3ea_6ae8_4568_a69b_3e8c2a706b76.slice/crio-956a795619d20f89bfba4dc71842ed473a066da81a9cc0e3b893f7540e458c1a WatchSource:0}: Error finding container 956a795619d20f89bfba4dc71842ed473a066da81a9cc0e3b893f7540e458c1a: 
Status 404 returned error can't find the container with id 956a795619d20f89bfba4dc71842ed473a066da81a9cc0e3b893f7540e458c1a Feb 28 13:30:39 crc kubenswrapper[4897]: I0228 13:30:39.771484 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-85zxz" podUID="ae0da75b-2acb-4d09-8668-e25d86bfa55e" containerName="registry-server" probeResult="failure" output=< Feb 28 13:30:39 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 28 13:30:39 crc kubenswrapper[4897]: > Feb 28 13:30:39 crc kubenswrapper[4897]: I0228 13:30:39.773197 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-tr862" event={"ID":"799fd3ea-6ae8-4568-a69b-3e8c2a706b76","Type":"ContainerStarted","Data":"956a795619d20f89bfba4dc71842ed473a066da81a9cc0e3b893f7540e458c1a"} Feb 28 13:30:39 crc kubenswrapper[4897]: I0228 13:30:39.774050 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-qkkz2" event={"ID":"b1a1168a-8c63-4e9c-aefc-732c90395b55","Type":"ContainerStarted","Data":"7a93cad71f655be1be9bac3178ac42bbd5568ac46a51d2ac89c989ef339f2c4b"} Feb 28 13:30:39 crc kubenswrapper[4897]: I0228 13:30:39.774904 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-767759c544-pwwvk" event={"ID":"77de0da5-c400-4927-bd0f-15d2ba642291","Type":"ContainerStarted","Data":"b25b301e86ded2fcd1b44c6f8c650c0663a47c91ac34dc6ae4b98e4aa7a82200"} Feb 28 13:30:39 crc kubenswrapper[4897]: I0228 13:30:39.775733 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-w78wk" event={"ID":"13c34d90-e126-4392-9f0d-31436773d681","Type":"ContainerStarted","Data":"a1e172f46b6396e288527fec1b01ef101005f8513a253429dc65e8612b09c699"} Feb 28 13:30:39 crc kubenswrapper[4897]: I0228 13:30:39.777523 4897 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-767759c544-sq8hd" event={"ID":"c28960c4-dba8-4bc2-8695-13bc86523823","Type":"ContainerStarted","Data":"ab78b71536a1b7df474fa6e60872e21b4f34f9b37c1b1f41c164370aeab4aa26"} Feb 28 13:30:48 crc kubenswrapper[4897]: I0228 13:30:48.701948 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-85zxz" Feb 28 13:30:48 crc kubenswrapper[4897]: I0228 13:30:48.753372 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-85zxz" Feb 28 13:30:49 crc kubenswrapper[4897]: I0228 13:30:49.446962 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-85zxz"] Feb 28 13:30:49 crc kubenswrapper[4897]: I0228 13:30:49.848003 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-85zxz" podUID="ae0da75b-2acb-4d09-8668-e25d86bfa55e" containerName="registry-server" containerID="cri-o://32621c7dbb39c8f9e7436e83234049ba6fad1057ace25a2e860832537e955a6e" gracePeriod=2 Feb 28 13:30:50 crc kubenswrapper[4897]: I0228 13:30:50.441489 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-85zxz" Feb 28 13:30:50 crc kubenswrapper[4897]: I0228 13:30:50.589454 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae0da75b-2acb-4d09-8668-e25d86bfa55e-catalog-content\") pod \"ae0da75b-2acb-4d09-8668-e25d86bfa55e\" (UID: \"ae0da75b-2acb-4d09-8668-e25d86bfa55e\") " Feb 28 13:30:50 crc kubenswrapper[4897]: I0228 13:30:50.589611 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae0da75b-2acb-4d09-8668-e25d86bfa55e-utilities\") pod \"ae0da75b-2acb-4d09-8668-e25d86bfa55e\" (UID: \"ae0da75b-2acb-4d09-8668-e25d86bfa55e\") " Feb 28 13:30:50 crc kubenswrapper[4897]: I0228 13:30:50.589651 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mjdb\" (UniqueName: \"kubernetes.io/projected/ae0da75b-2acb-4d09-8668-e25d86bfa55e-kube-api-access-8mjdb\") pod \"ae0da75b-2acb-4d09-8668-e25d86bfa55e\" (UID: \"ae0da75b-2acb-4d09-8668-e25d86bfa55e\") " Feb 28 13:30:50 crc kubenswrapper[4897]: I0228 13:30:50.590711 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae0da75b-2acb-4d09-8668-e25d86bfa55e-utilities" (OuterVolumeSpecName: "utilities") pod "ae0da75b-2acb-4d09-8668-e25d86bfa55e" (UID: "ae0da75b-2acb-4d09-8668-e25d86bfa55e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:30:50 crc kubenswrapper[4897]: I0228 13:30:50.594418 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae0da75b-2acb-4d09-8668-e25d86bfa55e-kube-api-access-8mjdb" (OuterVolumeSpecName: "kube-api-access-8mjdb") pod "ae0da75b-2acb-4d09-8668-e25d86bfa55e" (UID: "ae0da75b-2acb-4d09-8668-e25d86bfa55e"). InnerVolumeSpecName "kube-api-access-8mjdb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:30:50 crc kubenswrapper[4897]: I0228 13:30:50.691511 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae0da75b-2acb-4d09-8668-e25d86bfa55e-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 13:30:50 crc kubenswrapper[4897]: I0228 13:30:50.691541 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mjdb\" (UniqueName: \"kubernetes.io/projected/ae0da75b-2acb-4d09-8668-e25d86bfa55e-kube-api-access-8mjdb\") on node \"crc\" DevicePath \"\"" Feb 28 13:30:50 crc kubenswrapper[4897]: I0228 13:30:50.761426 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae0da75b-2acb-4d09-8668-e25d86bfa55e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ae0da75b-2acb-4d09-8668-e25d86bfa55e" (UID: "ae0da75b-2acb-4d09-8668-e25d86bfa55e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:30:50 crc kubenswrapper[4897]: I0228 13:30:50.792511 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae0da75b-2acb-4d09-8668-e25d86bfa55e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 13:30:50 crc kubenswrapper[4897]: I0228 13:30:50.855211 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-767759c544-pwwvk" event={"ID":"77de0da5-c400-4927-bd0f-15d2ba642291","Type":"ContainerStarted","Data":"a24881e3fab27152312f3a1c48f98bb0c079ae19196120fd943839ee016def84"} Feb 28 13:30:50 crc kubenswrapper[4897]: I0228 13:30:50.858038 4897 generic.go:334] "Generic (PLEG): container finished" podID="ae0da75b-2acb-4d09-8668-e25d86bfa55e" containerID="32621c7dbb39c8f9e7436e83234049ba6fad1057ace25a2e860832537e955a6e" exitCode=0 Feb 28 13:30:50 crc kubenswrapper[4897]: I0228 13:30:50.858153 4897 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-85zxz" Feb 28 13:30:50 crc kubenswrapper[4897]: I0228 13:30:50.858409 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-85zxz" event={"ID":"ae0da75b-2acb-4d09-8668-e25d86bfa55e","Type":"ContainerDied","Data":"32621c7dbb39c8f9e7436e83234049ba6fad1057ace25a2e860832537e955a6e"} Feb 28 13:30:50 crc kubenswrapper[4897]: I0228 13:30:50.858564 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-85zxz" event={"ID":"ae0da75b-2acb-4d09-8668-e25d86bfa55e","Type":"ContainerDied","Data":"689c6d78e010e6c9bff0c5cbeebb1e3bbfa94fa4010572eb6129ac37f510f2d2"} Feb 28 13:30:50 crc kubenswrapper[4897]: I0228 13:30:50.858585 4897 scope.go:117] "RemoveContainer" containerID="32621c7dbb39c8f9e7436e83234049ba6fad1057ace25a2e860832537e955a6e" Feb 28 13:30:50 crc kubenswrapper[4897]: I0228 13:30:50.859967 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-w78wk" event={"ID":"13c34d90-e126-4392-9f0d-31436773d681","Type":"ContainerStarted","Data":"94bf66c05a3acf5d2f082f4e8ed11eba5cf9435463814420f4f476df99edece3"} Feb 28 13:30:50 crc kubenswrapper[4897]: I0228 13:30:50.861898 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-767759c544-sq8hd" event={"ID":"c28960c4-dba8-4bc2-8695-13bc86523823","Type":"ContainerStarted","Data":"c5032327f3aeeedb7dd58775be1d546ebb765be57bae848ed560c28f8c45fb5d"} Feb 28 13:30:50 crc kubenswrapper[4897]: I0228 13:30:50.880152 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-tr862" event={"ID":"799fd3ea-6ae8-4568-a69b-3e8c2a706b76","Type":"ContainerStarted","Data":"e2e6d237c704184db5cc40d077d64e542f19d53b72cc53fe34a83bc8de3b6086"} Feb 28 13:30:50 crc 
kubenswrapper[4897]: I0228 13:30:50.880505 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-tr862" Feb 28 13:30:50 crc kubenswrapper[4897]: I0228 13:30:50.881458 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-767759c544-pwwvk" podStartSLOduration=1.850239161 podStartE2EDuration="12.881441094s" podCreationTimestamp="2026-02-28 13:30:38 +0000 UTC" firstStartedPulling="2026-02-28 13:30:39.157026426 +0000 UTC m=+853.399347083" lastFinishedPulling="2026-02-28 13:30:50.188228359 +0000 UTC m=+864.430549016" observedRunningTime="2026-02-28 13:30:50.876438426 +0000 UTC m=+865.118759093" watchObservedRunningTime="2026-02-28 13:30:50.881441094 +0000 UTC m=+865.123761751" Feb 28 13:30:50 crc kubenswrapper[4897]: I0228 13:30:50.881678 4897 scope.go:117] "RemoveContainer" containerID="4cf0e4b351f0465d88f5cba2e2b60c7d49048641546d7a7dee787854bf141838" Feb 28 13:30:50 crc kubenswrapper[4897]: I0228 13:30:50.891512 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-qkkz2" event={"ID":"b1a1168a-8c63-4e9c-aefc-732c90395b55","Type":"ContainerStarted","Data":"6e2e45fca88ce74361051502c00c313a645f254958fd06657d5837df2c8fd406"} Feb 28 13:30:50 crc kubenswrapper[4897]: I0228 13:30:50.892485 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-qkkz2" Feb 28 13:30:50 crc kubenswrapper[4897]: I0228 13:30:50.907655 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-qkkz2" Feb 28 13:30:50 crc kubenswrapper[4897]: I0228 13:30:50.913154 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-767759c544-sq8hd" podStartSLOduration=1.81528978 
podStartE2EDuration="12.913138546s" podCreationTimestamp="2026-02-28 13:30:38 +0000 UTC" firstStartedPulling="2026-02-28 13:30:39.089421696 +0000 UTC m=+853.331742353" lastFinishedPulling="2026-02-28 13:30:50.187270462 +0000 UTC m=+864.429591119" observedRunningTime="2026-02-28 13:30:50.907970464 +0000 UTC m=+865.150291121" watchObservedRunningTime="2026-02-28 13:30:50.913138546 +0000 UTC m=+865.155459203" Feb 28 13:30:50 crc kubenswrapper[4897]: I0228 13:30:50.919633 4897 scope.go:117] "RemoveContainer" containerID="17a088d342d7ff7dc2e6e3177ebda496d0be357bbebbb1b443e868aa9742b695" Feb 28 13:30:50 crc kubenswrapper[4897]: I0228 13:30:50.949816 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-w78wk" podStartSLOduration=1.806509158 podStartE2EDuration="12.949794314s" podCreationTimestamp="2026-02-28 13:30:38 +0000 UTC" firstStartedPulling="2026-02-28 13:30:39.070111715 +0000 UTC m=+853.312432372" lastFinishedPulling="2026-02-28 13:30:50.213396871 +0000 UTC m=+864.455717528" observedRunningTime="2026-02-28 13:30:50.937708511 +0000 UTC m=+865.180029168" watchObservedRunningTime="2026-02-28 13:30:50.949794314 +0000 UTC m=+865.192114971" Feb 28 13:30:50 crc kubenswrapper[4897]: I0228 13:30:50.958465 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-85zxz"] Feb 28 13:30:50 crc kubenswrapper[4897]: I0228 13:30:50.969008 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-85zxz"] Feb 28 13:30:50 crc kubenswrapper[4897]: I0228 13:30:50.969516 4897 scope.go:117] "RemoveContainer" containerID="32621c7dbb39c8f9e7436e83234049ba6fad1057ace25a2e860832537e955a6e" Feb 28 13:30:50 crc kubenswrapper[4897]: E0228 13:30:50.969989 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"32621c7dbb39c8f9e7436e83234049ba6fad1057ace25a2e860832537e955a6e\": container with ID starting with 32621c7dbb39c8f9e7436e83234049ba6fad1057ace25a2e860832537e955a6e not found: ID does not exist" containerID="32621c7dbb39c8f9e7436e83234049ba6fad1057ace25a2e860832537e955a6e" Feb 28 13:30:50 crc kubenswrapper[4897]: I0228 13:30:50.970023 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32621c7dbb39c8f9e7436e83234049ba6fad1057ace25a2e860832537e955a6e"} err="failed to get container status \"32621c7dbb39c8f9e7436e83234049ba6fad1057ace25a2e860832537e955a6e\": rpc error: code = NotFound desc = could not find container \"32621c7dbb39c8f9e7436e83234049ba6fad1057ace25a2e860832537e955a6e\": container with ID starting with 32621c7dbb39c8f9e7436e83234049ba6fad1057ace25a2e860832537e955a6e not found: ID does not exist" Feb 28 13:30:50 crc kubenswrapper[4897]: I0228 13:30:50.970072 4897 scope.go:117] "RemoveContainer" containerID="4cf0e4b351f0465d88f5cba2e2b60c7d49048641546d7a7dee787854bf141838" Feb 28 13:30:50 crc kubenswrapper[4897]: E0228 13:30:50.970388 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cf0e4b351f0465d88f5cba2e2b60c7d49048641546d7a7dee787854bf141838\": container with ID starting with 4cf0e4b351f0465d88f5cba2e2b60c7d49048641546d7a7dee787854bf141838 not found: ID does not exist" containerID="4cf0e4b351f0465d88f5cba2e2b60c7d49048641546d7a7dee787854bf141838" Feb 28 13:30:50 crc kubenswrapper[4897]: I0228 13:30:50.970409 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cf0e4b351f0465d88f5cba2e2b60c7d49048641546d7a7dee787854bf141838"} err="failed to get container status \"4cf0e4b351f0465d88f5cba2e2b60c7d49048641546d7a7dee787854bf141838\": rpc error: code = NotFound desc = could not find container \"4cf0e4b351f0465d88f5cba2e2b60c7d49048641546d7a7dee787854bf141838\": container with ID 
starting with 4cf0e4b351f0465d88f5cba2e2b60c7d49048641546d7a7dee787854bf141838 not found: ID does not exist" Feb 28 13:30:50 crc kubenswrapper[4897]: I0228 13:30:50.970426 4897 scope.go:117] "RemoveContainer" containerID="17a088d342d7ff7dc2e6e3177ebda496d0be357bbebbb1b443e868aa9742b695" Feb 28 13:30:50 crc kubenswrapper[4897]: E0228 13:30:50.970626 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17a088d342d7ff7dc2e6e3177ebda496d0be357bbebbb1b443e868aa9742b695\": container with ID starting with 17a088d342d7ff7dc2e6e3177ebda496d0be357bbebbb1b443e868aa9742b695 not found: ID does not exist" containerID="17a088d342d7ff7dc2e6e3177ebda496d0be357bbebbb1b443e868aa9742b695" Feb 28 13:30:50 crc kubenswrapper[4897]: I0228 13:30:50.970644 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17a088d342d7ff7dc2e6e3177ebda496d0be357bbebbb1b443e868aa9742b695"} err="failed to get container status \"17a088d342d7ff7dc2e6e3177ebda496d0be357bbebbb1b443e868aa9742b695\": rpc error: code = NotFound desc = could not find container \"17a088d342d7ff7dc2e6e3177ebda496d0be357bbebbb1b443e868aa9742b695\": container with ID starting with 17a088d342d7ff7dc2e6e3177ebda496d0be357bbebbb1b443e868aa9742b695 not found: ID does not exist" Feb 28 13:30:51 crc kubenswrapper[4897]: I0228 13:30:51.014249 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-qkkz2" podStartSLOduration=1.878726834 podStartE2EDuration="13.014232766s" podCreationTimestamp="2026-02-28 13:30:38 +0000 UTC" firstStartedPulling="2026-02-28 13:30:39.115083462 +0000 UTC m=+853.357404109" lastFinishedPulling="2026-02-28 13:30:50.250589374 +0000 UTC m=+864.492910041" observedRunningTime="2026-02-28 13:30:51.010699219 +0000 UTC m=+865.253019876" watchObservedRunningTime="2026-02-28 13:30:51.014232766 +0000 UTC m=+865.256553423" Feb 28 
13:30:51 crc kubenswrapper[4897]: I0228 13:30:51.040321 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-tr862" podStartSLOduration=2.063328421 podStartE2EDuration="13.040291583s" podCreationTimestamp="2026-02-28 13:30:38 +0000 UTC" firstStartedPulling="2026-02-28 13:30:39.223517964 +0000 UTC m=+853.465838621" lastFinishedPulling="2026-02-28 13:30:50.200481116 +0000 UTC m=+864.442801783" observedRunningTime="2026-02-28 13:30:51.036151889 +0000 UTC m=+865.278472536" watchObservedRunningTime="2026-02-28 13:30:51.040291583 +0000 UTC m=+865.282612240" Feb 28 13:30:52 crc kubenswrapper[4897]: I0228 13:30:52.463981 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae0da75b-2acb-4d09-8668-e25d86bfa55e" path="/var/lib/kubelet/pods/ae0da75b-2acb-4d09-8668-e25d86bfa55e/volumes" Feb 28 13:30:58 crc kubenswrapper[4897]: I0228 13:30:58.955570 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-tr862" Feb 28 13:31:06 crc kubenswrapper[4897]: I0228 13:31:06.879298 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rnn6v"] Feb 28 13:31:06 crc kubenswrapper[4897]: E0228 13:31:06.880028 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae0da75b-2acb-4d09-8668-e25d86bfa55e" containerName="registry-server" Feb 28 13:31:06 crc kubenswrapper[4897]: I0228 13:31:06.880042 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae0da75b-2acb-4d09-8668-e25d86bfa55e" containerName="registry-server" Feb 28 13:31:06 crc kubenswrapper[4897]: E0228 13:31:06.880055 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae0da75b-2acb-4d09-8668-e25d86bfa55e" containerName="extract-utilities" Feb 28 13:31:06 crc kubenswrapper[4897]: I0228 13:31:06.880064 4897 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ae0da75b-2acb-4d09-8668-e25d86bfa55e" containerName="extract-utilities" Feb 28 13:31:06 crc kubenswrapper[4897]: E0228 13:31:06.880082 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae0da75b-2acb-4d09-8668-e25d86bfa55e" containerName="extract-content" Feb 28 13:31:06 crc kubenswrapper[4897]: I0228 13:31:06.880091 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae0da75b-2acb-4d09-8668-e25d86bfa55e" containerName="extract-content" Feb 28 13:31:06 crc kubenswrapper[4897]: I0228 13:31:06.880232 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae0da75b-2acb-4d09-8668-e25d86bfa55e" containerName="registry-server" Feb 28 13:31:06 crc kubenswrapper[4897]: I0228 13:31:06.881242 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rnn6v" Feb 28 13:31:06 crc kubenswrapper[4897]: I0228 13:31:06.891408 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnn6v"] Feb 28 13:31:07 crc kubenswrapper[4897]: I0228 13:31:07.031364 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faf31eda-0152-4d44-88fb-da9b4bccdb08-catalog-content\") pod \"redhat-marketplace-rnn6v\" (UID: \"faf31eda-0152-4d44-88fb-da9b4bccdb08\") " pod="openshift-marketplace/redhat-marketplace-rnn6v" Feb 28 13:31:07 crc kubenswrapper[4897]: I0228 13:31:07.031427 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6wd6\" (UniqueName: \"kubernetes.io/projected/faf31eda-0152-4d44-88fb-da9b4bccdb08-kube-api-access-m6wd6\") pod \"redhat-marketplace-rnn6v\" (UID: \"faf31eda-0152-4d44-88fb-da9b4bccdb08\") " pod="openshift-marketplace/redhat-marketplace-rnn6v" Feb 28 13:31:07 crc kubenswrapper[4897]: I0228 13:31:07.031462 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faf31eda-0152-4d44-88fb-da9b4bccdb08-utilities\") pod \"redhat-marketplace-rnn6v\" (UID: \"faf31eda-0152-4d44-88fb-da9b4bccdb08\") " pod="openshift-marketplace/redhat-marketplace-rnn6v" Feb 28 13:31:07 crc kubenswrapper[4897]: I0228 13:31:07.133986 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faf31eda-0152-4d44-88fb-da9b4bccdb08-catalog-content\") pod \"redhat-marketplace-rnn6v\" (UID: \"faf31eda-0152-4d44-88fb-da9b4bccdb08\") " pod="openshift-marketplace/redhat-marketplace-rnn6v" Feb 28 13:31:07 crc kubenswrapper[4897]: I0228 13:31:07.134103 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6wd6\" (UniqueName: \"kubernetes.io/projected/faf31eda-0152-4d44-88fb-da9b4bccdb08-kube-api-access-m6wd6\") pod \"redhat-marketplace-rnn6v\" (UID: \"faf31eda-0152-4d44-88fb-da9b4bccdb08\") " pod="openshift-marketplace/redhat-marketplace-rnn6v" Feb 28 13:31:07 crc kubenswrapper[4897]: I0228 13:31:07.134179 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faf31eda-0152-4d44-88fb-da9b4bccdb08-utilities\") pod \"redhat-marketplace-rnn6v\" (UID: \"faf31eda-0152-4d44-88fb-da9b4bccdb08\") " pod="openshift-marketplace/redhat-marketplace-rnn6v" Feb 28 13:31:07 crc kubenswrapper[4897]: I0228 13:31:07.134789 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faf31eda-0152-4d44-88fb-da9b4bccdb08-catalog-content\") pod \"redhat-marketplace-rnn6v\" (UID: \"faf31eda-0152-4d44-88fb-da9b4bccdb08\") " pod="openshift-marketplace/redhat-marketplace-rnn6v" Feb 28 13:31:07 crc kubenswrapper[4897]: I0228 13:31:07.135030 4897 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faf31eda-0152-4d44-88fb-da9b4bccdb08-utilities\") pod \"redhat-marketplace-rnn6v\" (UID: \"faf31eda-0152-4d44-88fb-da9b4bccdb08\") " pod="openshift-marketplace/redhat-marketplace-rnn6v" Feb 28 13:31:07 crc kubenswrapper[4897]: I0228 13:31:07.159539 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6wd6\" (UniqueName: \"kubernetes.io/projected/faf31eda-0152-4d44-88fb-da9b4bccdb08-kube-api-access-m6wd6\") pod \"redhat-marketplace-rnn6v\" (UID: \"faf31eda-0152-4d44-88fb-da9b4bccdb08\") " pod="openshift-marketplace/redhat-marketplace-rnn6v" Feb 28 13:31:07 crc kubenswrapper[4897]: I0228 13:31:07.215926 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rnn6v" Feb 28 13:31:07 crc kubenswrapper[4897]: I0228 13:31:07.464525 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnn6v"] Feb 28 13:31:07 crc kubenswrapper[4897]: W0228 13:31:07.477655 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfaf31eda_0152_4d44_88fb_da9b4bccdb08.slice/crio-9311be56258ee0ef4fa63dd504c84699e85f39d23e4a178c9a67469ab1a7dbaf WatchSource:0}: Error finding container 9311be56258ee0ef4fa63dd504c84699e85f39d23e4a178c9a67469ab1a7dbaf: Status 404 returned error can't find the container with id 9311be56258ee0ef4fa63dd504c84699e85f39d23e4a178c9a67469ab1a7dbaf Feb 28 13:31:07 crc kubenswrapper[4897]: I0228 13:31:07.991867 4897 generic.go:334] "Generic (PLEG): container finished" podID="faf31eda-0152-4d44-88fb-da9b4bccdb08" containerID="df948013fa238d5b14f914fe5b142cdb26eeedd044616c3cccef22c98e9465ea" exitCode=0 Feb 28 13:31:07 crc kubenswrapper[4897]: I0228 13:31:07.991953 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnn6v" 
event={"ID":"faf31eda-0152-4d44-88fb-da9b4bccdb08","Type":"ContainerDied","Data":"df948013fa238d5b14f914fe5b142cdb26eeedd044616c3cccef22c98e9465ea"} Feb 28 13:31:07 crc kubenswrapper[4897]: I0228 13:31:07.992371 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnn6v" event={"ID":"faf31eda-0152-4d44-88fb-da9b4bccdb08","Type":"ContainerStarted","Data":"9311be56258ee0ef4fa63dd504c84699e85f39d23e4a178c9a67469ab1a7dbaf"} Feb 28 13:31:08 crc kubenswrapper[4897]: I0228 13:31:07.996498 4897 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 13:31:09 crc kubenswrapper[4897]: I0228 13:31:09.000318 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnn6v" event={"ID":"faf31eda-0152-4d44-88fb-da9b4bccdb08","Type":"ContainerStarted","Data":"1e5ca036ae66058903606c3f4e5464f5415ae211545635bea543bfdefce497c6"} Feb 28 13:31:10 crc kubenswrapper[4897]: I0228 13:31:10.008718 4897 generic.go:334] "Generic (PLEG): container finished" podID="faf31eda-0152-4d44-88fb-da9b4bccdb08" containerID="1e5ca036ae66058903606c3f4e5464f5415ae211545635bea543bfdefce497c6" exitCode=0 Feb 28 13:31:10 crc kubenswrapper[4897]: I0228 13:31:10.008776 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnn6v" event={"ID":"faf31eda-0152-4d44-88fb-da9b4bccdb08","Type":"ContainerDied","Data":"1e5ca036ae66058903606c3f4e5464f5415ae211545635bea543bfdefce497c6"} Feb 28 13:31:11 crc kubenswrapper[4897]: I0228 13:31:11.017769 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnn6v" event={"ID":"faf31eda-0152-4d44-88fb-da9b4bccdb08","Type":"ContainerStarted","Data":"5e107cc508123adb95c13a496ba469e4518a32accd4952e6a25da87f5d50a0fd"} Feb 28 13:31:11 crc kubenswrapper[4897]: I0228 13:31:11.038529 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-rnn6v" podStartSLOduration=2.419660807 podStartE2EDuration="5.038505785s" podCreationTimestamp="2026-02-28 13:31:06 +0000 UTC" firstStartedPulling="2026-02-28 13:31:07.995388028 +0000 UTC m=+882.237708695" lastFinishedPulling="2026-02-28 13:31:10.614233016 +0000 UTC m=+884.856553673" observedRunningTime="2026-02-28 13:31:11.034589067 +0000 UTC m=+885.276909724" watchObservedRunningTime="2026-02-28 13:31:11.038505785 +0000 UTC m=+885.280826452" Feb 28 13:31:15 crc kubenswrapper[4897]: I0228 13:31:15.532136 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt"] Feb 28 13:31:15 crc kubenswrapper[4897]: I0228 13:31:15.534087 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt" Feb 28 13:31:15 crc kubenswrapper[4897]: I0228 13:31:15.536247 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 28 13:31:15 crc kubenswrapper[4897]: I0228 13:31:15.542954 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt\" (UID: \"fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt" Feb 28 13:31:15 crc kubenswrapper[4897]: I0228 13:31:15.543289 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flxjg\" (UniqueName: \"kubernetes.io/projected/fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641-kube-api-access-flxjg\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt\" (UID: \"fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641\") " 
pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt" Feb 28 13:31:15 crc kubenswrapper[4897]: I0228 13:31:15.543466 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt\" (UID: \"fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt" Feb 28 13:31:15 crc kubenswrapper[4897]: I0228 13:31:15.548471 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt"] Feb 28 13:31:15 crc kubenswrapper[4897]: I0228 13:31:15.644439 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt\" (UID: \"fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt" Feb 28 13:31:15 crc kubenswrapper[4897]: I0228 13:31:15.644557 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flxjg\" (UniqueName: \"kubernetes.io/projected/fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641-kube-api-access-flxjg\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt\" (UID: \"fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt" Feb 28 13:31:15 crc kubenswrapper[4897]: I0228 13:31:15.644617 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641-util\") pod 
\"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt\" (UID: \"fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt" Feb 28 13:31:15 crc kubenswrapper[4897]: I0228 13:31:15.645346 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt\" (UID: \"fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt" Feb 28 13:31:15 crc kubenswrapper[4897]: I0228 13:31:15.645437 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt\" (UID: \"fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt" Feb 28 13:31:15 crc kubenswrapper[4897]: I0228 13:31:15.682168 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flxjg\" (UniqueName: \"kubernetes.io/projected/fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641-kube-api-access-flxjg\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt\" (UID: \"fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt" Feb 28 13:31:15 crc kubenswrapper[4897]: I0228 13:31:15.864704 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt" Feb 28 13:31:16 crc kubenswrapper[4897]: I0228 13:31:16.119344 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt"] Feb 28 13:31:16 crc kubenswrapper[4897]: W0228 13:31:16.129753 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb9ce3a3_2d08_4df3_b0c5_246ef0bfc641.slice/crio-67ed74e0503792bdce97f71660023e64ce662c815625116f92227a939fbacff3 WatchSource:0}: Error finding container 67ed74e0503792bdce97f71660023e64ce662c815625116f92227a939fbacff3: Status 404 returned error can't find the container with id 67ed74e0503792bdce97f71660023e64ce662c815625116f92227a939fbacff3 Feb 28 13:31:17 crc kubenswrapper[4897]: I0228 13:31:17.060495 4897 generic.go:334] "Generic (PLEG): container finished" podID="fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641" containerID="7339b02904071b7c17d5c8a4e86bcdd8c2267cc4f98ffc6172e106bc38a2d58c" exitCode=0 Feb 28 13:31:17 crc kubenswrapper[4897]: I0228 13:31:17.060576 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt" event={"ID":"fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641","Type":"ContainerDied","Data":"7339b02904071b7c17d5c8a4e86bcdd8c2267cc4f98ffc6172e106bc38a2d58c"} Feb 28 13:31:17 crc kubenswrapper[4897]: I0228 13:31:17.060655 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt" event={"ID":"fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641","Type":"ContainerStarted","Data":"67ed74e0503792bdce97f71660023e64ce662c815625116f92227a939fbacff3"} Feb 28 13:31:17 crc kubenswrapper[4897]: I0228 13:31:17.216958 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-rnn6v" Feb 28 13:31:17 crc kubenswrapper[4897]: I0228 13:31:17.217085 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rnn6v" Feb 28 13:31:17 crc kubenswrapper[4897]: I0228 13:31:17.294281 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rnn6v" Feb 28 13:31:18 crc kubenswrapper[4897]: I0228 13:31:18.145145 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rnn6v" Feb 28 13:31:19 crc kubenswrapper[4897]: I0228 13:31:19.885178 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnn6v"] Feb 28 13:31:20 crc kubenswrapper[4897]: I0228 13:31:20.083040 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rnn6v" podUID="faf31eda-0152-4d44-88fb-da9b4bccdb08" containerName="registry-server" containerID="cri-o://5e107cc508123adb95c13a496ba469e4518a32accd4952e6a25da87f5d50a0fd" gracePeriod=2 Feb 28 13:31:21 crc kubenswrapper[4897]: I0228 13:31:21.090354 4897 generic.go:334] "Generic (PLEG): container finished" podID="faf31eda-0152-4d44-88fb-da9b4bccdb08" containerID="5e107cc508123adb95c13a496ba469e4518a32accd4952e6a25da87f5d50a0fd" exitCode=0 Feb 28 13:31:21 crc kubenswrapper[4897]: I0228 13:31:21.090418 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnn6v" event={"ID":"faf31eda-0152-4d44-88fb-da9b4bccdb08","Type":"ContainerDied","Data":"5e107cc508123adb95c13a496ba469e4518a32accd4952e6a25da87f5d50a0fd"} Feb 28 13:31:23 crc kubenswrapper[4897]: I0228 13:31:23.847084 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rnn6v" Feb 28 13:31:23 crc kubenswrapper[4897]: I0228 13:31:23.956570 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6wd6\" (UniqueName: \"kubernetes.io/projected/faf31eda-0152-4d44-88fb-da9b4bccdb08-kube-api-access-m6wd6\") pod \"faf31eda-0152-4d44-88fb-da9b4bccdb08\" (UID: \"faf31eda-0152-4d44-88fb-da9b4bccdb08\") " Feb 28 13:31:23 crc kubenswrapper[4897]: I0228 13:31:23.956702 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faf31eda-0152-4d44-88fb-da9b4bccdb08-catalog-content\") pod \"faf31eda-0152-4d44-88fb-da9b4bccdb08\" (UID: \"faf31eda-0152-4d44-88fb-da9b4bccdb08\") " Feb 28 13:31:23 crc kubenswrapper[4897]: I0228 13:31:23.956802 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faf31eda-0152-4d44-88fb-da9b4bccdb08-utilities\") pod \"faf31eda-0152-4d44-88fb-da9b4bccdb08\" (UID: \"faf31eda-0152-4d44-88fb-da9b4bccdb08\") " Feb 28 13:31:23 crc kubenswrapper[4897]: I0228 13:31:23.957687 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/faf31eda-0152-4d44-88fb-da9b4bccdb08-utilities" (OuterVolumeSpecName: "utilities") pod "faf31eda-0152-4d44-88fb-da9b4bccdb08" (UID: "faf31eda-0152-4d44-88fb-da9b4bccdb08"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:31:23 crc kubenswrapper[4897]: I0228 13:31:23.966686 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faf31eda-0152-4d44-88fb-da9b4bccdb08-kube-api-access-m6wd6" (OuterVolumeSpecName: "kube-api-access-m6wd6") pod "faf31eda-0152-4d44-88fb-da9b4bccdb08" (UID: "faf31eda-0152-4d44-88fb-da9b4bccdb08"). InnerVolumeSpecName "kube-api-access-m6wd6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:31:24 crc kubenswrapper[4897]: I0228 13:31:24.005772 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/faf31eda-0152-4d44-88fb-da9b4bccdb08-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "faf31eda-0152-4d44-88fb-da9b4bccdb08" (UID: "faf31eda-0152-4d44-88fb-da9b4bccdb08"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:31:24 crc kubenswrapper[4897]: I0228 13:31:24.057955 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faf31eda-0152-4d44-88fb-da9b4bccdb08-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 13:31:24 crc kubenswrapper[4897]: I0228 13:31:24.057990 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6wd6\" (UniqueName: \"kubernetes.io/projected/faf31eda-0152-4d44-88fb-da9b4bccdb08-kube-api-access-m6wd6\") on node \"crc\" DevicePath \"\"" Feb 28 13:31:24 crc kubenswrapper[4897]: I0228 13:31:24.058017 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faf31eda-0152-4d44-88fb-da9b4bccdb08-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 13:31:24 crc kubenswrapper[4897]: I0228 13:31:24.112282 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnn6v" event={"ID":"faf31eda-0152-4d44-88fb-da9b4bccdb08","Type":"ContainerDied","Data":"9311be56258ee0ef4fa63dd504c84699e85f39d23e4a178c9a67469ab1a7dbaf"} Feb 28 13:31:24 crc kubenswrapper[4897]: I0228 13:31:24.112354 4897 scope.go:117] "RemoveContainer" containerID="5e107cc508123adb95c13a496ba469e4518a32accd4952e6a25da87f5d50a0fd" Feb 28 13:31:24 crc kubenswrapper[4897]: I0228 13:31:24.112648 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rnn6v" Feb 28 13:31:24 crc kubenswrapper[4897]: I0228 13:31:24.147574 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnn6v"] Feb 28 13:31:24 crc kubenswrapper[4897]: I0228 13:31:24.147703 4897 scope.go:117] "RemoveContainer" containerID="1e5ca036ae66058903606c3f4e5464f5415ae211545635bea543bfdefce497c6" Feb 28 13:31:24 crc kubenswrapper[4897]: I0228 13:31:24.158016 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnn6v"] Feb 28 13:31:24 crc kubenswrapper[4897]: I0228 13:31:24.178838 4897 scope.go:117] "RemoveContainer" containerID="df948013fa238d5b14f914fe5b142cdb26eeedd044616c3cccef22c98e9465ea" Feb 28 13:31:24 crc kubenswrapper[4897]: I0228 13:31:24.464178 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faf31eda-0152-4d44-88fb-da9b4bccdb08" path="/var/lib/kubelet/pods/faf31eda-0152-4d44-88fb-da9b4bccdb08/volumes" Feb 28 13:31:25 crc kubenswrapper[4897]: I0228 13:31:25.125555 4897 generic.go:334] "Generic (PLEG): container finished" podID="fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641" containerID="5d016bc08740badb9a9e95abd1a94f0e7a7c301cbcfc700821736396d098023c" exitCode=0 Feb 28 13:31:25 crc kubenswrapper[4897]: I0228 13:31:25.125632 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt" event={"ID":"fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641","Type":"ContainerDied","Data":"5d016bc08740badb9a9e95abd1a94f0e7a7c301cbcfc700821736396d098023c"} Feb 28 13:31:26 crc kubenswrapper[4897]: I0228 13:31:26.133707 4897 generic.go:334] "Generic (PLEG): container finished" podID="fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641" containerID="6214b39b2be3d31b039e9ce90fdb176c9ae5c8d97fb8ce49ffd2ca3a36d04f26" exitCode=0 Feb 28 13:31:26 crc kubenswrapper[4897]: I0228 13:31:26.133773 4897 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt" event={"ID":"fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641","Type":"ContainerDied","Data":"6214b39b2be3d31b039e9ce90fdb176c9ae5c8d97fb8ce49ffd2ca3a36d04f26"} Feb 28 13:31:27 crc kubenswrapper[4897]: I0228 13:31:27.419738 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt" Feb 28 13:31:27 crc kubenswrapper[4897]: I0228 13:31:27.506559 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flxjg\" (UniqueName: \"kubernetes.io/projected/fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641-kube-api-access-flxjg\") pod \"fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641\" (UID: \"fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641\") " Feb 28 13:31:27 crc kubenswrapper[4897]: I0228 13:31:27.506711 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641-util\") pod \"fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641\" (UID: \"fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641\") " Feb 28 13:31:27 crc kubenswrapper[4897]: I0228 13:31:27.506776 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641-bundle\") pod \"fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641\" (UID: \"fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641\") " Feb 28 13:31:27 crc kubenswrapper[4897]: I0228 13:31:27.507527 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641-bundle" (OuterVolumeSpecName: "bundle") pod "fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641" (UID: "fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:31:27 crc kubenswrapper[4897]: I0228 13:31:27.515811 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641-kube-api-access-flxjg" (OuterVolumeSpecName: "kube-api-access-flxjg") pod "fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641" (UID: "fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641"). InnerVolumeSpecName "kube-api-access-flxjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:31:27 crc kubenswrapper[4897]: I0228 13:31:27.524641 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641-util" (OuterVolumeSpecName: "util") pod "fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641" (UID: "fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:31:27 crc kubenswrapper[4897]: I0228 13:31:27.607704 4897 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641-util\") on node \"crc\" DevicePath \"\"" Feb 28 13:31:27 crc kubenswrapper[4897]: I0228 13:31:27.607760 4897 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:31:27 crc kubenswrapper[4897]: I0228 13:31:27.607791 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flxjg\" (UniqueName: \"kubernetes.io/projected/fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641-kube-api-access-flxjg\") on node \"crc\" DevicePath \"\"" Feb 28 13:31:28 crc kubenswrapper[4897]: I0228 13:31:28.152847 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt" 
event={"ID":"fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641","Type":"ContainerDied","Data":"67ed74e0503792bdce97f71660023e64ce662c815625116f92227a939fbacff3"} Feb 28 13:31:28 crc kubenswrapper[4897]: I0228 13:31:28.153244 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67ed74e0503792bdce97f71660023e64ce662c815625116f92227a939fbacff3" Feb 28 13:31:28 crc kubenswrapper[4897]: I0228 13:31:28.152961 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt" Feb 28 13:31:32 crc kubenswrapper[4897]: I0228 13:31:32.076097 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-468lb"] Feb 28 13:31:32 crc kubenswrapper[4897]: E0228 13:31:32.076664 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641" containerName="pull" Feb 28 13:31:32 crc kubenswrapper[4897]: I0228 13:31:32.076680 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641" containerName="pull" Feb 28 13:31:32 crc kubenswrapper[4897]: E0228 13:31:32.076691 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faf31eda-0152-4d44-88fb-da9b4bccdb08" containerName="extract-utilities" Feb 28 13:31:32 crc kubenswrapper[4897]: I0228 13:31:32.076699 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="faf31eda-0152-4d44-88fb-da9b4bccdb08" containerName="extract-utilities" Feb 28 13:31:32 crc kubenswrapper[4897]: E0228 13:31:32.076713 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641" containerName="util" Feb 28 13:31:32 crc kubenswrapper[4897]: I0228 13:31:32.076721 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641" containerName="util" Feb 28 13:31:32 crc kubenswrapper[4897]: E0228 13:31:32.076734 4897 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faf31eda-0152-4d44-88fb-da9b4bccdb08" containerName="registry-server" Feb 28 13:31:32 crc kubenswrapper[4897]: I0228 13:31:32.076742 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="faf31eda-0152-4d44-88fb-da9b4bccdb08" containerName="registry-server" Feb 28 13:31:32 crc kubenswrapper[4897]: E0228 13:31:32.076756 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641" containerName="extract" Feb 28 13:31:32 crc kubenswrapper[4897]: I0228 13:31:32.076763 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641" containerName="extract" Feb 28 13:31:32 crc kubenswrapper[4897]: E0228 13:31:32.076784 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faf31eda-0152-4d44-88fb-da9b4bccdb08" containerName="extract-content" Feb 28 13:31:32 crc kubenswrapper[4897]: I0228 13:31:32.076792 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="faf31eda-0152-4d44-88fb-da9b4bccdb08" containerName="extract-content" Feb 28 13:31:32 crc kubenswrapper[4897]: I0228 13:31:32.076925 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="faf31eda-0152-4d44-88fb-da9b4bccdb08" containerName="registry-server" Feb 28 13:31:32 crc kubenswrapper[4897]: I0228 13:31:32.076953 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641" containerName="extract" Feb 28 13:31:32 crc kubenswrapper[4897]: I0228 13:31:32.077687 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-468lb" Feb 28 13:31:32 crc kubenswrapper[4897]: I0228 13:31:32.079520 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-tv78c" Feb 28 13:31:32 crc kubenswrapper[4897]: I0228 13:31:32.080774 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 28 13:31:32 crc kubenswrapper[4897]: I0228 13:31:32.080793 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 28 13:31:32 crc kubenswrapper[4897]: I0228 13:31:32.101928 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-468lb"] Feb 28 13:31:32 crc kubenswrapper[4897]: I0228 13:31:32.271858 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hr5r\" (UniqueName: \"kubernetes.io/projected/ce9efcef-4478-4127-a41e-9e9960084a46-kube-api-access-2hr5r\") pod \"nmstate-operator-75c5dccd6c-468lb\" (UID: \"ce9efcef-4478-4127-a41e-9e9960084a46\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-468lb" Feb 28 13:31:32 crc kubenswrapper[4897]: I0228 13:31:32.373665 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hr5r\" (UniqueName: \"kubernetes.io/projected/ce9efcef-4478-4127-a41e-9e9960084a46-kube-api-access-2hr5r\") pod \"nmstate-operator-75c5dccd6c-468lb\" (UID: \"ce9efcef-4478-4127-a41e-9e9960084a46\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-468lb" Feb 28 13:31:32 crc kubenswrapper[4897]: I0228 13:31:32.395685 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hr5r\" (UniqueName: \"kubernetes.io/projected/ce9efcef-4478-4127-a41e-9e9960084a46-kube-api-access-2hr5r\") pod \"nmstate-operator-75c5dccd6c-468lb\" (UID: 
\"ce9efcef-4478-4127-a41e-9e9960084a46\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-468lb" Feb 28 13:31:32 crc kubenswrapper[4897]: I0228 13:31:32.694795 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-468lb" Feb 28 13:31:33 crc kubenswrapper[4897]: I0228 13:31:33.144154 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-468lb"] Feb 28 13:31:33 crc kubenswrapper[4897]: I0228 13:31:33.183906 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-468lb" event={"ID":"ce9efcef-4478-4127-a41e-9e9960084a46","Type":"ContainerStarted","Data":"dc34626d32995b2c331afcde1476d3470f7ac7cd59bd531c0b9a235419e0844a"} Feb 28 13:31:36 crc kubenswrapper[4897]: I0228 13:31:36.205811 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-468lb" event={"ID":"ce9efcef-4478-4127-a41e-9e9960084a46","Type":"ContainerStarted","Data":"6342e8af20ed8406de5f65e9d692822b3a669120a6b82499f7afef591f5e8e18"} Feb 28 13:31:36 crc kubenswrapper[4897]: I0228 13:31:36.240830 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-468lb" podStartSLOduration=1.612697365 podStartE2EDuration="4.240812525s" podCreationTimestamp="2026-02-28 13:31:32 +0000 UTC" firstStartedPulling="2026-02-28 13:31:33.152220345 +0000 UTC m=+907.394541022" lastFinishedPulling="2026-02-28 13:31:35.780335515 +0000 UTC m=+910.022656182" observedRunningTime="2026-02-28 13:31:36.239030413 +0000 UTC m=+910.481351080" watchObservedRunningTime="2026-02-28 13:31:36.240812525 +0000 UTC m=+910.483133182" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.156420 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-64kn2"] Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 
13:31:37.157519 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-64kn2" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.160178 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-s2ccj" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.168030 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-64kn2"] Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.181626 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-qtkdc"] Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.182394 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-qtkdc" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.188452 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.198103 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-qtkdc"] Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.223340 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-w8lgm"] Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.223993 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-w8lgm" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.337651 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-dmxhv"] Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.338368 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-dmxhv" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.340169 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-v6rt8" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.341515 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.341608 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.346030 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-dmxhv"] Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.347511 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9xf6\" (UniqueName: \"kubernetes.io/projected/a16c5c73-6515-4d5b-898e-aa6d3940f0b1-kube-api-access-b9xf6\") pod \"nmstate-metrics-69594cc75-64kn2\" (UID: \"a16c5c73-6515-4d5b-898e-aa6d3940f0b1\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-64kn2" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.347618 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/b1e7c059-1db9-417a-8bd9-b5157303f3af-nmstate-lock\") pod \"nmstate-handler-w8lgm\" (UID: \"b1e7c059-1db9-417a-8bd9-b5157303f3af\") " pod="openshift-nmstate/nmstate-handler-w8lgm" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.347655 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5ae61471-c126-4bb0-b7c5-1b56f1686ecc-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-qtkdc\" (UID: \"5ae61471-c126-4bb0-b7c5-1b56f1686ecc\") " 
pod="openshift-nmstate/nmstate-webhook-786f45cff4-qtkdc" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.347681 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/b1e7c059-1db9-417a-8bd9-b5157303f3af-dbus-socket\") pod \"nmstate-handler-w8lgm\" (UID: \"b1e7c059-1db9-417a-8bd9-b5157303f3af\") " pod="openshift-nmstate/nmstate-handler-w8lgm" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.347700 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/b1e7c059-1db9-417a-8bd9-b5157303f3af-ovs-socket\") pod \"nmstate-handler-w8lgm\" (UID: \"b1e7c059-1db9-417a-8bd9-b5157303f3af\") " pod="openshift-nmstate/nmstate-handler-w8lgm" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.347728 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2pwn\" (UniqueName: \"kubernetes.io/projected/5ae61471-c126-4bb0-b7c5-1b56f1686ecc-kube-api-access-n2pwn\") pod \"nmstate-webhook-786f45cff4-qtkdc\" (UID: \"5ae61471-c126-4bb0-b7c5-1b56f1686ecc\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-qtkdc" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.347773 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8lbm\" (UniqueName: \"kubernetes.io/projected/b1e7c059-1db9-417a-8bd9-b5157303f3af-kube-api-access-m8lbm\") pod \"nmstate-handler-w8lgm\" (UID: \"b1e7c059-1db9-417a-8bd9-b5157303f3af\") " pod="openshift-nmstate/nmstate-handler-w8lgm" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.448692 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5ae61471-c126-4bb0-b7c5-1b56f1686ecc-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-qtkdc\" (UID: 
\"5ae61471-c126-4bb0-b7c5-1b56f1686ecc\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-qtkdc" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.448736 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/b1e7c059-1db9-417a-8bd9-b5157303f3af-dbus-socket\") pod \"nmstate-handler-w8lgm\" (UID: \"b1e7c059-1db9-417a-8bd9-b5157303f3af\") " pod="openshift-nmstate/nmstate-handler-w8lgm" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.448753 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/b1e7c059-1db9-417a-8bd9-b5157303f3af-ovs-socket\") pod \"nmstate-handler-w8lgm\" (UID: \"b1e7c059-1db9-417a-8bd9-b5157303f3af\") " pod="openshift-nmstate/nmstate-handler-w8lgm" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.448775 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0b30e3b3-0280-45c0-ad26-00ab9dff49ce-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-dmxhv\" (UID: \"0b30e3b3-0280-45c0-ad26-00ab9dff49ce\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-dmxhv" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.448792 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/0b30e3b3-0280-45c0-ad26-00ab9dff49ce-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-dmxhv\" (UID: \"0b30e3b3-0280-45c0-ad26-00ab9dff49ce\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-dmxhv" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.448817 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2pwn\" (UniqueName: \"kubernetes.io/projected/5ae61471-c126-4bb0-b7c5-1b56f1686ecc-kube-api-access-n2pwn\") 
pod \"nmstate-webhook-786f45cff4-qtkdc\" (UID: \"5ae61471-c126-4bb0-b7c5-1b56f1686ecc\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-qtkdc" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.448845 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/b1e7c059-1db9-417a-8bd9-b5157303f3af-ovs-socket\") pod \"nmstate-handler-w8lgm\" (UID: \"b1e7c059-1db9-417a-8bd9-b5157303f3af\") " pod="openshift-nmstate/nmstate-handler-w8lgm" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.448851 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8lbm\" (UniqueName: \"kubernetes.io/projected/b1e7c059-1db9-417a-8bd9-b5157303f3af-kube-api-access-m8lbm\") pod \"nmstate-handler-w8lgm\" (UID: \"b1e7c059-1db9-417a-8bd9-b5157303f3af\") " pod="openshift-nmstate/nmstate-handler-w8lgm" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.449034 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77gzv\" (UniqueName: \"kubernetes.io/projected/0b30e3b3-0280-45c0-ad26-00ab9dff49ce-kube-api-access-77gzv\") pod \"nmstate-console-plugin-5dcbbd79cf-dmxhv\" (UID: \"0b30e3b3-0280-45c0-ad26-00ab9dff49ce\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-dmxhv" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.449052 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/b1e7c059-1db9-417a-8bd9-b5157303f3af-dbus-socket\") pod \"nmstate-handler-w8lgm\" (UID: \"b1e7c059-1db9-417a-8bd9-b5157303f3af\") " pod="openshift-nmstate/nmstate-handler-w8lgm" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.449060 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9xf6\" (UniqueName: 
\"kubernetes.io/projected/a16c5c73-6515-4d5b-898e-aa6d3940f0b1-kube-api-access-b9xf6\") pod \"nmstate-metrics-69594cc75-64kn2\" (UID: \"a16c5c73-6515-4d5b-898e-aa6d3940f0b1\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-64kn2" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.449117 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/b1e7c059-1db9-417a-8bd9-b5157303f3af-nmstate-lock\") pod \"nmstate-handler-w8lgm\" (UID: \"b1e7c059-1db9-417a-8bd9-b5157303f3af\") " pod="openshift-nmstate/nmstate-handler-w8lgm" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.449195 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/b1e7c059-1db9-417a-8bd9-b5157303f3af-nmstate-lock\") pod \"nmstate-handler-w8lgm\" (UID: \"b1e7c059-1db9-417a-8bd9-b5157303f3af\") " pod="openshift-nmstate/nmstate-handler-w8lgm" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.466686 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5ae61471-c126-4bb0-b7c5-1b56f1686ecc-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-qtkdc\" (UID: \"5ae61471-c126-4bb0-b7c5-1b56f1686ecc\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-qtkdc" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.468130 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9xf6\" (UniqueName: \"kubernetes.io/projected/a16c5c73-6515-4d5b-898e-aa6d3940f0b1-kube-api-access-b9xf6\") pod \"nmstate-metrics-69594cc75-64kn2\" (UID: \"a16c5c73-6515-4d5b-898e-aa6d3940f0b1\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-64kn2" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.474658 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-64kn2" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.475290 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2pwn\" (UniqueName: \"kubernetes.io/projected/5ae61471-c126-4bb0-b7c5-1b56f1686ecc-kube-api-access-n2pwn\") pod \"nmstate-webhook-786f45cff4-qtkdc\" (UID: \"5ae61471-c126-4bb0-b7c5-1b56f1686ecc\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-qtkdc" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.476375 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8lbm\" (UniqueName: \"kubernetes.io/projected/b1e7c059-1db9-417a-8bd9-b5157303f3af-kube-api-access-m8lbm\") pod \"nmstate-handler-w8lgm\" (UID: \"b1e7c059-1db9-417a-8bd9-b5157303f3af\") " pod="openshift-nmstate/nmstate-handler-w8lgm" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.502153 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-qtkdc" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.526265 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-754fbd84c4-hrkwq"] Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.526981 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-754fbd84c4-hrkwq" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.547649 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-754fbd84c4-hrkwq"] Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.550771 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77gzv\" (UniqueName: \"kubernetes.io/projected/0b30e3b3-0280-45c0-ad26-00ab9dff49ce-kube-api-access-77gzv\") pod \"nmstate-console-plugin-5dcbbd79cf-dmxhv\" (UID: \"0b30e3b3-0280-45c0-ad26-00ab9dff49ce\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-dmxhv" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.550833 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0b30e3b3-0280-45c0-ad26-00ab9dff49ce-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-dmxhv\" (UID: \"0b30e3b3-0280-45c0-ad26-00ab9dff49ce\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-dmxhv" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.550854 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/0b30e3b3-0280-45c0-ad26-00ab9dff49ce-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-dmxhv\" (UID: \"0b30e3b3-0280-45c0-ad26-00ab9dff49ce\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-dmxhv" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.551755 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/0b30e3b3-0280-45c0-ad26-00ab9dff49ce-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-dmxhv\" (UID: \"0b30e3b3-0280-45c0-ad26-00ab9dff49ce\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-dmxhv" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.557759 4897 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0b30e3b3-0280-45c0-ad26-00ab9dff49ce-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-dmxhv\" (UID: \"0b30e3b3-0280-45c0-ad26-00ab9dff49ce\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-dmxhv" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.567059 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-w8lgm" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.586961 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77gzv\" (UniqueName: \"kubernetes.io/projected/0b30e3b3-0280-45c0-ad26-00ab9dff49ce-kube-api-access-77gzv\") pod \"nmstate-console-plugin-5dcbbd79cf-dmxhv\" (UID: \"0b30e3b3-0280-45c0-ad26-00ab9dff49ce\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-dmxhv" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.652076 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9718ad6b-c28a-4dd8-b1d6-13cfc72aa470-trusted-ca-bundle\") pod \"console-754fbd84c4-hrkwq\" (UID: \"9718ad6b-c28a-4dd8-b1d6-13cfc72aa470\") " pod="openshift-console/console-754fbd84c4-hrkwq" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.652449 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9718ad6b-c28a-4dd8-b1d6-13cfc72aa470-oauth-serving-cert\") pod \"console-754fbd84c4-hrkwq\" (UID: \"9718ad6b-c28a-4dd8-b1d6-13cfc72aa470\") " pod="openshift-console/console-754fbd84c4-hrkwq" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.652477 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9htvr\" (UniqueName: 
\"kubernetes.io/projected/9718ad6b-c28a-4dd8-b1d6-13cfc72aa470-kube-api-access-9htvr\") pod \"console-754fbd84c4-hrkwq\" (UID: \"9718ad6b-c28a-4dd8-b1d6-13cfc72aa470\") " pod="openshift-console/console-754fbd84c4-hrkwq" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.652508 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9718ad6b-c28a-4dd8-b1d6-13cfc72aa470-console-config\") pod \"console-754fbd84c4-hrkwq\" (UID: \"9718ad6b-c28a-4dd8-b1d6-13cfc72aa470\") " pod="openshift-console/console-754fbd84c4-hrkwq" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.652528 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9718ad6b-c28a-4dd8-b1d6-13cfc72aa470-service-ca\") pod \"console-754fbd84c4-hrkwq\" (UID: \"9718ad6b-c28a-4dd8-b1d6-13cfc72aa470\") " pod="openshift-console/console-754fbd84c4-hrkwq" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.652584 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9718ad6b-c28a-4dd8-b1d6-13cfc72aa470-console-serving-cert\") pod \"console-754fbd84c4-hrkwq\" (UID: \"9718ad6b-c28a-4dd8-b1d6-13cfc72aa470\") " pod="openshift-console/console-754fbd84c4-hrkwq" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.652602 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9718ad6b-c28a-4dd8-b1d6-13cfc72aa470-console-oauth-config\") pod \"console-754fbd84c4-hrkwq\" (UID: \"9718ad6b-c28a-4dd8-b1d6-13cfc72aa470\") " pod="openshift-console/console-754fbd84c4-hrkwq" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.655436 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-dmxhv" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.754092 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9718ad6b-c28a-4dd8-b1d6-13cfc72aa470-trusted-ca-bundle\") pod \"console-754fbd84c4-hrkwq\" (UID: \"9718ad6b-c28a-4dd8-b1d6-13cfc72aa470\") " pod="openshift-console/console-754fbd84c4-hrkwq" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.754131 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9718ad6b-c28a-4dd8-b1d6-13cfc72aa470-oauth-serving-cert\") pod \"console-754fbd84c4-hrkwq\" (UID: \"9718ad6b-c28a-4dd8-b1d6-13cfc72aa470\") " pod="openshift-console/console-754fbd84c4-hrkwq" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.754160 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9htvr\" (UniqueName: \"kubernetes.io/projected/9718ad6b-c28a-4dd8-b1d6-13cfc72aa470-kube-api-access-9htvr\") pod \"console-754fbd84c4-hrkwq\" (UID: \"9718ad6b-c28a-4dd8-b1d6-13cfc72aa470\") " pod="openshift-console/console-754fbd84c4-hrkwq" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.754202 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9718ad6b-c28a-4dd8-b1d6-13cfc72aa470-console-config\") pod \"console-754fbd84c4-hrkwq\" (UID: \"9718ad6b-c28a-4dd8-b1d6-13cfc72aa470\") " pod="openshift-console/console-754fbd84c4-hrkwq" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.754236 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9718ad6b-c28a-4dd8-b1d6-13cfc72aa470-service-ca\") pod \"console-754fbd84c4-hrkwq\" (UID: 
\"9718ad6b-c28a-4dd8-b1d6-13cfc72aa470\") " pod="openshift-console/console-754fbd84c4-hrkwq" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.754270 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9718ad6b-c28a-4dd8-b1d6-13cfc72aa470-console-serving-cert\") pod \"console-754fbd84c4-hrkwq\" (UID: \"9718ad6b-c28a-4dd8-b1d6-13cfc72aa470\") " pod="openshift-console/console-754fbd84c4-hrkwq" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.754298 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9718ad6b-c28a-4dd8-b1d6-13cfc72aa470-console-oauth-config\") pod \"console-754fbd84c4-hrkwq\" (UID: \"9718ad6b-c28a-4dd8-b1d6-13cfc72aa470\") " pod="openshift-console/console-754fbd84c4-hrkwq" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.755048 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9718ad6b-c28a-4dd8-b1d6-13cfc72aa470-trusted-ca-bundle\") pod \"console-754fbd84c4-hrkwq\" (UID: \"9718ad6b-c28a-4dd8-b1d6-13cfc72aa470\") " pod="openshift-console/console-754fbd84c4-hrkwq" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.755931 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9718ad6b-c28a-4dd8-b1d6-13cfc72aa470-console-config\") pod \"console-754fbd84c4-hrkwq\" (UID: \"9718ad6b-c28a-4dd8-b1d6-13cfc72aa470\") " pod="openshift-console/console-754fbd84c4-hrkwq" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.755923 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9718ad6b-c28a-4dd8-b1d6-13cfc72aa470-oauth-serving-cert\") pod \"console-754fbd84c4-hrkwq\" (UID: \"9718ad6b-c28a-4dd8-b1d6-13cfc72aa470\") " 
pod="openshift-console/console-754fbd84c4-hrkwq" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.756106 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9718ad6b-c28a-4dd8-b1d6-13cfc72aa470-service-ca\") pod \"console-754fbd84c4-hrkwq\" (UID: \"9718ad6b-c28a-4dd8-b1d6-13cfc72aa470\") " pod="openshift-console/console-754fbd84c4-hrkwq" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.757803 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9718ad6b-c28a-4dd8-b1d6-13cfc72aa470-console-oauth-config\") pod \"console-754fbd84c4-hrkwq\" (UID: \"9718ad6b-c28a-4dd8-b1d6-13cfc72aa470\") " pod="openshift-console/console-754fbd84c4-hrkwq" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.757979 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9718ad6b-c28a-4dd8-b1d6-13cfc72aa470-console-serving-cert\") pod \"console-754fbd84c4-hrkwq\" (UID: \"9718ad6b-c28a-4dd8-b1d6-13cfc72aa470\") " pod="openshift-console/console-754fbd84c4-hrkwq" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.772995 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9htvr\" (UniqueName: \"kubernetes.io/projected/9718ad6b-c28a-4dd8-b1d6-13cfc72aa470-kube-api-access-9htvr\") pod \"console-754fbd84c4-hrkwq\" (UID: \"9718ad6b-c28a-4dd8-b1d6-13cfc72aa470\") " pod="openshift-console/console-754fbd84c4-hrkwq" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.864259 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-754fbd84c4-hrkwq" Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.964192 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-dmxhv"] Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.986383 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-qtkdc"] Feb 28 13:31:37 crc kubenswrapper[4897]: W0228 13:31:37.990222 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b30e3b3_0280_45c0_ad26_00ab9dff49ce.slice/crio-f8136a2c15de67d97751ea2628898cd1ad05625aaeb1438175a204a7a7757dc4 WatchSource:0}: Error finding container f8136a2c15de67d97751ea2628898cd1ad05625aaeb1438175a204a7a7757dc4: Status 404 returned error can't find the container with id f8136a2c15de67d97751ea2628898cd1ad05625aaeb1438175a204a7a7757dc4 Feb 28 13:31:37 crc kubenswrapper[4897]: I0228 13:31:37.992269 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-64kn2"] Feb 28 13:31:37 crc kubenswrapper[4897]: W0228 13:31:37.994396 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ae61471_c126_4bb0_b7c5_1b56f1686ecc.slice/crio-01cd3205c4f97764fbcda6723604e20ae71aff224eb3876b98f3b2cfce93fa9c WatchSource:0}: Error finding container 01cd3205c4f97764fbcda6723604e20ae71aff224eb3876b98f3b2cfce93fa9c: Status 404 returned error can't find the container with id 01cd3205c4f97764fbcda6723604e20ae71aff224eb3876b98f3b2cfce93fa9c Feb 28 13:31:38 crc kubenswrapper[4897]: W0228 13:31:38.001649 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda16c5c73_6515_4d5b_898e_aa6d3940f0b1.slice/crio-3762ad100f69513601ba83f0c051a308a286927dd724e32a9c83e5642866d533 
WatchSource:0}: Error finding container 3762ad100f69513601ba83f0c051a308a286927dd724e32a9c83e5642866d533: Status 404 returned error can't find the container with id 3762ad100f69513601ba83f0c051a308a286927dd724e32a9c83e5642866d533 Feb 28 13:31:38 crc kubenswrapper[4897]: I0228 13:31:38.085836 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-754fbd84c4-hrkwq"] Feb 28 13:31:38 crc kubenswrapper[4897]: I0228 13:31:38.219517 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-754fbd84c4-hrkwq" event={"ID":"9718ad6b-c28a-4dd8-b1d6-13cfc72aa470","Type":"ContainerStarted","Data":"eb72d093d35bc015d3a95206a39ca3d1d5cb0ceb403d2c4803370a3c541b16a9"} Feb 28 13:31:38 crc kubenswrapper[4897]: I0228 13:31:38.219572 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-754fbd84c4-hrkwq" event={"ID":"9718ad6b-c28a-4dd8-b1d6-13cfc72aa470","Type":"ContainerStarted","Data":"3242835ac2e01a2dce3208fc1acfb6d5e761f7402e4f0887233a37c01728019b"} Feb 28 13:31:38 crc kubenswrapper[4897]: I0228 13:31:38.221596 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-w8lgm" event={"ID":"b1e7c059-1db9-417a-8bd9-b5157303f3af","Type":"ContainerStarted","Data":"b045e1773c0dfb6f9e0257dee492c97410c43d7e0afd27a01b179a5fa59b4231"} Feb 28 13:31:38 crc kubenswrapper[4897]: I0228 13:31:38.222954 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-qtkdc" event={"ID":"5ae61471-c126-4bb0-b7c5-1b56f1686ecc","Type":"ContainerStarted","Data":"01cd3205c4f97764fbcda6723604e20ae71aff224eb3876b98f3b2cfce93fa9c"} Feb 28 13:31:38 crc kubenswrapper[4897]: I0228 13:31:38.223996 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-64kn2" 
event={"ID":"a16c5c73-6515-4d5b-898e-aa6d3940f0b1","Type":"ContainerStarted","Data":"3762ad100f69513601ba83f0c051a308a286927dd724e32a9c83e5642866d533"} Feb 28 13:31:38 crc kubenswrapper[4897]: I0228 13:31:38.225121 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-dmxhv" event={"ID":"0b30e3b3-0280-45c0-ad26-00ab9dff49ce","Type":"ContainerStarted","Data":"f8136a2c15de67d97751ea2628898cd1ad05625aaeb1438175a204a7a7757dc4"} Feb 28 13:31:41 crc kubenswrapper[4897]: I0228 13:31:41.252360 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-64kn2" event={"ID":"a16c5c73-6515-4d5b-898e-aa6d3940f0b1","Type":"ContainerStarted","Data":"3eeb7902067d9982baeffbffd7e9827db1d03bc7e3224f5e906decbccab53906"} Feb 28 13:31:41 crc kubenswrapper[4897]: I0228 13:31:41.254508 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-w8lgm" event={"ID":"b1e7c059-1db9-417a-8bd9-b5157303f3af","Type":"ContainerStarted","Data":"9aca9684977a5608ee4109cbdd48ee25093e71d41bd725d3608260df2546aacb"} Feb 28 13:31:41 crc kubenswrapper[4897]: I0228 13:31:41.254569 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-w8lgm" Feb 28 13:31:41 crc kubenswrapper[4897]: I0228 13:31:41.260454 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-qtkdc" event={"ID":"5ae61471-c126-4bb0-b7c5-1b56f1686ecc","Type":"ContainerStarted","Data":"55f8ab623e930d9d2de2785235c87a0a3f4084674396d6fd734a5d35de003841"} Feb 28 13:31:41 crc kubenswrapper[4897]: I0228 13:31:41.260905 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-786f45cff4-qtkdc" Feb 28 13:31:41 crc kubenswrapper[4897]: I0228 13:31:41.280844 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-nmstate/nmstate-handler-w8lgm" podStartSLOduration=1.809269097 podStartE2EDuration="4.280829392s" podCreationTimestamp="2026-02-28 13:31:37 +0000 UTC" firstStartedPulling="2026-02-28 13:31:37.600627706 +0000 UTC m=+911.842948363" lastFinishedPulling="2026-02-28 13:31:40.072188001 +0000 UTC m=+914.314508658" observedRunningTime="2026-02-28 13:31:41.277487065 +0000 UTC m=+915.519807732" watchObservedRunningTime="2026-02-28 13:31:41.280829392 +0000 UTC m=+915.523150049" Feb 28 13:31:41 crc kubenswrapper[4897]: I0228 13:31:41.282971 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-754fbd84c4-hrkwq" podStartSLOduration=4.282963384 podStartE2EDuration="4.282963384s" podCreationTimestamp="2026-02-28 13:31:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:31:38.238505799 +0000 UTC m=+912.480826466" watchObservedRunningTime="2026-02-28 13:31:41.282963384 +0000 UTC m=+915.525284041" Feb 28 13:31:41 crc kubenswrapper[4897]: I0228 13:31:41.305882 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-786f45cff4-qtkdc" podStartSLOduration=2.18767263 podStartE2EDuration="4.305861201s" podCreationTimestamp="2026-02-28 13:31:37 +0000 UTC" firstStartedPulling="2026-02-28 13:31:37.995838197 +0000 UTC m=+912.238158854" lastFinishedPulling="2026-02-28 13:31:40.114026768 +0000 UTC m=+914.356347425" observedRunningTime="2026-02-28 13:31:41.300951928 +0000 UTC m=+915.543272605" watchObservedRunningTime="2026-02-28 13:31:41.305861201 +0000 UTC m=+915.548181858" Feb 28 13:31:43 crc kubenswrapper[4897]: I0228 13:31:43.274619 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-64kn2" 
event={"ID":"a16c5c73-6515-4d5b-898e-aa6d3940f0b1","Type":"ContainerStarted","Data":"a7c11c33330b95e94de7fc22e298e342ccaf0dca6ad464d2a9c1722e409d28e2"} Feb 28 13:31:43 crc kubenswrapper[4897]: I0228 13:31:43.292633 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-69594cc75-64kn2" podStartSLOduration=1.7539648190000001 podStartE2EDuration="6.292606537s" podCreationTimestamp="2026-02-28 13:31:37 +0000 UTC" firstStartedPulling="2026-02-28 13:31:38.004680834 +0000 UTC m=+912.247001491" lastFinishedPulling="2026-02-28 13:31:42.543322552 +0000 UTC m=+916.785643209" observedRunningTime="2026-02-28 13:31:43.289038423 +0000 UTC m=+917.531359080" watchObservedRunningTime="2026-02-28 13:31:43.292606537 +0000 UTC m=+917.534927194" Feb 28 13:31:47 crc kubenswrapper[4897]: I0228 13:31:47.611499 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-w8lgm" Feb 28 13:31:47 crc kubenswrapper[4897]: I0228 13:31:47.864470 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-754fbd84c4-hrkwq" Feb 28 13:31:47 crc kubenswrapper[4897]: I0228 13:31:47.865288 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-754fbd84c4-hrkwq" Feb 28 13:31:47 crc kubenswrapper[4897]: I0228 13:31:47.870077 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-754fbd84c4-hrkwq" Feb 28 13:31:48 crc kubenswrapper[4897]: I0228 13:31:48.313239 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-754fbd84c4-hrkwq" Feb 28 13:31:48 crc kubenswrapper[4897]: I0228 13:31:48.450979 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-rd9tl"] Feb 28 13:31:57 crc kubenswrapper[4897]: I0228 13:31:57.376594 4897 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-dmxhv" event={"ID":"0b30e3b3-0280-45c0-ad26-00ab9dff49ce","Type":"ContainerStarted","Data":"a4fa9509440e9c4d2e8df086d1ab62fd0f4e19b13594a663906d5e5b0b361464"} Feb 28 13:31:57 crc kubenswrapper[4897]: I0228 13:31:57.398011 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-dmxhv" podStartSLOduration=2.026103209 podStartE2EDuration="20.397989223s" podCreationTimestamp="2026-02-28 13:31:37 +0000 UTC" firstStartedPulling="2026-02-28 13:31:37.994316193 +0000 UTC m=+912.236636850" lastFinishedPulling="2026-02-28 13:31:56.366202207 +0000 UTC m=+930.608522864" observedRunningTime="2026-02-28 13:31:57.392515224 +0000 UTC m=+931.634835951" watchObservedRunningTime="2026-02-28 13:31:57.397989223 +0000 UTC m=+931.640309900" Feb 28 13:31:57 crc kubenswrapper[4897]: I0228 13:31:57.511422 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-786f45cff4-qtkdc" Feb 28 13:32:00 crc kubenswrapper[4897]: I0228 13:32:00.148287 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538092-vc49k"] Feb 28 13:32:00 crc kubenswrapper[4897]: I0228 13:32:00.149863 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538092-vc49k" Feb 28 13:32:00 crc kubenswrapper[4897]: I0228 13:32:00.151927 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 13:32:00 crc kubenswrapper[4897]: I0228 13:32:00.152657 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 13:32:00 crc kubenswrapper[4897]: I0228 13:32:00.157591 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 13:32:00 crc kubenswrapper[4897]: I0228 13:32:00.165426 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zm7l\" (UniqueName: \"kubernetes.io/projected/ca5dfb32-b2a8-49ee-bdbb-f61ea61473d0-kube-api-access-5zm7l\") pod \"auto-csr-approver-29538092-vc49k\" (UID: \"ca5dfb32-b2a8-49ee-bdbb-f61ea61473d0\") " pod="openshift-infra/auto-csr-approver-29538092-vc49k" Feb 28 13:32:00 crc kubenswrapper[4897]: I0228 13:32:00.167063 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538092-vc49k"] Feb 28 13:32:00 crc kubenswrapper[4897]: I0228 13:32:00.266612 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zm7l\" (UniqueName: \"kubernetes.io/projected/ca5dfb32-b2a8-49ee-bdbb-f61ea61473d0-kube-api-access-5zm7l\") pod \"auto-csr-approver-29538092-vc49k\" (UID: \"ca5dfb32-b2a8-49ee-bdbb-f61ea61473d0\") " pod="openshift-infra/auto-csr-approver-29538092-vc49k" Feb 28 13:32:00 crc kubenswrapper[4897]: I0228 13:32:00.293427 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zm7l\" (UniqueName: \"kubernetes.io/projected/ca5dfb32-b2a8-49ee-bdbb-f61ea61473d0-kube-api-access-5zm7l\") pod \"auto-csr-approver-29538092-vc49k\" (UID: \"ca5dfb32-b2a8-49ee-bdbb-f61ea61473d0\") " 
pod="openshift-infra/auto-csr-approver-29538092-vc49k" Feb 28 13:32:00 crc kubenswrapper[4897]: I0228 13:32:00.468545 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538092-vc49k" Feb 28 13:32:00 crc kubenswrapper[4897]: I0228 13:32:00.921134 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538092-vc49k"] Feb 28 13:32:01 crc kubenswrapper[4897]: I0228 13:32:01.418865 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538092-vc49k" event={"ID":"ca5dfb32-b2a8-49ee-bdbb-f61ea61473d0","Type":"ContainerStarted","Data":"0873363e6548fcd7b8e54a679865ebb6ec984e9edf58bec3153e093180b68a1a"} Feb 28 13:32:03 crc kubenswrapper[4897]: I0228 13:32:03.432517 4897 generic.go:334] "Generic (PLEG): container finished" podID="ca5dfb32-b2a8-49ee-bdbb-f61ea61473d0" containerID="19ca9a4e3f0d6e021374b6ad375834aae2c27eed449266345b1ef375f452fbf6" exitCode=0 Feb 28 13:32:03 crc kubenswrapper[4897]: I0228 13:32:03.432568 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538092-vc49k" event={"ID":"ca5dfb32-b2a8-49ee-bdbb-f61ea61473d0","Type":"ContainerDied","Data":"19ca9a4e3f0d6e021374b6ad375834aae2c27eed449266345b1ef375f452fbf6"} Feb 28 13:32:04 crc kubenswrapper[4897]: I0228 13:32:04.682423 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538092-vc49k" Feb 28 13:32:04 crc kubenswrapper[4897]: I0228 13:32:04.825613 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zm7l\" (UniqueName: \"kubernetes.io/projected/ca5dfb32-b2a8-49ee-bdbb-f61ea61473d0-kube-api-access-5zm7l\") pod \"ca5dfb32-b2a8-49ee-bdbb-f61ea61473d0\" (UID: \"ca5dfb32-b2a8-49ee-bdbb-f61ea61473d0\") " Feb 28 13:32:04 crc kubenswrapper[4897]: I0228 13:32:04.831549 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca5dfb32-b2a8-49ee-bdbb-f61ea61473d0-kube-api-access-5zm7l" (OuterVolumeSpecName: "kube-api-access-5zm7l") pod "ca5dfb32-b2a8-49ee-bdbb-f61ea61473d0" (UID: "ca5dfb32-b2a8-49ee-bdbb-f61ea61473d0"). InnerVolumeSpecName "kube-api-access-5zm7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:32:04 crc kubenswrapper[4897]: I0228 13:32:04.927695 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zm7l\" (UniqueName: \"kubernetes.io/projected/ca5dfb32-b2a8-49ee-bdbb-f61ea61473d0-kube-api-access-5zm7l\") on node \"crc\" DevicePath \"\"" Feb 28 13:32:05 crc kubenswrapper[4897]: I0228 13:32:05.447669 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538092-vc49k" event={"ID":"ca5dfb32-b2a8-49ee-bdbb-f61ea61473d0","Type":"ContainerDied","Data":"0873363e6548fcd7b8e54a679865ebb6ec984e9edf58bec3153e093180b68a1a"} Feb 28 13:32:05 crc kubenswrapper[4897]: I0228 13:32:05.447739 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0873363e6548fcd7b8e54a679865ebb6ec984e9edf58bec3153e093180b68a1a" Feb 28 13:32:05 crc kubenswrapper[4897]: I0228 13:32:05.447741 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538092-vc49k" Feb 28 13:32:05 crc kubenswrapper[4897]: I0228 13:32:05.782053 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538086-qkgqc"] Feb 28 13:32:05 crc kubenswrapper[4897]: I0228 13:32:05.787132 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538086-qkgqc"] Feb 28 13:32:06 crc kubenswrapper[4897]: I0228 13:32:06.498780 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="480b2ad8-c8f7-479c-850b-c49aae2ed568" path="/var/lib/kubelet/pods/480b2ad8-c8f7-479c-850b-c49aae2ed568/volumes" Feb 28 13:32:12 crc kubenswrapper[4897]: I0228 13:32:12.931503 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5"] Feb 28 13:32:12 crc kubenswrapper[4897]: E0228 13:32:12.932138 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca5dfb32-b2a8-49ee-bdbb-f61ea61473d0" containerName="oc" Feb 28 13:32:12 crc kubenswrapper[4897]: I0228 13:32:12.932150 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca5dfb32-b2a8-49ee-bdbb-f61ea61473d0" containerName="oc" Feb 28 13:32:12 crc kubenswrapper[4897]: I0228 13:32:12.932266 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca5dfb32-b2a8-49ee-bdbb-f61ea61473d0" containerName="oc" Feb 28 13:32:12 crc kubenswrapper[4897]: I0228 13:32:12.933023 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5" Feb 28 13:32:12 crc kubenswrapper[4897]: I0228 13:32:12.934977 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 28 13:32:12 crc kubenswrapper[4897]: I0228 13:32:12.950499 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5"] Feb 28 13:32:13 crc kubenswrapper[4897]: I0228 13:32:13.035054 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c76ed8b8-228d-4263-addb-9571183ab82d-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5\" (UID: \"c76ed8b8-228d-4263-addb-9571183ab82d\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5" Feb 28 13:32:13 crc kubenswrapper[4897]: I0228 13:32:13.035105 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xh8p\" (UniqueName: \"kubernetes.io/projected/c76ed8b8-228d-4263-addb-9571183ab82d-kube-api-access-5xh8p\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5\" (UID: \"c76ed8b8-228d-4263-addb-9571183ab82d\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5" Feb 28 13:32:13 crc kubenswrapper[4897]: I0228 13:32:13.035163 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c76ed8b8-228d-4263-addb-9571183ab82d-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5\" (UID: \"c76ed8b8-228d-4263-addb-9571183ab82d\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5" Feb 28 13:32:13 crc kubenswrapper[4897]: 
I0228 13:32:13.136212 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c76ed8b8-228d-4263-addb-9571183ab82d-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5\" (UID: \"c76ed8b8-228d-4263-addb-9571183ab82d\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5" Feb 28 13:32:13 crc kubenswrapper[4897]: I0228 13:32:13.136327 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c76ed8b8-228d-4263-addb-9571183ab82d-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5\" (UID: \"c76ed8b8-228d-4263-addb-9571183ab82d\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5" Feb 28 13:32:13 crc kubenswrapper[4897]: I0228 13:32:13.136355 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xh8p\" (UniqueName: \"kubernetes.io/projected/c76ed8b8-228d-4263-addb-9571183ab82d-kube-api-access-5xh8p\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5\" (UID: \"c76ed8b8-228d-4263-addb-9571183ab82d\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5" Feb 28 13:32:13 crc kubenswrapper[4897]: I0228 13:32:13.136716 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c76ed8b8-228d-4263-addb-9571183ab82d-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5\" (UID: \"c76ed8b8-228d-4263-addb-9571183ab82d\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5" Feb 28 13:32:13 crc kubenswrapper[4897]: I0228 13:32:13.137015 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/c76ed8b8-228d-4263-addb-9571183ab82d-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5\" (UID: \"c76ed8b8-228d-4263-addb-9571183ab82d\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5" Feb 28 13:32:13 crc kubenswrapper[4897]: I0228 13:32:13.170935 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xh8p\" (UniqueName: \"kubernetes.io/projected/c76ed8b8-228d-4263-addb-9571183ab82d-kube-api-access-5xh8p\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5\" (UID: \"c76ed8b8-228d-4263-addb-9571183ab82d\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5" Feb 28 13:32:13 crc kubenswrapper[4897]: I0228 13:32:13.249070 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5" Feb 28 13:32:13 crc kubenswrapper[4897]: I0228 13:32:13.485056 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-rd9tl" podUID="3423cf07-c57b-41f3-82da-f497649699db" containerName="console" containerID="cri-o://e7cd43564f47d2ab118b814b2ae07964f30495d30981a921b9ec8508920d7077" gracePeriod=15 Feb 28 13:32:13 crc kubenswrapper[4897]: I0228 13:32:13.522107 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5"] Feb 28 13:32:13 crc kubenswrapper[4897]: W0228 13:32:13.582157 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc76ed8b8_228d_4263_addb_9571183ab82d.slice/crio-37fa50eb83f2eb75fde89b3872de9258dcaae0ab7829bd0436a61f0103d171e3 WatchSource:0}: Error finding container 37fa50eb83f2eb75fde89b3872de9258dcaae0ab7829bd0436a61f0103d171e3: Status 404 returned 
error can't find the container with id 37fa50eb83f2eb75fde89b3872de9258dcaae0ab7829bd0436a61f0103d171e3 Feb 28 13:32:13 crc kubenswrapper[4897]: I0228 13:32:13.840028 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-rd9tl_3423cf07-c57b-41f3-82da-f497649699db/console/0.log" Feb 28 13:32:13 crc kubenswrapper[4897]: I0228 13:32:13.840329 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-rd9tl" Feb 28 13:32:13 crc kubenswrapper[4897]: I0228 13:32:13.950874 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3423cf07-c57b-41f3-82da-f497649699db-console-oauth-config\") pod \"3423cf07-c57b-41f3-82da-f497649699db\" (UID: \"3423cf07-c57b-41f3-82da-f497649699db\") " Feb 28 13:32:13 crc kubenswrapper[4897]: I0228 13:32:13.951237 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3423cf07-c57b-41f3-82da-f497649699db-service-ca\") pod \"3423cf07-c57b-41f3-82da-f497649699db\" (UID: \"3423cf07-c57b-41f3-82da-f497649699db\") " Feb 28 13:32:13 crc kubenswrapper[4897]: I0228 13:32:13.951272 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4km6\" (UniqueName: \"kubernetes.io/projected/3423cf07-c57b-41f3-82da-f497649699db-kube-api-access-t4km6\") pod \"3423cf07-c57b-41f3-82da-f497649699db\" (UID: \"3423cf07-c57b-41f3-82da-f497649699db\") " Feb 28 13:32:13 crc kubenswrapper[4897]: I0228 13:32:13.951349 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3423cf07-c57b-41f3-82da-f497649699db-trusted-ca-bundle\") pod \"3423cf07-c57b-41f3-82da-f497649699db\" (UID: \"3423cf07-c57b-41f3-82da-f497649699db\") " Feb 28 13:32:13 crc kubenswrapper[4897]: 
I0228 13:32:13.951393 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3423cf07-c57b-41f3-82da-f497649699db-console-config\") pod \"3423cf07-c57b-41f3-82da-f497649699db\" (UID: \"3423cf07-c57b-41f3-82da-f497649699db\") " Feb 28 13:32:13 crc kubenswrapper[4897]: I0228 13:32:13.951432 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3423cf07-c57b-41f3-82da-f497649699db-oauth-serving-cert\") pod \"3423cf07-c57b-41f3-82da-f497649699db\" (UID: \"3423cf07-c57b-41f3-82da-f497649699db\") " Feb 28 13:32:13 crc kubenswrapper[4897]: I0228 13:32:13.951487 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3423cf07-c57b-41f3-82da-f497649699db-console-serving-cert\") pod \"3423cf07-c57b-41f3-82da-f497649699db\" (UID: \"3423cf07-c57b-41f3-82da-f497649699db\") " Feb 28 13:32:13 crc kubenswrapper[4897]: I0228 13:32:13.952177 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3423cf07-c57b-41f3-82da-f497649699db-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "3423cf07-c57b-41f3-82da-f497649699db" (UID: "3423cf07-c57b-41f3-82da-f497649699db"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:32:13 crc kubenswrapper[4897]: I0228 13:32:13.952169 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3423cf07-c57b-41f3-82da-f497649699db-console-config" (OuterVolumeSpecName: "console-config") pod "3423cf07-c57b-41f3-82da-f497649699db" (UID: "3423cf07-c57b-41f3-82da-f497649699db"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:32:13 crc kubenswrapper[4897]: I0228 13:32:13.952218 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3423cf07-c57b-41f3-82da-f497649699db-service-ca" (OuterVolumeSpecName: "service-ca") pod "3423cf07-c57b-41f3-82da-f497649699db" (UID: "3423cf07-c57b-41f3-82da-f497649699db"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:32:13 crc kubenswrapper[4897]: I0228 13:32:13.952263 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3423cf07-c57b-41f3-82da-f497649699db-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "3423cf07-c57b-41f3-82da-f497649699db" (UID: "3423cf07-c57b-41f3-82da-f497649699db"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:32:13 crc kubenswrapper[4897]: I0228 13:32:13.956245 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3423cf07-c57b-41f3-82da-f497649699db-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "3423cf07-c57b-41f3-82da-f497649699db" (UID: "3423cf07-c57b-41f3-82da-f497649699db"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:32:13 crc kubenswrapper[4897]: I0228 13:32:13.956521 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3423cf07-c57b-41f3-82da-f497649699db-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "3423cf07-c57b-41f3-82da-f497649699db" (UID: "3423cf07-c57b-41f3-82da-f497649699db"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:32:13 crc kubenswrapper[4897]: I0228 13:32:13.956882 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3423cf07-c57b-41f3-82da-f497649699db-kube-api-access-t4km6" (OuterVolumeSpecName: "kube-api-access-t4km6") pod "3423cf07-c57b-41f3-82da-f497649699db" (UID: "3423cf07-c57b-41f3-82da-f497649699db"). InnerVolumeSpecName "kube-api-access-t4km6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:32:14 crc kubenswrapper[4897]: I0228 13:32:14.053545 4897 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3423cf07-c57b-41f3-82da-f497649699db-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:32:14 crc kubenswrapper[4897]: I0228 13:32:14.053601 4897 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3423cf07-c57b-41f3-82da-f497649699db-console-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:32:14 crc kubenswrapper[4897]: I0228 13:32:14.053621 4897 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3423cf07-c57b-41f3-82da-f497649699db-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 13:32:14 crc kubenswrapper[4897]: I0228 13:32:14.053641 4897 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3423cf07-c57b-41f3-82da-f497649699db-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 13:32:14 crc kubenswrapper[4897]: I0228 13:32:14.053658 4897 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3423cf07-c57b-41f3-82da-f497649699db-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:32:14 crc kubenswrapper[4897]: I0228 13:32:14.053675 4897 reconciler_common.go:293] "Volume detached 
for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3423cf07-c57b-41f3-82da-f497649699db-service-ca\") on node \"crc\" DevicePath \"\"" Feb 28 13:32:14 crc kubenswrapper[4897]: I0228 13:32:14.053696 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4km6\" (UniqueName: \"kubernetes.io/projected/3423cf07-c57b-41f3-82da-f497649699db-kube-api-access-t4km6\") on node \"crc\" DevicePath \"\"" Feb 28 13:32:14 crc kubenswrapper[4897]: I0228 13:32:14.511224 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-rd9tl_3423cf07-c57b-41f3-82da-f497649699db/console/0.log" Feb 28 13:32:14 crc kubenswrapper[4897]: I0228 13:32:14.511347 4897 generic.go:334] "Generic (PLEG): container finished" podID="3423cf07-c57b-41f3-82da-f497649699db" containerID="e7cd43564f47d2ab118b814b2ae07964f30495d30981a921b9ec8508920d7077" exitCode=2 Feb 28 13:32:14 crc kubenswrapper[4897]: I0228 13:32:14.511481 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-rd9tl" Feb 28 13:32:14 crc kubenswrapper[4897]: I0228 13:32:14.511499 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rd9tl" event={"ID":"3423cf07-c57b-41f3-82da-f497649699db","Type":"ContainerDied","Data":"e7cd43564f47d2ab118b814b2ae07964f30495d30981a921b9ec8508920d7077"} Feb 28 13:32:14 crc kubenswrapper[4897]: I0228 13:32:14.511621 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rd9tl" event={"ID":"3423cf07-c57b-41f3-82da-f497649699db","Type":"ContainerDied","Data":"351f8404c82b5d60438845f8e04653de30fbb6cd608363c4eb28eae7d8a6807c"} Feb 28 13:32:14 crc kubenswrapper[4897]: I0228 13:32:14.511697 4897 scope.go:117] "RemoveContainer" containerID="e7cd43564f47d2ab118b814b2ae07964f30495d30981a921b9ec8508920d7077" Feb 28 13:32:14 crc kubenswrapper[4897]: I0228 13:32:14.514632 4897 generic.go:334] "Generic (PLEG): container finished" podID="c76ed8b8-228d-4263-addb-9571183ab82d" containerID="aafcd29229b15b6a7a1085d9e48dabeb43cd3c087b82ef2ea737cf5e96dab3a1" exitCode=0 Feb 28 13:32:14 crc kubenswrapper[4897]: I0228 13:32:14.514688 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5" event={"ID":"c76ed8b8-228d-4263-addb-9571183ab82d","Type":"ContainerDied","Data":"aafcd29229b15b6a7a1085d9e48dabeb43cd3c087b82ef2ea737cf5e96dab3a1"} Feb 28 13:32:14 crc kubenswrapper[4897]: I0228 13:32:14.514728 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5" event={"ID":"c76ed8b8-228d-4263-addb-9571183ab82d","Type":"ContainerStarted","Data":"37fa50eb83f2eb75fde89b3872de9258dcaae0ab7829bd0436a61f0103d171e3"} Feb 28 13:32:14 crc kubenswrapper[4897]: I0228 13:32:14.544425 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-console/console-f9d7485db-rd9tl"] Feb 28 13:32:14 crc kubenswrapper[4897]: I0228 13:32:14.548336 4897 scope.go:117] "RemoveContainer" containerID="e7cd43564f47d2ab118b814b2ae07964f30495d30981a921b9ec8508920d7077" Feb 28 13:32:14 crc kubenswrapper[4897]: I0228 13:32:14.551332 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-rd9tl"] Feb 28 13:32:14 crc kubenswrapper[4897]: E0228 13:32:14.551745 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7cd43564f47d2ab118b814b2ae07964f30495d30981a921b9ec8508920d7077\": container with ID starting with e7cd43564f47d2ab118b814b2ae07964f30495d30981a921b9ec8508920d7077 not found: ID does not exist" containerID="e7cd43564f47d2ab118b814b2ae07964f30495d30981a921b9ec8508920d7077" Feb 28 13:32:14 crc kubenswrapper[4897]: I0228 13:32:14.551820 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7cd43564f47d2ab118b814b2ae07964f30495d30981a921b9ec8508920d7077"} err="failed to get container status \"e7cd43564f47d2ab118b814b2ae07964f30495d30981a921b9ec8508920d7077\": rpc error: code = NotFound desc = could not find container \"e7cd43564f47d2ab118b814b2ae07964f30495d30981a921b9ec8508920d7077\": container with ID starting with e7cd43564f47d2ab118b814b2ae07964f30495d30981a921b9ec8508920d7077 not found: ID does not exist" Feb 28 13:32:16 crc kubenswrapper[4897]: I0228 13:32:16.469062 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3423cf07-c57b-41f3-82da-f497649699db" path="/var/lib/kubelet/pods/3423cf07-c57b-41f3-82da-f497649699db/volumes" Feb 28 13:32:16 crc kubenswrapper[4897]: I0228 13:32:16.540630 4897 generic.go:334] "Generic (PLEG): container finished" podID="c76ed8b8-228d-4263-addb-9571183ab82d" containerID="739dc34778b569bee3696fad7d84a21ac8a837ef04354822639d7040f1860fba" exitCode=0 Feb 28 13:32:16 crc 
kubenswrapper[4897]: I0228 13:32:16.540715 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5" event={"ID":"c76ed8b8-228d-4263-addb-9571183ab82d","Type":"ContainerDied","Data":"739dc34778b569bee3696fad7d84a21ac8a837ef04354822639d7040f1860fba"} Feb 28 13:32:17 crc kubenswrapper[4897]: I0228 13:32:17.552203 4897 generic.go:334] "Generic (PLEG): container finished" podID="c76ed8b8-228d-4263-addb-9571183ab82d" containerID="194de40caddb05c55da6e81e9aabe79d5e3e06bf3dd66644ae0429666bc61844" exitCode=0 Feb 28 13:32:17 crc kubenswrapper[4897]: I0228 13:32:17.552260 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5" event={"ID":"c76ed8b8-228d-4263-addb-9571183ab82d","Type":"ContainerDied","Data":"194de40caddb05c55da6e81e9aabe79d5e3e06bf3dd66644ae0429666bc61844"} Feb 28 13:32:18 crc kubenswrapper[4897]: I0228 13:32:18.891991 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5" Feb 28 13:32:19 crc kubenswrapper[4897]: I0228 13:32:19.020584 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xh8p\" (UniqueName: \"kubernetes.io/projected/c76ed8b8-228d-4263-addb-9571183ab82d-kube-api-access-5xh8p\") pod \"c76ed8b8-228d-4263-addb-9571183ab82d\" (UID: \"c76ed8b8-228d-4263-addb-9571183ab82d\") " Feb 28 13:32:19 crc kubenswrapper[4897]: I0228 13:32:19.020675 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c76ed8b8-228d-4263-addb-9571183ab82d-util\") pod \"c76ed8b8-228d-4263-addb-9571183ab82d\" (UID: \"c76ed8b8-228d-4263-addb-9571183ab82d\") " Feb 28 13:32:19 crc kubenswrapper[4897]: I0228 13:32:19.020733 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c76ed8b8-228d-4263-addb-9571183ab82d-bundle\") pod \"c76ed8b8-228d-4263-addb-9571183ab82d\" (UID: \"c76ed8b8-228d-4263-addb-9571183ab82d\") " Feb 28 13:32:19 crc kubenswrapper[4897]: I0228 13:32:19.022714 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c76ed8b8-228d-4263-addb-9571183ab82d-bundle" (OuterVolumeSpecName: "bundle") pod "c76ed8b8-228d-4263-addb-9571183ab82d" (UID: "c76ed8b8-228d-4263-addb-9571183ab82d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:32:19 crc kubenswrapper[4897]: I0228 13:32:19.031466 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c76ed8b8-228d-4263-addb-9571183ab82d-kube-api-access-5xh8p" (OuterVolumeSpecName: "kube-api-access-5xh8p") pod "c76ed8b8-228d-4263-addb-9571183ab82d" (UID: "c76ed8b8-228d-4263-addb-9571183ab82d"). InnerVolumeSpecName "kube-api-access-5xh8p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:32:19 crc kubenswrapper[4897]: I0228 13:32:19.051019 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c76ed8b8-228d-4263-addb-9571183ab82d-util" (OuterVolumeSpecName: "util") pod "c76ed8b8-228d-4263-addb-9571183ab82d" (UID: "c76ed8b8-228d-4263-addb-9571183ab82d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:32:19 crc kubenswrapper[4897]: I0228 13:32:19.122239 4897 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c76ed8b8-228d-4263-addb-9571183ab82d-util\") on node \"crc\" DevicePath \"\"" Feb 28 13:32:19 crc kubenswrapper[4897]: I0228 13:32:19.122274 4897 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c76ed8b8-228d-4263-addb-9571183ab82d-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:32:19 crc kubenswrapper[4897]: I0228 13:32:19.122283 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xh8p\" (UniqueName: \"kubernetes.io/projected/c76ed8b8-228d-4263-addb-9571183ab82d-kube-api-access-5xh8p\") on node \"crc\" DevicePath \"\"" Feb 28 13:32:19 crc kubenswrapper[4897]: I0228 13:32:19.573432 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5" event={"ID":"c76ed8b8-228d-4263-addb-9571183ab82d","Type":"ContainerDied","Data":"37fa50eb83f2eb75fde89b3872de9258dcaae0ab7829bd0436a61f0103d171e3"} Feb 28 13:32:19 crc kubenswrapper[4897]: I0228 13:32:19.573807 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37fa50eb83f2eb75fde89b3872de9258dcaae0ab7829bd0436a61f0103d171e3" Feb 28 13:32:19 crc kubenswrapper[4897]: I0228 13:32:19.573480 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5" Feb 28 13:32:27 crc kubenswrapper[4897]: I0228 13:32:27.406468 4897 scope.go:117] "RemoveContainer" containerID="9318decd20936c1121212d230c92e977ffa0c5aa0bffb21002d843fac853b8bb" Feb 28 13:32:30 crc kubenswrapper[4897]: I0228 13:32:30.104450 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7996b9d6bf-xmdxr"] Feb 28 13:32:30 crc kubenswrapper[4897]: E0228 13:32:30.105162 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c76ed8b8-228d-4263-addb-9571183ab82d" containerName="extract" Feb 28 13:32:30 crc kubenswrapper[4897]: I0228 13:32:30.105174 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="c76ed8b8-228d-4263-addb-9571183ab82d" containerName="extract" Feb 28 13:32:30 crc kubenswrapper[4897]: E0228 13:32:30.105189 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c76ed8b8-228d-4263-addb-9571183ab82d" containerName="util" Feb 28 13:32:30 crc kubenswrapper[4897]: I0228 13:32:30.105195 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="c76ed8b8-228d-4263-addb-9571183ab82d" containerName="util" Feb 28 13:32:30 crc kubenswrapper[4897]: E0228 13:32:30.105205 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c76ed8b8-228d-4263-addb-9571183ab82d" containerName="pull" Feb 28 13:32:30 crc kubenswrapper[4897]: I0228 13:32:30.105211 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="c76ed8b8-228d-4263-addb-9571183ab82d" containerName="pull" Feb 28 13:32:30 crc kubenswrapper[4897]: E0228 13:32:30.105218 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3423cf07-c57b-41f3-82da-f497649699db" containerName="console" Feb 28 13:32:30 crc kubenswrapper[4897]: I0228 13:32:30.105224 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="3423cf07-c57b-41f3-82da-f497649699db" containerName="console" 
Feb 28 13:32:30 crc kubenswrapper[4897]: I0228 13:32:30.105332 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="3423cf07-c57b-41f3-82da-f497649699db" containerName="console" Feb 28 13:32:30 crc kubenswrapper[4897]: I0228 13:32:30.105348 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="c76ed8b8-228d-4263-addb-9571183ab82d" containerName="extract" Feb 28 13:32:30 crc kubenswrapper[4897]: I0228 13:32:30.105729 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7996b9d6bf-xmdxr" Feb 28 13:32:30 crc kubenswrapper[4897]: I0228 13:32:30.108397 4897 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 28 13:32:30 crc kubenswrapper[4897]: I0228 13:32:30.108661 4897 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 28 13:32:30 crc kubenswrapper[4897]: I0228 13:32:30.108781 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 28 13:32:30 crc kubenswrapper[4897]: I0228 13:32:30.108837 4897 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-hx99f" Feb 28 13:32:30 crc kubenswrapper[4897]: I0228 13:32:30.109120 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 28 13:32:30 crc kubenswrapper[4897]: I0228 13:32:30.122522 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7996b9d6bf-xmdxr"] Feb 28 13:32:30 crc kubenswrapper[4897]: I0228 13:32:30.173599 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1c3404c1-8c8b-4cf9-89dd-8f370ad776e2-webhook-cert\") pod 
\"metallb-operator-controller-manager-7996b9d6bf-xmdxr\" (UID: \"1c3404c1-8c8b-4cf9-89dd-8f370ad776e2\") " pod="metallb-system/metallb-operator-controller-manager-7996b9d6bf-xmdxr" Feb 28 13:32:30 crc kubenswrapper[4897]: I0228 13:32:30.173648 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfbdv\" (UniqueName: \"kubernetes.io/projected/1c3404c1-8c8b-4cf9-89dd-8f370ad776e2-kube-api-access-qfbdv\") pod \"metallb-operator-controller-manager-7996b9d6bf-xmdxr\" (UID: \"1c3404c1-8c8b-4cf9-89dd-8f370ad776e2\") " pod="metallb-system/metallb-operator-controller-manager-7996b9d6bf-xmdxr" Feb 28 13:32:30 crc kubenswrapper[4897]: I0228 13:32:30.173713 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1c3404c1-8c8b-4cf9-89dd-8f370ad776e2-apiservice-cert\") pod \"metallb-operator-controller-manager-7996b9d6bf-xmdxr\" (UID: \"1c3404c1-8c8b-4cf9-89dd-8f370ad776e2\") " pod="metallb-system/metallb-operator-controller-manager-7996b9d6bf-xmdxr" Feb 28 13:32:30 crc kubenswrapper[4897]: I0228 13:32:30.274929 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1c3404c1-8c8b-4cf9-89dd-8f370ad776e2-webhook-cert\") pod \"metallb-operator-controller-manager-7996b9d6bf-xmdxr\" (UID: \"1c3404c1-8c8b-4cf9-89dd-8f370ad776e2\") " pod="metallb-system/metallb-operator-controller-manager-7996b9d6bf-xmdxr" Feb 28 13:32:30 crc kubenswrapper[4897]: I0228 13:32:30.274978 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfbdv\" (UniqueName: \"kubernetes.io/projected/1c3404c1-8c8b-4cf9-89dd-8f370ad776e2-kube-api-access-qfbdv\") pod \"metallb-operator-controller-manager-7996b9d6bf-xmdxr\" (UID: \"1c3404c1-8c8b-4cf9-89dd-8f370ad776e2\") " 
pod="metallb-system/metallb-operator-controller-manager-7996b9d6bf-xmdxr" Feb 28 13:32:30 crc kubenswrapper[4897]: I0228 13:32:30.275038 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1c3404c1-8c8b-4cf9-89dd-8f370ad776e2-apiservice-cert\") pod \"metallb-operator-controller-manager-7996b9d6bf-xmdxr\" (UID: \"1c3404c1-8c8b-4cf9-89dd-8f370ad776e2\") " pod="metallb-system/metallb-operator-controller-manager-7996b9d6bf-xmdxr" Feb 28 13:32:30 crc kubenswrapper[4897]: I0228 13:32:30.279975 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1c3404c1-8c8b-4cf9-89dd-8f370ad776e2-apiservice-cert\") pod \"metallb-operator-controller-manager-7996b9d6bf-xmdxr\" (UID: \"1c3404c1-8c8b-4cf9-89dd-8f370ad776e2\") " pod="metallb-system/metallb-operator-controller-manager-7996b9d6bf-xmdxr" Feb 28 13:32:30 crc kubenswrapper[4897]: I0228 13:32:30.280592 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1c3404c1-8c8b-4cf9-89dd-8f370ad776e2-webhook-cert\") pod \"metallb-operator-controller-manager-7996b9d6bf-xmdxr\" (UID: \"1c3404c1-8c8b-4cf9-89dd-8f370ad776e2\") " pod="metallb-system/metallb-operator-controller-manager-7996b9d6bf-xmdxr" Feb 28 13:32:30 crc kubenswrapper[4897]: I0228 13:32:30.293963 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfbdv\" (UniqueName: \"kubernetes.io/projected/1c3404c1-8c8b-4cf9-89dd-8f370ad776e2-kube-api-access-qfbdv\") pod \"metallb-operator-controller-manager-7996b9d6bf-xmdxr\" (UID: \"1c3404c1-8c8b-4cf9-89dd-8f370ad776e2\") " pod="metallb-system/metallb-operator-controller-manager-7996b9d6bf-xmdxr" Feb 28 13:32:30 crc kubenswrapper[4897]: I0228 13:32:30.363489 4897 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["metallb-system/metallb-operator-webhook-server-cc84c5f94-tk95x"] Feb 28 13:32:30 crc kubenswrapper[4897]: I0228 13:32:30.364176 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-cc84c5f94-tk95x" Feb 28 13:32:30 crc kubenswrapper[4897]: I0228 13:32:30.365963 4897 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 28 13:32:30 crc kubenswrapper[4897]: I0228 13:32:30.368766 4897 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-6zjr2" Feb 28 13:32:30 crc kubenswrapper[4897]: I0228 13:32:30.368769 4897 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 28 13:32:30 crc kubenswrapper[4897]: I0228 13:32:30.381222 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-cc84c5f94-tk95x"] Feb 28 13:32:30 crc kubenswrapper[4897]: I0228 13:32:30.419067 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7996b9d6bf-xmdxr" Feb 28 13:32:30 crc kubenswrapper[4897]: I0228 13:32:30.477590 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3efe124f-7df2-4c2b-ad84-f8674f4d4fb8-webhook-cert\") pod \"metallb-operator-webhook-server-cc84c5f94-tk95x\" (UID: \"3efe124f-7df2-4c2b-ad84-f8674f4d4fb8\") " pod="metallb-system/metallb-operator-webhook-server-cc84c5f94-tk95x" Feb 28 13:32:30 crc kubenswrapper[4897]: I0228 13:32:30.477638 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3efe124f-7df2-4c2b-ad84-f8674f4d4fb8-apiservice-cert\") pod \"metallb-operator-webhook-server-cc84c5f94-tk95x\" (UID: \"3efe124f-7df2-4c2b-ad84-f8674f4d4fb8\") " pod="metallb-system/metallb-operator-webhook-server-cc84c5f94-tk95x" Feb 28 13:32:30 crc kubenswrapper[4897]: I0228 13:32:30.477707 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxh5w\" (UniqueName: \"kubernetes.io/projected/3efe124f-7df2-4c2b-ad84-f8674f4d4fb8-kube-api-access-kxh5w\") pod \"metallb-operator-webhook-server-cc84c5f94-tk95x\" (UID: \"3efe124f-7df2-4c2b-ad84-f8674f4d4fb8\") " pod="metallb-system/metallb-operator-webhook-server-cc84c5f94-tk95x" Feb 28 13:32:30 crc kubenswrapper[4897]: I0228 13:32:30.578404 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3efe124f-7df2-4c2b-ad84-f8674f4d4fb8-apiservice-cert\") pod \"metallb-operator-webhook-server-cc84c5f94-tk95x\" (UID: \"3efe124f-7df2-4c2b-ad84-f8674f4d4fb8\") " pod="metallb-system/metallb-operator-webhook-server-cc84c5f94-tk95x" Feb 28 13:32:30 crc kubenswrapper[4897]: I0228 13:32:30.578518 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-kxh5w\" (UniqueName: \"kubernetes.io/projected/3efe124f-7df2-4c2b-ad84-f8674f4d4fb8-kube-api-access-kxh5w\") pod \"metallb-operator-webhook-server-cc84c5f94-tk95x\" (UID: \"3efe124f-7df2-4c2b-ad84-f8674f4d4fb8\") " pod="metallb-system/metallb-operator-webhook-server-cc84c5f94-tk95x" Feb 28 13:32:30 crc kubenswrapper[4897]: I0228 13:32:30.578571 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3efe124f-7df2-4c2b-ad84-f8674f4d4fb8-webhook-cert\") pod \"metallb-operator-webhook-server-cc84c5f94-tk95x\" (UID: \"3efe124f-7df2-4c2b-ad84-f8674f4d4fb8\") " pod="metallb-system/metallb-operator-webhook-server-cc84c5f94-tk95x" Feb 28 13:32:30 crc kubenswrapper[4897]: I0228 13:32:30.585401 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3efe124f-7df2-4c2b-ad84-f8674f4d4fb8-webhook-cert\") pod \"metallb-operator-webhook-server-cc84c5f94-tk95x\" (UID: \"3efe124f-7df2-4c2b-ad84-f8674f4d4fb8\") " pod="metallb-system/metallb-operator-webhook-server-cc84c5f94-tk95x" Feb 28 13:32:30 crc kubenswrapper[4897]: I0228 13:32:30.587200 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3efe124f-7df2-4c2b-ad84-f8674f4d4fb8-apiservice-cert\") pod \"metallb-operator-webhook-server-cc84c5f94-tk95x\" (UID: \"3efe124f-7df2-4c2b-ad84-f8674f4d4fb8\") " pod="metallb-system/metallb-operator-webhook-server-cc84c5f94-tk95x" Feb 28 13:32:30 crc kubenswrapper[4897]: I0228 13:32:30.613597 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxh5w\" (UniqueName: \"kubernetes.io/projected/3efe124f-7df2-4c2b-ad84-f8674f4d4fb8-kube-api-access-kxh5w\") pod \"metallb-operator-webhook-server-cc84c5f94-tk95x\" (UID: \"3efe124f-7df2-4c2b-ad84-f8674f4d4fb8\") " 
pod="metallb-system/metallb-operator-webhook-server-cc84c5f94-tk95x" Feb 28 13:32:30 crc kubenswrapper[4897]: I0228 13:32:30.678799 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-cc84c5f94-tk95x" Feb 28 13:32:30 crc kubenswrapper[4897]: I0228 13:32:30.869852 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7996b9d6bf-xmdxr"] Feb 28 13:32:30 crc kubenswrapper[4897]: W0228 13:32:30.877437 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c3404c1_8c8b_4cf9_89dd_8f370ad776e2.slice/crio-e9070a383c44a9fae52f28b1b1435ed3adf9523456babec96cd865193153f368 WatchSource:0}: Error finding container e9070a383c44a9fae52f28b1b1435ed3adf9523456babec96cd865193153f368: Status 404 returned error can't find the container with id e9070a383c44a9fae52f28b1b1435ed3adf9523456babec96cd865193153f368 Feb 28 13:32:30 crc kubenswrapper[4897]: I0228 13:32:30.918723 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-cc84c5f94-tk95x"] Feb 28 13:32:30 crc kubenswrapper[4897]: W0228 13:32:30.919155 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3efe124f_7df2_4c2b_ad84_f8674f4d4fb8.slice/crio-e4aaa1807836dcaeaaf11c02e0caf8076c3c6514b6f88a47ce85ee4a1fbaea0d WatchSource:0}: Error finding container e4aaa1807836dcaeaaf11c02e0caf8076c3c6514b6f88a47ce85ee4a1fbaea0d: Status 404 returned error can't find the container with id e4aaa1807836dcaeaaf11c02e0caf8076c3c6514b6f88a47ce85ee4a1fbaea0d Feb 28 13:32:31 crc kubenswrapper[4897]: I0228 13:32:31.665793 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-cc84c5f94-tk95x" 
event={"ID":"3efe124f-7df2-4c2b-ad84-f8674f4d4fb8","Type":"ContainerStarted","Data":"e4aaa1807836dcaeaaf11c02e0caf8076c3c6514b6f88a47ce85ee4a1fbaea0d"} Feb 28 13:32:31 crc kubenswrapper[4897]: I0228 13:32:31.666759 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7996b9d6bf-xmdxr" event={"ID":"1c3404c1-8c8b-4cf9-89dd-8f370ad776e2","Type":"ContainerStarted","Data":"e9070a383c44a9fae52f28b1b1435ed3adf9523456babec96cd865193153f368"} Feb 28 13:32:33 crc kubenswrapper[4897]: I0228 13:32:33.370619 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 13:32:33 crc kubenswrapper[4897]: I0228 13:32:33.370938 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 13:32:34 crc kubenswrapper[4897]: I0228 13:32:34.685812 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7996b9d6bf-xmdxr" event={"ID":"1c3404c1-8c8b-4cf9-89dd-8f370ad776e2","Type":"ContainerStarted","Data":"f543d326f9230be02bc9c96c216079ac8e3db5593768d8c9007857065e70a1b3"} Feb 28 13:32:34 crc kubenswrapper[4897]: I0228 13:32:34.686062 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7996b9d6bf-xmdxr" Feb 28 13:32:36 crc kubenswrapper[4897]: I0228 13:32:36.481950 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/metallb-operator-controller-manager-7996b9d6bf-xmdxr" podStartSLOduration=3.308096902 podStartE2EDuration="6.481928432s" podCreationTimestamp="2026-02-28 13:32:30 +0000 UTC" firstStartedPulling="2026-02-28 13:32:30.879984242 +0000 UTC m=+965.122304899" lastFinishedPulling="2026-02-28 13:32:34.053815762 +0000 UTC m=+968.296136429" observedRunningTime="2026-02-28 13:32:34.709604886 +0000 UTC m=+968.951925543" watchObservedRunningTime="2026-02-28 13:32:36.481928432 +0000 UTC m=+970.724249099" Feb 28 13:32:37 crc kubenswrapper[4897]: I0228 13:32:37.705529 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-cc84c5f94-tk95x" event={"ID":"3efe124f-7df2-4c2b-ad84-f8674f4d4fb8","Type":"ContainerStarted","Data":"f8ee7132b9797cdc1c0cce910ee9464f495d9927036d6638823272c925469f9e"} Feb 28 13:32:37 crc kubenswrapper[4897]: I0228 13:32:37.705942 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-cc84c5f94-tk95x" Feb 28 13:32:37 crc kubenswrapper[4897]: I0228 13:32:37.729761 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-cc84c5f94-tk95x" podStartSLOduration=1.551788642 podStartE2EDuration="7.729733304s" podCreationTimestamp="2026-02-28 13:32:30 +0000 UTC" firstStartedPulling="2026-02-28 13:32:30.921709166 +0000 UTC m=+965.164029823" lastFinishedPulling="2026-02-28 13:32:37.099653818 +0000 UTC m=+971.341974485" observedRunningTime="2026-02-28 13:32:37.723578255 +0000 UTC m=+971.965898952" watchObservedRunningTime="2026-02-28 13:32:37.729733304 +0000 UTC m=+971.972053981" Feb 28 13:32:50 crc kubenswrapper[4897]: I0228 13:32:50.682982 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-cc84c5f94-tk95x" Feb 28 13:33:03 crc kubenswrapper[4897]: I0228 13:33:03.371362 4897 patch_prober.go:28] 
interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 13:33:03 crc kubenswrapper[4897]: I0228 13:33:03.372185 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 13:33:08 crc kubenswrapper[4897]: I0228 13:33:08.450907 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zc4hw"] Feb 28 13:33:08 crc kubenswrapper[4897]: I0228 13:33:08.452490 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zc4hw" Feb 28 13:33:08 crc kubenswrapper[4897]: I0228 13:33:08.481744 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zc4hw"] Feb 28 13:33:08 crc kubenswrapper[4897]: I0228 13:33:08.492873 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8aa66059-af3c-4c52-b26d-967dee7b208a-utilities\") pod \"certified-operators-zc4hw\" (UID: \"8aa66059-af3c-4c52-b26d-967dee7b208a\") " pod="openshift-marketplace/certified-operators-zc4hw" Feb 28 13:33:08 crc kubenswrapper[4897]: I0228 13:33:08.493009 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8aa66059-af3c-4c52-b26d-967dee7b208a-catalog-content\") pod \"certified-operators-zc4hw\" (UID: \"8aa66059-af3c-4c52-b26d-967dee7b208a\") " 
pod="openshift-marketplace/certified-operators-zc4hw" Feb 28 13:33:08 crc kubenswrapper[4897]: I0228 13:33:08.493084 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmsdn\" (UniqueName: \"kubernetes.io/projected/8aa66059-af3c-4c52-b26d-967dee7b208a-kube-api-access-tmsdn\") pod \"certified-operators-zc4hw\" (UID: \"8aa66059-af3c-4c52-b26d-967dee7b208a\") " pod="openshift-marketplace/certified-operators-zc4hw" Feb 28 13:33:08 crc kubenswrapper[4897]: I0228 13:33:08.595286 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8aa66059-af3c-4c52-b26d-967dee7b208a-utilities\") pod \"certified-operators-zc4hw\" (UID: \"8aa66059-af3c-4c52-b26d-967dee7b208a\") " pod="openshift-marketplace/certified-operators-zc4hw" Feb 28 13:33:08 crc kubenswrapper[4897]: I0228 13:33:08.594608 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8aa66059-af3c-4c52-b26d-967dee7b208a-utilities\") pod \"certified-operators-zc4hw\" (UID: \"8aa66059-af3c-4c52-b26d-967dee7b208a\") " pod="openshift-marketplace/certified-operators-zc4hw" Feb 28 13:33:08 crc kubenswrapper[4897]: I0228 13:33:08.595476 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8aa66059-af3c-4c52-b26d-967dee7b208a-catalog-content\") pod \"certified-operators-zc4hw\" (UID: \"8aa66059-af3c-4c52-b26d-967dee7b208a\") " pod="openshift-marketplace/certified-operators-zc4hw" Feb 28 13:33:08 crc kubenswrapper[4897]: I0228 13:33:08.595879 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8aa66059-af3c-4c52-b26d-967dee7b208a-catalog-content\") pod \"certified-operators-zc4hw\" (UID: \"8aa66059-af3c-4c52-b26d-967dee7b208a\") " 
pod="openshift-marketplace/certified-operators-zc4hw" Feb 28 13:33:08 crc kubenswrapper[4897]: I0228 13:33:08.595979 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmsdn\" (UniqueName: \"kubernetes.io/projected/8aa66059-af3c-4c52-b26d-967dee7b208a-kube-api-access-tmsdn\") pod \"certified-operators-zc4hw\" (UID: \"8aa66059-af3c-4c52-b26d-967dee7b208a\") " pod="openshift-marketplace/certified-operators-zc4hw" Feb 28 13:33:08 crc kubenswrapper[4897]: I0228 13:33:08.622197 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmsdn\" (UniqueName: \"kubernetes.io/projected/8aa66059-af3c-4c52-b26d-967dee7b208a-kube-api-access-tmsdn\") pod \"certified-operators-zc4hw\" (UID: \"8aa66059-af3c-4c52-b26d-967dee7b208a\") " pod="openshift-marketplace/certified-operators-zc4hw" Feb 28 13:33:08 crc kubenswrapper[4897]: I0228 13:33:08.784269 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zc4hw" Feb 28 13:33:09 crc kubenswrapper[4897]: I0228 13:33:09.136862 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zc4hw"] Feb 28 13:33:09 crc kubenswrapper[4897]: I0228 13:33:09.990060 4897 generic.go:334] "Generic (PLEG): container finished" podID="8aa66059-af3c-4c52-b26d-967dee7b208a" containerID="39d3fb5f1525f30da30e035d3c81399d47a5446752db09ac90736fbfdfa8f127" exitCode=0 Feb 28 13:33:09 crc kubenswrapper[4897]: I0228 13:33:09.990176 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zc4hw" event={"ID":"8aa66059-af3c-4c52-b26d-967dee7b208a","Type":"ContainerDied","Data":"39d3fb5f1525f30da30e035d3c81399d47a5446752db09ac90736fbfdfa8f127"} Feb 28 13:33:09 crc kubenswrapper[4897]: I0228 13:33:09.992958 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zc4hw" 
event={"ID":"8aa66059-af3c-4c52-b26d-967dee7b208a","Type":"ContainerStarted","Data":"af3997c7cb743167c5fdea7858aa467678880cdfbd0dcac071d31932c3e136f8"} Feb 28 13:33:10 crc kubenswrapper[4897]: I0228 13:33:10.421601 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7996b9d6bf-xmdxr" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.003900 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zc4hw" event={"ID":"8aa66059-af3c-4c52-b26d-967dee7b208a","Type":"ContainerStarted","Data":"8de903037cc6701524b7388d412861546c0483e069f2140070e136699935dd2a"} Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.244577 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-m4dnz"] Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.245263 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-m4dnz" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.246801 4897 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.250186 4897 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-svstl" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.263864 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-mct2w"] Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.270984 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-mct2w" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.274084 4897 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.274157 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-m4dnz"] Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.274275 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.333634 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-xqdlt"] Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.334499 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-xqdlt" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.336176 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.336365 4897 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-qpf7c" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.337017 4897 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.337114 4897 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.357689 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-86ddb6bd46-jz56q"] Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.358647 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-86ddb6bd46-jz56q" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.361351 4897 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.371006 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-jz56q"] Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.433891 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f599a5af-52e7-429e-9159-2959003096c7-memberlist\") pod \"speaker-xqdlt\" (UID: \"f599a5af-52e7-429e-9159-2959003096c7\") " pod="metallb-system/speaker-xqdlt" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.433968 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv2c7\" (UniqueName: \"kubernetes.io/projected/6019677c-387b-4cb8-9c0f-4607f2b5971c-kube-api-access-gv2c7\") pod \"frr-k8s-webhook-server-7f989f654f-m4dnz\" (UID: \"6019677c-387b-4cb8-9c0f-4607f2b5971c\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-m4dnz" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.434007 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6019677c-387b-4cb8-9c0f-4607f2b5971c-cert\") pod \"frr-k8s-webhook-server-7f989f654f-m4dnz\" (UID: \"6019677c-387b-4cb8-9c0f-4607f2b5971c\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-m4dnz" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.434073 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/02f6fadd-b5a9-4d44-aba2-303ab05f15c6-frr-sockets\") pod \"frr-k8s-mct2w\" (UID: \"02f6fadd-b5a9-4d44-aba2-303ab05f15c6\") " 
pod="metallb-system/frr-k8s-mct2w" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.434124 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f599a5af-52e7-429e-9159-2959003096c7-metallb-excludel2\") pod \"speaker-xqdlt\" (UID: \"f599a5af-52e7-429e-9159-2959003096c7\") " pod="metallb-system/speaker-xqdlt" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.434155 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/02f6fadd-b5a9-4d44-aba2-303ab05f15c6-reloader\") pod \"frr-k8s-mct2w\" (UID: \"02f6fadd-b5a9-4d44-aba2-303ab05f15c6\") " pod="metallb-system/frr-k8s-mct2w" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.434190 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/02f6fadd-b5a9-4d44-aba2-303ab05f15c6-frr-conf\") pod \"frr-k8s-mct2w\" (UID: \"02f6fadd-b5a9-4d44-aba2-303ab05f15c6\") " pod="metallb-system/frr-k8s-mct2w" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.434215 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f599a5af-52e7-429e-9159-2959003096c7-metrics-certs\") pod \"speaker-xqdlt\" (UID: \"f599a5af-52e7-429e-9159-2959003096c7\") " pod="metallb-system/speaker-xqdlt" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.434254 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whz6h\" (UniqueName: \"kubernetes.io/projected/02f6fadd-b5a9-4d44-aba2-303ab05f15c6-kube-api-access-whz6h\") pod \"frr-k8s-mct2w\" (UID: \"02f6fadd-b5a9-4d44-aba2-303ab05f15c6\") " pod="metallb-system/frr-k8s-mct2w" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 
13:33:11.434287 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn9cb\" (UniqueName: \"kubernetes.io/projected/f599a5af-52e7-429e-9159-2959003096c7-kube-api-access-jn9cb\") pod \"speaker-xqdlt\" (UID: \"f599a5af-52e7-429e-9159-2959003096c7\") " pod="metallb-system/speaker-xqdlt" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.434346 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/02f6fadd-b5a9-4d44-aba2-303ab05f15c6-metrics-certs\") pod \"frr-k8s-mct2w\" (UID: \"02f6fadd-b5a9-4d44-aba2-303ab05f15c6\") " pod="metallb-system/frr-k8s-mct2w" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.434448 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/02f6fadd-b5a9-4d44-aba2-303ab05f15c6-frr-startup\") pod \"frr-k8s-mct2w\" (UID: \"02f6fadd-b5a9-4d44-aba2-303ab05f15c6\") " pod="metallb-system/frr-k8s-mct2w" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.434492 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/02f6fadd-b5a9-4d44-aba2-303ab05f15c6-metrics\") pod \"frr-k8s-mct2w\" (UID: \"02f6fadd-b5a9-4d44-aba2-303ab05f15c6\") " pod="metallb-system/frr-k8s-mct2w" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.536131 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/02f6fadd-b5a9-4d44-aba2-303ab05f15c6-frr-startup\") pod \"frr-k8s-mct2w\" (UID: \"02f6fadd-b5a9-4d44-aba2-303ab05f15c6\") " pod="metallb-system/frr-k8s-mct2w" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.536181 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-qvp5p\" (UniqueName: \"kubernetes.io/projected/5a9e2956-4bb5-4986-a8f0-a1a5bfd230ce-kube-api-access-qvp5p\") pod \"controller-86ddb6bd46-jz56q\" (UID: \"5a9e2956-4bb5-4986-a8f0-a1a5bfd230ce\") " pod="metallb-system/controller-86ddb6bd46-jz56q" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.536212 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/02f6fadd-b5a9-4d44-aba2-303ab05f15c6-metrics\") pod \"frr-k8s-mct2w\" (UID: \"02f6fadd-b5a9-4d44-aba2-303ab05f15c6\") " pod="metallb-system/frr-k8s-mct2w" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.536236 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f599a5af-52e7-429e-9159-2959003096c7-memberlist\") pod \"speaker-xqdlt\" (UID: \"f599a5af-52e7-429e-9159-2959003096c7\") " pod="metallb-system/speaker-xqdlt" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.536254 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv2c7\" (UniqueName: \"kubernetes.io/projected/6019677c-387b-4cb8-9c0f-4607f2b5971c-kube-api-access-gv2c7\") pod \"frr-k8s-webhook-server-7f989f654f-m4dnz\" (UID: \"6019677c-387b-4cb8-9c0f-4607f2b5971c\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-m4dnz" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.536272 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a9e2956-4bb5-4986-a8f0-a1a5bfd230ce-cert\") pod \"controller-86ddb6bd46-jz56q\" (UID: \"5a9e2956-4bb5-4986-a8f0-a1a5bfd230ce\") " pod="metallb-system/controller-86ddb6bd46-jz56q" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.536295 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/6019677c-387b-4cb8-9c0f-4607f2b5971c-cert\") pod \"frr-k8s-webhook-server-7f989f654f-m4dnz\" (UID: \"6019677c-387b-4cb8-9c0f-4607f2b5971c\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-m4dnz" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.536326 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/02f6fadd-b5a9-4d44-aba2-303ab05f15c6-frr-sockets\") pod \"frr-k8s-mct2w\" (UID: \"02f6fadd-b5a9-4d44-aba2-303ab05f15c6\") " pod="metallb-system/frr-k8s-mct2w" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.536353 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f599a5af-52e7-429e-9159-2959003096c7-metallb-excludel2\") pod \"speaker-xqdlt\" (UID: \"f599a5af-52e7-429e-9159-2959003096c7\") " pod="metallb-system/speaker-xqdlt" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.536369 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/02f6fadd-b5a9-4d44-aba2-303ab05f15c6-reloader\") pod \"frr-k8s-mct2w\" (UID: \"02f6fadd-b5a9-4d44-aba2-303ab05f15c6\") " pod="metallb-system/frr-k8s-mct2w" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.536386 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/02f6fadd-b5a9-4d44-aba2-303ab05f15c6-frr-conf\") pod \"frr-k8s-mct2w\" (UID: \"02f6fadd-b5a9-4d44-aba2-303ab05f15c6\") " pod="metallb-system/frr-k8s-mct2w" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.536411 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f599a5af-52e7-429e-9159-2959003096c7-metrics-certs\") pod \"speaker-xqdlt\" (UID: \"f599a5af-52e7-429e-9159-2959003096c7\") " 
pod="metallb-system/speaker-xqdlt" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.536432 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whz6h\" (UniqueName: \"kubernetes.io/projected/02f6fadd-b5a9-4d44-aba2-303ab05f15c6-kube-api-access-whz6h\") pod \"frr-k8s-mct2w\" (UID: \"02f6fadd-b5a9-4d44-aba2-303ab05f15c6\") " pod="metallb-system/frr-k8s-mct2w" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.536448 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn9cb\" (UniqueName: \"kubernetes.io/projected/f599a5af-52e7-429e-9159-2959003096c7-kube-api-access-jn9cb\") pod \"speaker-xqdlt\" (UID: \"f599a5af-52e7-429e-9159-2959003096c7\") " pod="metallb-system/speaker-xqdlt" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.536469 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a9e2956-4bb5-4986-a8f0-a1a5bfd230ce-metrics-certs\") pod \"controller-86ddb6bd46-jz56q\" (UID: \"5a9e2956-4bb5-4986-a8f0-a1a5bfd230ce\") " pod="metallb-system/controller-86ddb6bd46-jz56q" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.536486 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/02f6fadd-b5a9-4d44-aba2-303ab05f15c6-metrics-certs\") pod \"frr-k8s-mct2w\" (UID: \"02f6fadd-b5a9-4d44-aba2-303ab05f15c6\") " pod="metallb-system/frr-k8s-mct2w" Feb 28 13:33:11 crc kubenswrapper[4897]: E0228 13:33:11.536547 4897 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 28 13:33:11 crc kubenswrapper[4897]: E0228 13:33:11.536624 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f599a5af-52e7-429e-9159-2959003096c7-memberlist podName:f599a5af-52e7-429e-9159-2959003096c7 nodeName:}" failed. 
No retries permitted until 2026-02-28 13:33:12.036599465 +0000 UTC m=+1006.278920162 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/f599a5af-52e7-429e-9159-2959003096c7-memberlist") pod "speaker-xqdlt" (UID: "f599a5af-52e7-429e-9159-2959003096c7") : secret "metallb-memberlist" not found Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.536884 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/02f6fadd-b5a9-4d44-aba2-303ab05f15c6-metrics\") pod \"frr-k8s-mct2w\" (UID: \"02f6fadd-b5a9-4d44-aba2-303ab05f15c6\") " pod="metallb-system/frr-k8s-mct2w" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.537561 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/02f6fadd-b5a9-4d44-aba2-303ab05f15c6-frr-conf\") pod \"frr-k8s-mct2w\" (UID: \"02f6fadd-b5a9-4d44-aba2-303ab05f15c6\") " pod="metallb-system/frr-k8s-mct2w" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.537599 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/02f6fadd-b5a9-4d44-aba2-303ab05f15c6-frr-startup\") pod \"frr-k8s-mct2w\" (UID: \"02f6fadd-b5a9-4d44-aba2-303ab05f15c6\") " pod="metallb-system/frr-k8s-mct2w" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.537795 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/02f6fadd-b5a9-4d44-aba2-303ab05f15c6-reloader\") pod \"frr-k8s-mct2w\" (UID: \"02f6fadd-b5a9-4d44-aba2-303ab05f15c6\") " pod="metallb-system/frr-k8s-mct2w" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.537876 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f599a5af-52e7-429e-9159-2959003096c7-metallb-excludel2\") pod 
\"speaker-xqdlt\" (UID: \"f599a5af-52e7-429e-9159-2959003096c7\") " pod="metallb-system/speaker-xqdlt" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.537941 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/02f6fadd-b5a9-4d44-aba2-303ab05f15c6-frr-sockets\") pod \"frr-k8s-mct2w\" (UID: \"02f6fadd-b5a9-4d44-aba2-303ab05f15c6\") " pod="metallb-system/frr-k8s-mct2w" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.541651 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6019677c-387b-4cb8-9c0f-4607f2b5971c-cert\") pod \"frr-k8s-webhook-server-7f989f654f-m4dnz\" (UID: \"6019677c-387b-4cb8-9c0f-4607f2b5971c\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-m4dnz" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.542876 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/02f6fadd-b5a9-4d44-aba2-303ab05f15c6-metrics-certs\") pod \"frr-k8s-mct2w\" (UID: \"02f6fadd-b5a9-4d44-aba2-303ab05f15c6\") " pod="metallb-system/frr-k8s-mct2w" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.562928 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whz6h\" (UniqueName: \"kubernetes.io/projected/02f6fadd-b5a9-4d44-aba2-303ab05f15c6-kube-api-access-whz6h\") pod \"frr-k8s-mct2w\" (UID: \"02f6fadd-b5a9-4d44-aba2-303ab05f15c6\") " pod="metallb-system/frr-k8s-mct2w" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.563030 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f599a5af-52e7-429e-9159-2959003096c7-metrics-certs\") pod \"speaker-xqdlt\" (UID: \"f599a5af-52e7-429e-9159-2959003096c7\") " pod="metallb-system/speaker-xqdlt" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.564139 4897 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn9cb\" (UniqueName: \"kubernetes.io/projected/f599a5af-52e7-429e-9159-2959003096c7-kube-api-access-jn9cb\") pod \"speaker-xqdlt\" (UID: \"f599a5af-52e7-429e-9159-2959003096c7\") " pod="metallb-system/speaker-xqdlt" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.570331 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv2c7\" (UniqueName: \"kubernetes.io/projected/6019677c-387b-4cb8-9c0f-4607f2b5971c-kube-api-access-gv2c7\") pod \"frr-k8s-webhook-server-7f989f654f-m4dnz\" (UID: \"6019677c-387b-4cb8-9c0f-4607f2b5971c\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-m4dnz" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.628537 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-mct2w" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.638299 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvp5p\" (UniqueName: \"kubernetes.io/projected/5a9e2956-4bb5-4986-a8f0-a1a5bfd230ce-kube-api-access-qvp5p\") pod \"controller-86ddb6bd46-jz56q\" (UID: \"5a9e2956-4bb5-4986-a8f0-a1a5bfd230ce\") " pod="metallb-system/controller-86ddb6bd46-jz56q" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.638443 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a9e2956-4bb5-4986-a8f0-a1a5bfd230ce-cert\") pod \"controller-86ddb6bd46-jz56q\" (UID: \"5a9e2956-4bb5-4986-a8f0-a1a5bfd230ce\") " pod="metallb-system/controller-86ddb6bd46-jz56q" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.638558 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a9e2956-4bb5-4986-a8f0-a1a5bfd230ce-metrics-certs\") pod \"controller-86ddb6bd46-jz56q\" (UID: 
\"5a9e2956-4bb5-4986-a8f0-a1a5bfd230ce\") " pod="metallb-system/controller-86ddb6bd46-jz56q" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.643628 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a9e2956-4bb5-4986-a8f0-a1a5bfd230ce-cert\") pod \"controller-86ddb6bd46-jz56q\" (UID: \"5a9e2956-4bb5-4986-a8f0-a1a5bfd230ce\") " pod="metallb-system/controller-86ddb6bd46-jz56q" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.643862 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a9e2956-4bb5-4986-a8f0-a1a5bfd230ce-metrics-certs\") pod \"controller-86ddb6bd46-jz56q\" (UID: \"5a9e2956-4bb5-4986-a8f0-a1a5bfd230ce\") " pod="metallb-system/controller-86ddb6bd46-jz56q" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.667618 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvp5p\" (UniqueName: \"kubernetes.io/projected/5a9e2956-4bb5-4986-a8f0-a1a5bfd230ce-kube-api-access-qvp5p\") pod \"controller-86ddb6bd46-jz56q\" (UID: \"5a9e2956-4bb5-4986-a8f0-a1a5bfd230ce\") " pod="metallb-system/controller-86ddb6bd46-jz56q" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.672330 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-86ddb6bd46-jz56q" Feb 28 13:33:11 crc kubenswrapper[4897]: I0228 13:33:11.859481 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-m4dnz" Feb 28 13:33:12 crc kubenswrapper[4897]: I0228 13:33:12.010088 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mct2w" event={"ID":"02f6fadd-b5a9-4d44-aba2-303ab05f15c6","Type":"ContainerStarted","Data":"90da7c62deabf5731579e763d927ccdbf6639fbbd14dca5e21e2decd8d31f126"} Feb 28 13:33:12 crc kubenswrapper[4897]: I0228 13:33:12.011721 4897 generic.go:334] "Generic (PLEG): container finished" podID="8aa66059-af3c-4c52-b26d-967dee7b208a" containerID="8de903037cc6701524b7388d412861546c0483e069f2140070e136699935dd2a" exitCode=0 Feb 28 13:33:12 crc kubenswrapper[4897]: I0228 13:33:12.011767 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zc4hw" event={"ID":"8aa66059-af3c-4c52-b26d-967dee7b208a","Type":"ContainerDied","Data":"8de903037cc6701524b7388d412861546c0483e069f2140070e136699935dd2a"} Feb 28 13:33:12 crc kubenswrapper[4897]: I0228 13:33:12.045986 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f599a5af-52e7-429e-9159-2959003096c7-memberlist\") pod \"speaker-xqdlt\" (UID: \"f599a5af-52e7-429e-9159-2959003096c7\") " pod="metallb-system/speaker-xqdlt" Feb 28 13:33:12 crc kubenswrapper[4897]: E0228 13:33:12.046165 4897 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 28 13:33:12 crc kubenswrapper[4897]: E0228 13:33:12.046256 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f599a5af-52e7-429e-9159-2959003096c7-memberlist podName:f599a5af-52e7-429e-9159-2959003096c7 nodeName:}" failed. No retries permitted until 2026-02-28 13:33:13.046234126 +0000 UTC m=+1007.288554793 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/f599a5af-52e7-429e-9159-2959003096c7-memberlist") pod "speaker-xqdlt" (UID: "f599a5af-52e7-429e-9159-2959003096c7") : secret "metallb-memberlist" not found Feb 28 13:33:12 crc kubenswrapper[4897]: I0228 13:33:12.148457 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-jz56q"] Feb 28 13:33:12 crc kubenswrapper[4897]: W0228 13:33:12.150652 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a9e2956_4bb5_4986_a8f0_a1a5bfd230ce.slice/crio-d4257f690c5d547a13b1966b15ea8c37529815f12bfe618031e1e49cb0a930bf WatchSource:0}: Error finding container d4257f690c5d547a13b1966b15ea8c37529815f12bfe618031e1e49cb0a930bf: Status 404 returned error can't find the container with id d4257f690c5d547a13b1966b15ea8c37529815f12bfe618031e1e49cb0a930bf Feb 28 13:33:12 crc kubenswrapper[4897]: I0228 13:33:12.269340 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-m4dnz"] Feb 28 13:33:13 crc kubenswrapper[4897]: I0228 13:33:13.018078 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-jz56q" event={"ID":"5a9e2956-4bb5-4986-a8f0-a1a5bfd230ce","Type":"ContainerStarted","Data":"81f324f7fd7b7722a5ca7917b79b2d5baeaec6bb22c9bfcf4a3dd82a3fcae581"} Feb 28 13:33:13 crc kubenswrapper[4897]: I0228 13:33:13.018117 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-jz56q" event={"ID":"5a9e2956-4bb5-4986-a8f0-a1a5bfd230ce","Type":"ContainerStarted","Data":"fa7715f23ff86f7da88b1e79823ae55cd51ae8fd8fc418d49bdab35b0db9a15b"} Feb 28 13:33:13 crc kubenswrapper[4897]: I0228 13:33:13.018128 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-jz56q" 
event={"ID":"5a9e2956-4bb5-4986-a8f0-a1a5bfd230ce","Type":"ContainerStarted","Data":"d4257f690c5d547a13b1966b15ea8c37529815f12bfe618031e1e49cb0a930bf"} Feb 28 13:33:13 crc kubenswrapper[4897]: I0228 13:33:13.018179 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-86ddb6bd46-jz56q" Feb 28 13:33:13 crc kubenswrapper[4897]: I0228 13:33:13.018908 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-m4dnz" event={"ID":"6019677c-387b-4cb8-9c0f-4607f2b5971c","Type":"ContainerStarted","Data":"230e0e8a5a05103399b8b26b962980aee9a9a3dc5214b164b93f85de158ac752"} Feb 28 13:33:13 crc kubenswrapper[4897]: I0228 13:33:13.033844 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-86ddb6bd46-jz56q" podStartSLOduration=2.033827046 podStartE2EDuration="2.033827046s" podCreationTimestamp="2026-02-28 13:33:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:33:13.030612842 +0000 UTC m=+1007.272933519" watchObservedRunningTime="2026-02-28 13:33:13.033827046 +0000 UTC m=+1007.276147703" Feb 28 13:33:13 crc kubenswrapper[4897]: I0228 13:33:13.061655 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f599a5af-52e7-429e-9159-2959003096c7-memberlist\") pod \"speaker-xqdlt\" (UID: \"f599a5af-52e7-429e-9159-2959003096c7\") " pod="metallb-system/speaker-xqdlt" Feb 28 13:33:13 crc kubenswrapper[4897]: I0228 13:33:13.069872 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f599a5af-52e7-429e-9159-2959003096c7-memberlist\") pod \"speaker-xqdlt\" (UID: \"f599a5af-52e7-429e-9159-2959003096c7\") " pod="metallb-system/speaker-xqdlt" Feb 28 13:33:13 crc kubenswrapper[4897]: I0228 13:33:13.149667 
4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-xqdlt" Feb 28 13:33:14 crc kubenswrapper[4897]: I0228 13:33:14.045482 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-xqdlt" event={"ID":"f599a5af-52e7-429e-9159-2959003096c7","Type":"ContainerStarted","Data":"e6cec3cc85f086f3ea88c60af243b6b47359f617ea0fbcdc376fca81513d8ade"} Feb 28 13:33:14 crc kubenswrapper[4897]: I0228 13:33:14.045788 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-xqdlt" event={"ID":"f599a5af-52e7-429e-9159-2959003096c7","Type":"ContainerStarted","Data":"0bc244a22ae647676017923ce5fcffa002f64e6ec2fd4522b364dae27eb156c6"} Feb 28 13:33:14 crc kubenswrapper[4897]: I0228 13:33:14.049068 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zc4hw" event={"ID":"8aa66059-af3c-4c52-b26d-967dee7b208a","Type":"ContainerStarted","Data":"e6ccbd5534fe56fb2e4c84470caf530fadd3a1055a8c107ba311e06ea79a97a3"} Feb 28 13:33:14 crc kubenswrapper[4897]: I0228 13:33:14.072853 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zc4hw" podStartSLOduration=2.956702929 podStartE2EDuration="6.072838821s" podCreationTimestamp="2026-02-28 13:33:08 +0000 UTC" firstStartedPulling="2026-02-28 13:33:09.992248944 +0000 UTC m=+1004.234569601" lastFinishedPulling="2026-02-28 13:33:13.108384836 +0000 UTC m=+1007.350705493" observedRunningTime="2026-02-28 13:33:14.069277858 +0000 UTC m=+1008.311598515" watchObservedRunningTime="2026-02-28 13:33:14.072838821 +0000 UTC m=+1008.315159478" Feb 28 13:33:15 crc kubenswrapper[4897]: I0228 13:33:15.072695 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-xqdlt" event={"ID":"f599a5af-52e7-429e-9159-2959003096c7","Type":"ContainerStarted","Data":"c89cbd0a01562b669699b9f6da588444ba89c394d6a2b87fd8b0e70a5d4b7fed"} Feb 28 
13:33:15 crc kubenswrapper[4897]: I0228 13:33:15.118242 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-xqdlt" podStartSLOduration=4.118217663 podStartE2EDuration="4.118217663s" podCreationTimestamp="2026-02-28 13:33:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:33:15.114502175 +0000 UTC m=+1009.356822842" watchObservedRunningTime="2026-02-28 13:33:15.118217663 +0000 UTC m=+1009.360538320" Feb 28 13:33:16 crc kubenswrapper[4897]: I0228 13:33:16.079494 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-xqdlt" Feb 28 13:33:18 crc kubenswrapper[4897]: I0228 13:33:18.785063 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zc4hw" Feb 28 13:33:18 crc kubenswrapper[4897]: I0228 13:33:18.785379 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zc4hw" Feb 28 13:33:18 crc kubenswrapper[4897]: I0228 13:33:18.852628 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zc4hw" Feb 28 13:33:19 crc kubenswrapper[4897]: I0228 13:33:19.154541 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zc4hw" Feb 28 13:33:19 crc kubenswrapper[4897]: I0228 13:33:19.197259 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zc4hw"] Feb 28 13:33:20 crc kubenswrapper[4897]: I0228 13:33:20.115977 4897 generic.go:334] "Generic (PLEG): container finished" podID="02f6fadd-b5a9-4d44-aba2-303ab05f15c6" containerID="166a14f73a8d149d82cc0a41ca91803fca57259d31dd21953f79d6891df6fe29" exitCode=0 Feb 28 13:33:20 crc kubenswrapper[4897]: I0228 13:33:20.116093 4897 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mct2w" event={"ID":"02f6fadd-b5a9-4d44-aba2-303ab05f15c6","Type":"ContainerDied","Data":"166a14f73a8d149d82cc0a41ca91803fca57259d31dd21953f79d6891df6fe29"} Feb 28 13:33:20 crc kubenswrapper[4897]: I0228 13:33:20.118846 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-m4dnz" event={"ID":"6019677c-387b-4cb8-9c0f-4607f2b5971c","Type":"ContainerStarted","Data":"562be7d5d5abee72b48c13ea27f9f634ef22d4c719113e1fdbb14741877cfeb2"} Feb 28 13:33:20 crc kubenswrapper[4897]: I0228 13:33:20.119124 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-m4dnz" Feb 28 13:33:20 crc kubenswrapper[4897]: I0228 13:33:20.182127 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-m4dnz" podStartSLOduration=2.062552271 podStartE2EDuration="9.182099725s" podCreationTimestamp="2026-02-28 13:33:11 +0000 UTC" firstStartedPulling="2026-02-28 13:33:12.280430481 +0000 UTC m=+1006.522751168" lastFinishedPulling="2026-02-28 13:33:19.399977965 +0000 UTC m=+1013.642298622" observedRunningTime="2026-02-28 13:33:20.175921425 +0000 UTC m=+1014.418242102" watchObservedRunningTime="2026-02-28 13:33:20.182099725 +0000 UTC m=+1014.424420382" Feb 28 13:33:21 crc kubenswrapper[4897]: I0228 13:33:21.132274 4897 generic.go:334] "Generic (PLEG): container finished" podID="02f6fadd-b5a9-4d44-aba2-303ab05f15c6" containerID="c890558ed0cd7915acbb0dd09f8e9c1b4d33999b1543ace837344e8c34261c70" exitCode=0 Feb 28 13:33:21 crc kubenswrapper[4897]: I0228 13:33:21.132413 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mct2w" event={"ID":"02f6fadd-b5a9-4d44-aba2-303ab05f15c6","Type":"ContainerDied","Data":"c890558ed0cd7915acbb0dd09f8e9c1b4d33999b1543ace837344e8c34261c70"} Feb 28 13:33:21 crc kubenswrapper[4897]: I0228 
13:33:21.133652 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zc4hw" podUID="8aa66059-af3c-4c52-b26d-967dee7b208a" containerName="registry-server" containerID="cri-o://e6ccbd5534fe56fb2e4c84470caf530fadd3a1055a8c107ba311e06ea79a97a3" gracePeriod=2 Feb 28 13:33:21 crc kubenswrapper[4897]: I0228 13:33:21.650353 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zc4hw" Feb 28 13:33:21 crc kubenswrapper[4897]: I0228 13:33:21.799296 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8aa66059-af3c-4c52-b26d-967dee7b208a-catalog-content\") pod \"8aa66059-af3c-4c52-b26d-967dee7b208a\" (UID: \"8aa66059-af3c-4c52-b26d-967dee7b208a\") " Feb 28 13:33:21 crc kubenswrapper[4897]: I0228 13:33:21.799407 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmsdn\" (UniqueName: \"kubernetes.io/projected/8aa66059-af3c-4c52-b26d-967dee7b208a-kube-api-access-tmsdn\") pod \"8aa66059-af3c-4c52-b26d-967dee7b208a\" (UID: \"8aa66059-af3c-4c52-b26d-967dee7b208a\") " Feb 28 13:33:21 crc kubenswrapper[4897]: I0228 13:33:21.799465 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8aa66059-af3c-4c52-b26d-967dee7b208a-utilities\") pod \"8aa66059-af3c-4c52-b26d-967dee7b208a\" (UID: \"8aa66059-af3c-4c52-b26d-967dee7b208a\") " Feb 28 13:33:21 crc kubenswrapper[4897]: I0228 13:33:21.800621 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8aa66059-af3c-4c52-b26d-967dee7b208a-utilities" (OuterVolumeSpecName: "utilities") pod "8aa66059-af3c-4c52-b26d-967dee7b208a" (UID: "8aa66059-af3c-4c52-b26d-967dee7b208a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:33:21 crc kubenswrapper[4897]: I0228 13:33:21.801317 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8aa66059-af3c-4c52-b26d-967dee7b208a-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 13:33:21 crc kubenswrapper[4897]: I0228 13:33:21.806538 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8aa66059-af3c-4c52-b26d-967dee7b208a-kube-api-access-tmsdn" (OuterVolumeSpecName: "kube-api-access-tmsdn") pod "8aa66059-af3c-4c52-b26d-967dee7b208a" (UID: "8aa66059-af3c-4c52-b26d-967dee7b208a"). InnerVolumeSpecName "kube-api-access-tmsdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:33:21 crc kubenswrapper[4897]: I0228 13:33:21.902670 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmsdn\" (UniqueName: \"kubernetes.io/projected/8aa66059-af3c-4c52-b26d-967dee7b208a-kube-api-access-tmsdn\") on node \"crc\" DevicePath \"\"" Feb 28 13:33:22 crc kubenswrapper[4897]: I0228 13:33:22.141160 4897 generic.go:334] "Generic (PLEG): container finished" podID="8aa66059-af3c-4c52-b26d-967dee7b208a" containerID="e6ccbd5534fe56fb2e4c84470caf530fadd3a1055a8c107ba311e06ea79a97a3" exitCode=0 Feb 28 13:33:22 crc kubenswrapper[4897]: I0228 13:33:22.141225 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zc4hw" event={"ID":"8aa66059-af3c-4c52-b26d-967dee7b208a","Type":"ContainerDied","Data":"e6ccbd5534fe56fb2e4c84470caf530fadd3a1055a8c107ba311e06ea79a97a3"} Feb 28 13:33:22 crc kubenswrapper[4897]: I0228 13:33:22.141251 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zc4hw" Feb 28 13:33:22 crc kubenswrapper[4897]: I0228 13:33:22.141273 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zc4hw" event={"ID":"8aa66059-af3c-4c52-b26d-967dee7b208a","Type":"ContainerDied","Data":"af3997c7cb743167c5fdea7858aa467678880cdfbd0dcac071d31932c3e136f8"} Feb 28 13:33:22 crc kubenswrapper[4897]: I0228 13:33:22.141291 4897 scope.go:117] "RemoveContainer" containerID="e6ccbd5534fe56fb2e4c84470caf530fadd3a1055a8c107ba311e06ea79a97a3" Feb 28 13:33:22 crc kubenswrapper[4897]: I0228 13:33:22.144804 4897 generic.go:334] "Generic (PLEG): container finished" podID="02f6fadd-b5a9-4d44-aba2-303ab05f15c6" containerID="718d607b7770872bd1e1b2190141bd5754c1b640df601917cc4b686f73152144" exitCode=0 Feb 28 13:33:22 crc kubenswrapper[4897]: I0228 13:33:22.144840 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mct2w" event={"ID":"02f6fadd-b5a9-4d44-aba2-303ab05f15c6","Type":"ContainerDied","Data":"718d607b7770872bd1e1b2190141bd5754c1b640df601917cc4b686f73152144"} Feb 28 13:33:22 crc kubenswrapper[4897]: I0228 13:33:22.161129 4897 scope.go:117] "RemoveContainer" containerID="8de903037cc6701524b7388d412861546c0483e069f2140070e136699935dd2a" Feb 28 13:33:22 crc kubenswrapper[4897]: I0228 13:33:22.205939 4897 scope.go:117] "RemoveContainer" containerID="39d3fb5f1525f30da30e035d3c81399d47a5446752db09ac90736fbfdfa8f127" Feb 28 13:33:22 crc kubenswrapper[4897]: I0228 13:33:22.227108 4897 scope.go:117] "RemoveContainer" containerID="e6ccbd5534fe56fb2e4c84470caf530fadd3a1055a8c107ba311e06ea79a97a3" Feb 28 13:33:22 crc kubenswrapper[4897]: E0228 13:33:22.227649 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6ccbd5534fe56fb2e4c84470caf530fadd3a1055a8c107ba311e06ea79a97a3\": container with ID starting with 
e6ccbd5534fe56fb2e4c84470caf530fadd3a1055a8c107ba311e06ea79a97a3 not found: ID does not exist" containerID="e6ccbd5534fe56fb2e4c84470caf530fadd3a1055a8c107ba311e06ea79a97a3" Feb 28 13:33:22 crc kubenswrapper[4897]: I0228 13:33:22.227698 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6ccbd5534fe56fb2e4c84470caf530fadd3a1055a8c107ba311e06ea79a97a3"} err="failed to get container status \"e6ccbd5534fe56fb2e4c84470caf530fadd3a1055a8c107ba311e06ea79a97a3\": rpc error: code = NotFound desc = could not find container \"e6ccbd5534fe56fb2e4c84470caf530fadd3a1055a8c107ba311e06ea79a97a3\": container with ID starting with e6ccbd5534fe56fb2e4c84470caf530fadd3a1055a8c107ba311e06ea79a97a3 not found: ID does not exist" Feb 28 13:33:22 crc kubenswrapper[4897]: I0228 13:33:22.227724 4897 scope.go:117] "RemoveContainer" containerID="8de903037cc6701524b7388d412861546c0483e069f2140070e136699935dd2a" Feb 28 13:33:22 crc kubenswrapper[4897]: E0228 13:33:22.228277 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8de903037cc6701524b7388d412861546c0483e069f2140070e136699935dd2a\": container with ID starting with 8de903037cc6701524b7388d412861546c0483e069f2140070e136699935dd2a not found: ID does not exist" containerID="8de903037cc6701524b7388d412861546c0483e069f2140070e136699935dd2a" Feb 28 13:33:22 crc kubenswrapper[4897]: I0228 13:33:22.228360 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8de903037cc6701524b7388d412861546c0483e069f2140070e136699935dd2a"} err="failed to get container status \"8de903037cc6701524b7388d412861546c0483e069f2140070e136699935dd2a\": rpc error: code = NotFound desc = could not find container \"8de903037cc6701524b7388d412861546c0483e069f2140070e136699935dd2a\": container with ID starting with 8de903037cc6701524b7388d412861546c0483e069f2140070e136699935dd2a not found: ID does not 
exist" Feb 28 13:33:22 crc kubenswrapper[4897]: I0228 13:33:22.228408 4897 scope.go:117] "RemoveContainer" containerID="39d3fb5f1525f30da30e035d3c81399d47a5446752db09ac90736fbfdfa8f127" Feb 28 13:33:22 crc kubenswrapper[4897]: E0228 13:33:22.228904 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39d3fb5f1525f30da30e035d3c81399d47a5446752db09ac90736fbfdfa8f127\": container with ID starting with 39d3fb5f1525f30da30e035d3c81399d47a5446752db09ac90736fbfdfa8f127 not found: ID does not exist" containerID="39d3fb5f1525f30da30e035d3c81399d47a5446752db09ac90736fbfdfa8f127" Feb 28 13:33:22 crc kubenswrapper[4897]: I0228 13:33:22.228934 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39d3fb5f1525f30da30e035d3c81399d47a5446752db09ac90736fbfdfa8f127"} err="failed to get container status \"39d3fb5f1525f30da30e035d3c81399d47a5446752db09ac90736fbfdfa8f127\": rpc error: code = NotFound desc = could not find container \"39d3fb5f1525f30da30e035d3c81399d47a5446752db09ac90736fbfdfa8f127\": container with ID starting with 39d3fb5f1525f30da30e035d3c81399d47a5446752db09ac90736fbfdfa8f127 not found: ID does not exist" Feb 28 13:33:23 crc kubenswrapper[4897]: I0228 13:33:23.154113 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-xqdlt" Feb 28 13:33:23 crc kubenswrapper[4897]: I0228 13:33:23.166564 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mct2w" event={"ID":"02f6fadd-b5a9-4d44-aba2-303ab05f15c6","Type":"ContainerStarted","Data":"e0a40b29fd3178d3cbc55e7abd77911ad5365feb2d5e0aa5f722858e9dd88131"} Feb 28 13:33:23 crc kubenswrapper[4897]: I0228 13:33:23.166609 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mct2w" 
event={"ID":"02f6fadd-b5a9-4d44-aba2-303ab05f15c6","Type":"ContainerStarted","Data":"69a4de0cfd322c27c62628fe4e13865a8ef37f03e0e4b56e69254167c665985d"} Feb 28 13:33:23 crc kubenswrapper[4897]: I0228 13:33:23.166622 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mct2w" event={"ID":"02f6fadd-b5a9-4d44-aba2-303ab05f15c6","Type":"ContainerStarted","Data":"81a1ba86ea2dfd320c3954c5fa16313242a7cb0f5aefdbc4e32ba3422060a88c"} Feb 28 13:33:23 crc kubenswrapper[4897]: I0228 13:33:23.310908 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8aa66059-af3c-4c52-b26d-967dee7b208a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8aa66059-af3c-4c52-b26d-967dee7b208a" (UID: "8aa66059-af3c-4c52-b26d-967dee7b208a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:33:23 crc kubenswrapper[4897]: I0228 13:33:23.324773 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8aa66059-af3c-4c52-b26d-967dee7b208a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 13:33:23 crc kubenswrapper[4897]: I0228 13:33:23.373599 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zc4hw"] Feb 28 13:33:23 crc kubenswrapper[4897]: I0228 13:33:23.378462 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zc4hw"] Feb 28 13:33:24 crc kubenswrapper[4897]: I0228 13:33:24.179854 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mct2w" event={"ID":"02f6fadd-b5a9-4d44-aba2-303ab05f15c6","Type":"ContainerStarted","Data":"b0e284c8bf8d9e20a5833ce505fdd6cfcce87c1663cd7a74e87195fe4ed6f791"} Feb 28 13:33:24 crc kubenswrapper[4897]: I0228 13:33:24.179908 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mct2w" 
event={"ID":"02f6fadd-b5a9-4d44-aba2-303ab05f15c6","Type":"ContainerStarted","Data":"8a1748bdf82fd06bc167cc3db549fc25d477b56921330622ebc2304c263b3829"} Feb 28 13:33:24 crc kubenswrapper[4897]: I0228 13:33:24.179921 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mct2w" event={"ID":"02f6fadd-b5a9-4d44-aba2-303ab05f15c6","Type":"ContainerStarted","Data":"e7e97204e2070840a7bc178cdba3b691fd1fe96b2bf6840197050663fc9e27ec"} Feb 28 13:33:24 crc kubenswrapper[4897]: I0228 13:33:24.180061 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-mct2w" Feb 28 13:33:24 crc kubenswrapper[4897]: I0228 13:33:24.218423 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-mct2w" podStartSLOduration=5.714178347 podStartE2EDuration="13.218398494s" podCreationTimestamp="2026-02-28 13:33:11 +0000 UTC" firstStartedPulling="2026-02-28 13:33:11.897281032 +0000 UTC m=+1006.139601689" lastFinishedPulling="2026-02-28 13:33:19.401501139 +0000 UTC m=+1013.643821836" observedRunningTime="2026-02-28 13:33:24.213822661 +0000 UTC m=+1018.456143328" watchObservedRunningTime="2026-02-28 13:33:24.218398494 +0000 UTC m=+1018.460719151" Feb 28 13:33:24 crc kubenswrapper[4897]: I0228 13:33:24.465396 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8aa66059-af3c-4c52-b26d-967dee7b208a" path="/var/lib/kubelet/pods/8aa66059-af3c-4c52-b26d-967dee7b208a/volumes" Feb 28 13:33:24 crc kubenswrapper[4897]: I0228 13:33:24.598687 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-29kqk"] Feb 28 13:33:24 crc kubenswrapper[4897]: E0228 13:33:24.599380 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8aa66059-af3c-4c52-b26d-967dee7b208a" containerName="extract-utilities" Feb 28 13:33:24 crc kubenswrapper[4897]: I0228 13:33:24.599403 4897 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8aa66059-af3c-4c52-b26d-967dee7b208a" containerName="extract-utilities" Feb 28 13:33:24 crc kubenswrapper[4897]: E0228 13:33:24.599425 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8aa66059-af3c-4c52-b26d-967dee7b208a" containerName="extract-content" Feb 28 13:33:24 crc kubenswrapper[4897]: I0228 13:33:24.599439 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="8aa66059-af3c-4c52-b26d-967dee7b208a" containerName="extract-content" Feb 28 13:33:24 crc kubenswrapper[4897]: E0228 13:33:24.599472 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8aa66059-af3c-4c52-b26d-967dee7b208a" containerName="registry-server" Feb 28 13:33:24 crc kubenswrapper[4897]: I0228 13:33:24.599487 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="8aa66059-af3c-4c52-b26d-967dee7b208a" containerName="registry-server" Feb 28 13:33:24 crc kubenswrapper[4897]: I0228 13:33:24.599739 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="8aa66059-af3c-4c52-b26d-967dee7b208a" containerName="registry-server" Feb 28 13:33:24 crc kubenswrapper[4897]: I0228 13:33:24.601477 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-29kqk" Feb 28 13:33:24 crc kubenswrapper[4897]: I0228 13:33:24.616077 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-29kqk"] Feb 28 13:33:24 crc kubenswrapper[4897]: I0228 13:33:24.743182 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbe86f80-68e4-4170-8801-cea07c362d5c-utilities\") pod \"community-operators-29kqk\" (UID: \"dbe86f80-68e4-4170-8801-cea07c362d5c\") " pod="openshift-marketplace/community-operators-29kqk" Feb 28 13:33:24 crc kubenswrapper[4897]: I0228 13:33:24.743254 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbe86f80-68e4-4170-8801-cea07c362d5c-catalog-content\") pod \"community-operators-29kqk\" (UID: \"dbe86f80-68e4-4170-8801-cea07c362d5c\") " pod="openshift-marketplace/community-operators-29kqk" Feb 28 13:33:24 crc kubenswrapper[4897]: I0228 13:33:24.743362 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wpnn\" (UniqueName: \"kubernetes.io/projected/dbe86f80-68e4-4170-8801-cea07c362d5c-kube-api-access-7wpnn\") pod \"community-operators-29kqk\" (UID: \"dbe86f80-68e4-4170-8801-cea07c362d5c\") " pod="openshift-marketplace/community-operators-29kqk" Feb 28 13:33:24 crc kubenswrapper[4897]: I0228 13:33:24.844483 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbe86f80-68e4-4170-8801-cea07c362d5c-utilities\") pod \"community-operators-29kqk\" (UID: \"dbe86f80-68e4-4170-8801-cea07c362d5c\") " pod="openshift-marketplace/community-operators-29kqk" Feb 28 13:33:24 crc kubenswrapper[4897]: I0228 13:33:24.844566 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbe86f80-68e4-4170-8801-cea07c362d5c-catalog-content\") pod \"community-operators-29kqk\" (UID: \"dbe86f80-68e4-4170-8801-cea07c362d5c\") " pod="openshift-marketplace/community-operators-29kqk" Feb 28 13:33:24 crc kubenswrapper[4897]: I0228 13:33:24.844640 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wpnn\" (UniqueName: \"kubernetes.io/projected/dbe86f80-68e4-4170-8801-cea07c362d5c-kube-api-access-7wpnn\") pod \"community-operators-29kqk\" (UID: \"dbe86f80-68e4-4170-8801-cea07c362d5c\") " pod="openshift-marketplace/community-operators-29kqk" Feb 28 13:33:24 crc kubenswrapper[4897]: I0228 13:33:24.845166 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbe86f80-68e4-4170-8801-cea07c362d5c-utilities\") pod \"community-operators-29kqk\" (UID: \"dbe86f80-68e4-4170-8801-cea07c362d5c\") " pod="openshift-marketplace/community-operators-29kqk" Feb 28 13:33:24 crc kubenswrapper[4897]: I0228 13:33:24.845202 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbe86f80-68e4-4170-8801-cea07c362d5c-catalog-content\") pod \"community-operators-29kqk\" (UID: \"dbe86f80-68e4-4170-8801-cea07c362d5c\") " pod="openshift-marketplace/community-operators-29kqk" Feb 28 13:33:24 crc kubenswrapper[4897]: I0228 13:33:24.874197 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wpnn\" (UniqueName: \"kubernetes.io/projected/dbe86f80-68e4-4170-8801-cea07c362d5c-kube-api-access-7wpnn\") pod \"community-operators-29kqk\" (UID: \"dbe86f80-68e4-4170-8801-cea07c362d5c\") " pod="openshift-marketplace/community-operators-29kqk" Feb 28 13:33:24 crc kubenswrapper[4897]: I0228 13:33:24.926638 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-29kqk" Feb 28 13:33:25 crc kubenswrapper[4897]: I0228 13:33:25.434291 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-29kqk"] Feb 28 13:33:25 crc kubenswrapper[4897]: W0228 13:33:25.435846 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbe86f80_68e4_4170_8801_cea07c362d5c.slice/crio-8f12ae6bf42d83a06397d395a482a5a883eb7ee12482efa42dde02011514402e WatchSource:0}: Error finding container 8f12ae6bf42d83a06397d395a482a5a883eb7ee12482efa42dde02011514402e: Status 404 returned error can't find the container with id 8f12ae6bf42d83a06397d395a482a5a883eb7ee12482efa42dde02011514402e Feb 28 13:33:26 crc kubenswrapper[4897]: I0228 13:33:26.221285 4897 generic.go:334] "Generic (PLEG): container finished" podID="dbe86f80-68e4-4170-8801-cea07c362d5c" containerID="62590016f7dad910e0b1afff46839a4fef9bbc12baa44ecd069d26cdda823404" exitCode=0 Feb 28 13:33:26 crc kubenswrapper[4897]: I0228 13:33:26.221353 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-29kqk" event={"ID":"dbe86f80-68e4-4170-8801-cea07c362d5c","Type":"ContainerDied","Data":"62590016f7dad910e0b1afff46839a4fef9bbc12baa44ecd069d26cdda823404"} Feb 28 13:33:26 crc kubenswrapper[4897]: I0228 13:33:26.222182 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-29kqk" event={"ID":"dbe86f80-68e4-4170-8801-cea07c362d5c","Type":"ContainerStarted","Data":"8f12ae6bf42d83a06397d395a482a5a883eb7ee12482efa42dde02011514402e"} Feb 28 13:33:26 crc kubenswrapper[4897]: I0228 13:33:26.628922 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-mct2w" Feb 28 13:33:26 crc kubenswrapper[4897]: I0228 13:33:26.669217 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="metallb-system/frr-k8s-mct2w" Feb 28 13:33:26 crc kubenswrapper[4897]: E0228 13:33:26.787582 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 28 13:33:26 crc kubenswrapper[4897]: E0228 13:33:26.787956 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7wpnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:
nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-29kqk_openshift-marketplace(dbe86f80-68e4-4170-8801-cea07c362d5c): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 13:33:26 crc kubenswrapper[4897]: E0228 13:33:26.789357 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:33:27 crc kubenswrapper[4897]: E0228 13:33:27.227281 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:33:29 crc kubenswrapper[4897]: I0228 13:33:29.806023 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-9tgxh"] Feb 28 13:33:29 crc kubenswrapper[4897]: I0228 13:33:29.808232 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-9tgxh" Feb 28 13:33:29 crc kubenswrapper[4897]: I0228 13:33:29.812880 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 28 13:33:29 crc kubenswrapper[4897]: I0228 13:33:29.814388 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-h8nx7" Feb 28 13:33:29 crc kubenswrapper[4897]: I0228 13:33:29.814845 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 28 13:33:29 crc kubenswrapper[4897]: I0228 13:33:29.826253 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9tgxh"] Feb 28 13:33:29 crc kubenswrapper[4897]: I0228 13:33:29.926430 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86mg7\" (UniqueName: \"kubernetes.io/projected/e5918346-7c71-4d39-985f-c8893e107670-kube-api-access-86mg7\") pod \"openstack-operator-index-9tgxh\" (UID: \"e5918346-7c71-4d39-985f-c8893e107670\") " pod="openstack-operators/openstack-operator-index-9tgxh" Feb 28 13:33:30 crc kubenswrapper[4897]: I0228 13:33:30.028540 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86mg7\" (UniqueName: \"kubernetes.io/projected/e5918346-7c71-4d39-985f-c8893e107670-kube-api-access-86mg7\") pod \"openstack-operator-index-9tgxh\" (UID: \"e5918346-7c71-4d39-985f-c8893e107670\") " pod="openstack-operators/openstack-operator-index-9tgxh" Feb 28 13:33:30 crc kubenswrapper[4897]: I0228 13:33:30.065116 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86mg7\" (UniqueName: \"kubernetes.io/projected/e5918346-7c71-4d39-985f-c8893e107670-kube-api-access-86mg7\") pod \"openstack-operator-index-9tgxh\" (UID: 
\"e5918346-7c71-4d39-985f-c8893e107670\") " pod="openstack-operators/openstack-operator-index-9tgxh" Feb 28 13:33:30 crc kubenswrapper[4897]: I0228 13:33:30.139706 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9tgxh" Feb 28 13:33:30 crc kubenswrapper[4897]: I0228 13:33:30.431949 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9tgxh"] Feb 28 13:33:31 crc kubenswrapper[4897]: I0228 13:33:31.263426 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9tgxh" event={"ID":"e5918346-7c71-4d39-985f-c8893e107670","Type":"ContainerStarted","Data":"5288835e4f49bdb9f544d52751a04aa574f152b95286ccd2df5318ebdb79eb8e"} Feb 28 13:33:31 crc kubenswrapper[4897]: I0228 13:33:31.678706 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-86ddb6bd46-jz56q" Feb 28 13:33:31 crc kubenswrapper[4897]: I0228 13:33:31.866981 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-m4dnz" Feb 28 13:33:33 crc kubenswrapper[4897]: I0228 13:33:33.370566 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 13:33:33 crc kubenswrapper[4897]: I0228 13:33:33.371877 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 13:33:33 crc kubenswrapper[4897]: I0228 13:33:33.372025 4897 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brq22" Feb 28 13:33:33 crc kubenswrapper[4897]: I0228 13:33:33.372778 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ba683f1199708260a29f4bdafd88105c75a046d1fe9faa93c033d9e42ddff022"} pod="openshift-machine-config-operator/machine-config-daemon-brq22" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 13:33:33 crc kubenswrapper[4897]: I0228 13:33:33.372933 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" containerID="cri-o://ba683f1199708260a29f4bdafd88105c75a046d1fe9faa93c033d9e42ddff022" gracePeriod=600 Feb 28 13:33:34 crc kubenswrapper[4897]: I0228 13:33:34.294163 4897 generic.go:334] "Generic (PLEG): container finished" podID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerID="ba683f1199708260a29f4bdafd88105c75a046d1fe9faa93c033d9e42ddff022" exitCode=0 Feb 28 13:33:34 crc kubenswrapper[4897]: I0228 13:33:34.294885 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerDied","Data":"ba683f1199708260a29f4bdafd88105c75a046d1fe9faa93c033d9e42ddff022"} Feb 28 13:33:34 crc kubenswrapper[4897]: I0228 13:33:34.295507 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerStarted","Data":"9c1430618bfc0c64d7fc6435ca448e45cbed910b3af28fa0f1da0886835a239f"} Feb 28 13:33:34 crc kubenswrapper[4897]: I0228 13:33:34.295538 4897 scope.go:117] "RemoveContainer" 
containerID="cfa26661db45aebf66711b46c418e18106a8f8b0c44a8fe4fe4cb2094fde5cf6" Feb 28 13:33:34 crc kubenswrapper[4897]: I0228 13:33:34.299637 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9tgxh" event={"ID":"e5918346-7c71-4d39-985f-c8893e107670","Type":"ContainerStarted","Data":"605b99a7a063d37e435acd386523414ebf6f90ed4d3fe0c005194c1e7721b15a"} Feb 28 13:33:34 crc kubenswrapper[4897]: I0228 13:33:34.348249 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-9tgxh" podStartSLOduration=2.682444525 podStartE2EDuration="5.34821773s" podCreationTimestamp="2026-02-28 13:33:29 +0000 UTC" firstStartedPulling="2026-02-28 13:33:30.437215977 +0000 UTC m=+1024.679536634" lastFinishedPulling="2026-02-28 13:33:33.102989182 +0000 UTC m=+1027.345309839" observedRunningTime="2026-02-28 13:33:34.341600627 +0000 UTC m=+1028.583921324" watchObservedRunningTime="2026-02-28 13:33:34.34821773 +0000 UTC m=+1028.590538427" Feb 28 13:33:40 crc kubenswrapper[4897]: I0228 13:33:40.140542 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-9tgxh" Feb 28 13:33:40 crc kubenswrapper[4897]: I0228 13:33:40.141285 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-9tgxh" Feb 28 13:33:40 crc kubenswrapper[4897]: I0228 13:33:40.185508 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-9tgxh" Feb 28 13:33:40 crc kubenswrapper[4897]: I0228 13:33:40.400954 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-9tgxh" Feb 28 13:33:41 crc kubenswrapper[4897]: I0228 13:33:41.642285 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-mct2w" Feb 28 
13:33:41 crc kubenswrapper[4897]: I0228 13:33:41.653868 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8"] Feb 28 13:33:41 crc kubenswrapper[4897]: I0228 13:33:41.657365 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8" Feb 28 13:33:41 crc kubenswrapper[4897]: I0228 13:33:41.664091 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-4899n" Feb 28 13:33:41 crc kubenswrapper[4897]: I0228 13:33:41.664179 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8"] Feb 28 13:33:41 crc kubenswrapper[4897]: I0228 13:33:41.741895 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fqn2\" (UniqueName: \"kubernetes.io/projected/18614093-3dcd-426c-8821-d04f854a475c-kube-api-access-4fqn2\") pod \"c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8\" (UID: \"18614093-3dcd-426c-8821-d04f854a475c\") " pod="openstack-operators/c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8" Feb 28 13:33:41 crc kubenswrapper[4897]: I0228 13:33:41.742073 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/18614093-3dcd-426c-8821-d04f854a475c-util\") pod \"c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8\" (UID: \"18614093-3dcd-426c-8821-d04f854a475c\") " pod="openstack-operators/c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8" Feb 28 13:33:41 crc kubenswrapper[4897]: I0228 13:33:41.742101 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/18614093-3dcd-426c-8821-d04f854a475c-bundle\") pod \"c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8\" (UID: \"18614093-3dcd-426c-8821-d04f854a475c\") " pod="openstack-operators/c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8" Feb 28 13:33:41 crc kubenswrapper[4897]: I0228 13:33:41.843857 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/18614093-3dcd-426c-8821-d04f854a475c-util\") pod \"c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8\" (UID: \"18614093-3dcd-426c-8821-d04f854a475c\") " pod="openstack-operators/c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8" Feb 28 13:33:41 crc kubenswrapper[4897]: I0228 13:33:41.843923 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/18614093-3dcd-426c-8821-d04f854a475c-bundle\") pod \"c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8\" (UID: \"18614093-3dcd-426c-8821-d04f854a475c\") " pod="openstack-operators/c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8" Feb 28 13:33:41 crc kubenswrapper[4897]: I0228 13:33:41.844004 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fqn2\" (UniqueName: \"kubernetes.io/projected/18614093-3dcd-426c-8821-d04f854a475c-kube-api-access-4fqn2\") pod \"c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8\" (UID: \"18614093-3dcd-426c-8821-d04f854a475c\") " pod="openstack-operators/c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8" Feb 28 13:33:41 crc kubenswrapper[4897]: I0228 13:33:41.845225 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/18614093-3dcd-426c-8821-d04f854a475c-bundle\") pod \"c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8\" (UID: 
\"18614093-3dcd-426c-8821-d04f854a475c\") " pod="openstack-operators/c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8" Feb 28 13:33:41 crc kubenswrapper[4897]: I0228 13:33:41.846625 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/18614093-3dcd-426c-8821-d04f854a475c-util\") pod \"c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8\" (UID: \"18614093-3dcd-426c-8821-d04f854a475c\") " pod="openstack-operators/c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8" Feb 28 13:33:41 crc kubenswrapper[4897]: I0228 13:33:41.874478 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fqn2\" (UniqueName: \"kubernetes.io/projected/18614093-3dcd-426c-8821-d04f854a475c-kube-api-access-4fqn2\") pod \"c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8\" (UID: \"18614093-3dcd-426c-8821-d04f854a475c\") " pod="openstack-operators/c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8" Feb 28 13:33:41 crc kubenswrapper[4897]: I0228 13:33:41.998836 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8" Feb 28 13:33:42 crc kubenswrapper[4897]: E0228 13:33:42.027600 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 28 13:33:42 crc kubenswrapper[4897]: E0228 13:33:42.027739 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7wpnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcM
ount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-29kqk_openshift-marketplace(dbe86f80-68e4-4170-8801-cea07c362d5c): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 13:33:42 crc kubenswrapper[4897]: E0228 13:33:42.029472 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:33:42 crc kubenswrapper[4897]: I0228 13:33:42.281806 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8"] Feb 28 13:33:42 crc kubenswrapper[4897]: W0228 13:33:42.290385 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18614093_3dcd_426c_8821_d04f854a475c.slice/crio-994b7522b5041fa14d5cbdcf9b42d399b4ef70a49b939054a7dcb8f9baa3fbb3 WatchSource:0}: Error finding container 994b7522b5041fa14d5cbdcf9b42d399b4ef70a49b939054a7dcb8f9baa3fbb3: Status 404 returned error can't find the container with id 
994b7522b5041fa14d5cbdcf9b42d399b4ef70a49b939054a7dcb8f9baa3fbb3 Feb 28 13:33:42 crc kubenswrapper[4897]: I0228 13:33:42.381218 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8" event={"ID":"18614093-3dcd-426c-8821-d04f854a475c","Type":"ContainerStarted","Data":"994b7522b5041fa14d5cbdcf9b42d399b4ef70a49b939054a7dcb8f9baa3fbb3"} Feb 28 13:33:43 crc kubenswrapper[4897]: I0228 13:33:43.391512 4897 generic.go:334] "Generic (PLEG): container finished" podID="18614093-3dcd-426c-8821-d04f854a475c" containerID="2559e452844f9a9e99c378ad2ffa1bfabd8a9f93ab4046f5a7bf9e7e7a3a2e13" exitCode=0 Feb 28 13:33:43 crc kubenswrapper[4897]: I0228 13:33:43.391734 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8" event={"ID":"18614093-3dcd-426c-8821-d04f854a475c","Type":"ContainerDied","Data":"2559e452844f9a9e99c378ad2ffa1bfabd8a9f93ab4046f5a7bf9e7e7a3a2e13"} Feb 28 13:33:44 crc kubenswrapper[4897]: I0228 13:33:44.404620 4897 generic.go:334] "Generic (PLEG): container finished" podID="18614093-3dcd-426c-8821-d04f854a475c" containerID="2d2355f7f04517a23469de64eccf09131686154437fa6286e8e7d34492a3ec96" exitCode=0 Feb 28 13:33:44 crc kubenswrapper[4897]: I0228 13:33:44.404683 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8" event={"ID":"18614093-3dcd-426c-8821-d04f854a475c","Type":"ContainerDied","Data":"2d2355f7f04517a23469de64eccf09131686154437fa6286e8e7d34492a3ec96"} Feb 28 13:33:45 crc kubenswrapper[4897]: I0228 13:33:45.421132 4897 generic.go:334] "Generic (PLEG): container finished" podID="18614093-3dcd-426c-8821-d04f854a475c" containerID="7081516772c1575a21f0c002f48732b584ed02b6f9c61198a9933c795f839e6d" exitCode=0 Feb 28 13:33:45 crc kubenswrapper[4897]: I0228 13:33:45.421250 4897 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8" event={"ID":"18614093-3dcd-426c-8821-d04f854a475c","Type":"ContainerDied","Data":"7081516772c1575a21f0c002f48732b584ed02b6f9c61198a9933c795f839e6d"} Feb 28 13:33:46 crc kubenswrapper[4897]: I0228 13:33:46.772389 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8" Feb 28 13:33:46 crc kubenswrapper[4897]: I0228 13:33:46.910947 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/18614093-3dcd-426c-8821-d04f854a475c-bundle\") pod \"18614093-3dcd-426c-8821-d04f854a475c\" (UID: \"18614093-3dcd-426c-8821-d04f854a475c\") " Feb 28 13:33:46 crc kubenswrapper[4897]: I0228 13:33:46.911024 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fqn2\" (UniqueName: \"kubernetes.io/projected/18614093-3dcd-426c-8821-d04f854a475c-kube-api-access-4fqn2\") pod \"18614093-3dcd-426c-8821-d04f854a475c\" (UID: \"18614093-3dcd-426c-8821-d04f854a475c\") " Feb 28 13:33:46 crc kubenswrapper[4897]: I0228 13:33:46.911111 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/18614093-3dcd-426c-8821-d04f854a475c-util\") pod \"18614093-3dcd-426c-8821-d04f854a475c\" (UID: \"18614093-3dcd-426c-8821-d04f854a475c\") " Feb 28 13:33:46 crc kubenswrapper[4897]: I0228 13:33:46.911729 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18614093-3dcd-426c-8821-d04f854a475c-bundle" (OuterVolumeSpecName: "bundle") pod "18614093-3dcd-426c-8821-d04f854a475c" (UID: "18614093-3dcd-426c-8821-d04f854a475c"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:33:46 crc kubenswrapper[4897]: I0228 13:33:46.919959 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18614093-3dcd-426c-8821-d04f854a475c-kube-api-access-4fqn2" (OuterVolumeSpecName: "kube-api-access-4fqn2") pod "18614093-3dcd-426c-8821-d04f854a475c" (UID: "18614093-3dcd-426c-8821-d04f854a475c"). InnerVolumeSpecName "kube-api-access-4fqn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:33:46 crc kubenswrapper[4897]: I0228 13:33:46.928633 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18614093-3dcd-426c-8821-d04f854a475c-util" (OuterVolumeSpecName: "util") pod "18614093-3dcd-426c-8821-d04f854a475c" (UID: "18614093-3dcd-426c-8821-d04f854a475c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:33:47 crc kubenswrapper[4897]: I0228 13:33:47.013343 4897 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/18614093-3dcd-426c-8821-d04f854a475c-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:33:47 crc kubenswrapper[4897]: I0228 13:33:47.013427 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fqn2\" (UniqueName: \"kubernetes.io/projected/18614093-3dcd-426c-8821-d04f854a475c-kube-api-access-4fqn2\") on node \"crc\" DevicePath \"\"" Feb 28 13:33:47 crc kubenswrapper[4897]: I0228 13:33:47.013449 4897 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/18614093-3dcd-426c-8821-d04f854a475c-util\") on node \"crc\" DevicePath \"\"" Feb 28 13:33:47 crc kubenswrapper[4897]: I0228 13:33:47.444674 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8" 
event={"ID":"18614093-3dcd-426c-8821-d04f854a475c","Type":"ContainerDied","Data":"994b7522b5041fa14d5cbdcf9b42d399b4ef70a49b939054a7dcb8f9baa3fbb3"} Feb 28 13:33:47 crc kubenswrapper[4897]: I0228 13:33:47.444771 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="994b7522b5041fa14d5cbdcf9b42d399b4ef70a49b939054a7dcb8f9baa3fbb3" Feb 28 13:33:47 crc kubenswrapper[4897]: I0228 13:33:47.445206 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8" Feb 28 13:33:51 crc kubenswrapper[4897]: I0228 13:33:51.358120 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-58b8f68975-4gtm4"] Feb 28 13:33:51 crc kubenswrapper[4897]: E0228 13:33:51.359079 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18614093-3dcd-426c-8821-d04f854a475c" containerName="pull" Feb 28 13:33:51 crc kubenswrapper[4897]: I0228 13:33:51.359100 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="18614093-3dcd-426c-8821-d04f854a475c" containerName="pull" Feb 28 13:33:51 crc kubenswrapper[4897]: E0228 13:33:51.359121 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18614093-3dcd-426c-8821-d04f854a475c" containerName="extract" Feb 28 13:33:51 crc kubenswrapper[4897]: I0228 13:33:51.359133 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="18614093-3dcd-426c-8821-d04f854a475c" containerName="extract" Feb 28 13:33:51 crc kubenswrapper[4897]: E0228 13:33:51.359159 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18614093-3dcd-426c-8821-d04f854a475c" containerName="util" Feb 28 13:33:51 crc kubenswrapper[4897]: I0228 13:33:51.359174 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="18614093-3dcd-426c-8821-d04f854a475c" containerName="util" Feb 28 13:33:51 crc kubenswrapper[4897]: I0228 13:33:51.359427 4897 
memory_manager.go:354] "RemoveStaleState removing state" podUID="18614093-3dcd-426c-8821-d04f854a475c" containerName="extract" Feb 28 13:33:51 crc kubenswrapper[4897]: I0228 13:33:51.360183 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-58b8f68975-4gtm4" Feb 28 13:33:51 crc kubenswrapper[4897]: I0228 13:33:51.362097 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-jphwc" Feb 28 13:33:51 crc kubenswrapper[4897]: I0228 13:33:51.394560 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-58b8f68975-4gtm4"] Feb 28 13:33:51 crc kubenswrapper[4897]: I0228 13:33:51.470530 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8k8z\" (UniqueName: \"kubernetes.io/projected/f3e65b5d-7974-4323-92f1-50f5dbc0fe11-kube-api-access-k8k8z\") pod \"openstack-operator-controller-init-58b8f68975-4gtm4\" (UID: \"f3e65b5d-7974-4323-92f1-50f5dbc0fe11\") " pod="openstack-operators/openstack-operator-controller-init-58b8f68975-4gtm4" Feb 28 13:33:51 crc kubenswrapper[4897]: I0228 13:33:51.571706 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8k8z\" (UniqueName: \"kubernetes.io/projected/f3e65b5d-7974-4323-92f1-50f5dbc0fe11-kube-api-access-k8k8z\") pod \"openstack-operator-controller-init-58b8f68975-4gtm4\" (UID: \"f3e65b5d-7974-4323-92f1-50f5dbc0fe11\") " pod="openstack-operators/openstack-operator-controller-init-58b8f68975-4gtm4" Feb 28 13:33:51 crc kubenswrapper[4897]: I0228 13:33:51.599516 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8k8z\" (UniqueName: \"kubernetes.io/projected/f3e65b5d-7974-4323-92f1-50f5dbc0fe11-kube-api-access-k8k8z\") pod 
\"openstack-operator-controller-init-58b8f68975-4gtm4\" (UID: \"f3e65b5d-7974-4323-92f1-50f5dbc0fe11\") " pod="openstack-operators/openstack-operator-controller-init-58b8f68975-4gtm4" Feb 28 13:33:51 crc kubenswrapper[4897]: I0228 13:33:51.681638 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-58b8f68975-4gtm4" Feb 28 13:33:52 crc kubenswrapper[4897]: I0228 13:33:52.146342 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-58b8f68975-4gtm4"] Feb 28 13:33:52 crc kubenswrapper[4897]: I0228 13:33:52.485392 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-58b8f68975-4gtm4" event={"ID":"f3e65b5d-7974-4323-92f1-50f5dbc0fe11","Type":"ContainerStarted","Data":"2b1dede57032dcab6736de07da73013b3d8a16d868445df2d3352c81ca29cb7a"} Feb 28 13:33:54 crc kubenswrapper[4897]: E0228 13:33:54.516222 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:33:56 crc kubenswrapper[4897]: I0228 13:33:56.519712 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-58b8f68975-4gtm4" event={"ID":"f3e65b5d-7974-4323-92f1-50f5dbc0fe11","Type":"ContainerStarted","Data":"8b7ce536f37adb882b9b36c1cd624a0592e4bd8916f942ef8fd5c9645b4ed2af"} Feb 28 13:33:56 crc kubenswrapper[4897]: I0228 13:33:56.520245 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-58b8f68975-4gtm4" Feb 28 13:33:56 crc kubenswrapper[4897]: I0228 13:33:56.568862 4897 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-58b8f68975-4gtm4" podStartSLOduration=1.59362425 podStartE2EDuration="5.56883231s" podCreationTimestamp="2026-02-28 13:33:51 +0000 UTC" firstStartedPulling="2026-02-28 13:33:52.145703224 +0000 UTC m=+1046.388023881" lastFinishedPulling="2026-02-28 13:33:56.120911284 +0000 UTC m=+1050.363231941" observedRunningTime="2026-02-28 13:33:56.557684714 +0000 UTC m=+1050.800005421" watchObservedRunningTime="2026-02-28 13:33:56.56883231 +0000 UTC m=+1050.811152997" Feb 28 13:34:00 crc kubenswrapper[4897]: I0228 13:34:00.167290 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538094-cz9s2"] Feb 28 13:34:00 crc kubenswrapper[4897]: I0228 13:34:00.168786 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538094-cz9s2" Feb 28 13:34:00 crc kubenswrapper[4897]: I0228 13:34:00.171766 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 13:34:00 crc kubenswrapper[4897]: I0228 13:34:00.172039 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 13:34:00 crc kubenswrapper[4897]: I0228 13:34:00.172146 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 13:34:00 crc kubenswrapper[4897]: I0228 13:34:00.176916 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538094-cz9s2"] Feb 28 13:34:00 crc kubenswrapper[4897]: I0228 13:34:00.297121 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvgm8\" (UniqueName: \"kubernetes.io/projected/db157ff1-ece6-4751-8cb5-89e894c98fae-kube-api-access-dvgm8\") pod \"auto-csr-approver-29538094-cz9s2\" (UID: 
\"db157ff1-ece6-4751-8cb5-89e894c98fae\") " pod="openshift-infra/auto-csr-approver-29538094-cz9s2" Feb 28 13:34:00 crc kubenswrapper[4897]: I0228 13:34:00.399376 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvgm8\" (UniqueName: \"kubernetes.io/projected/db157ff1-ece6-4751-8cb5-89e894c98fae-kube-api-access-dvgm8\") pod \"auto-csr-approver-29538094-cz9s2\" (UID: \"db157ff1-ece6-4751-8cb5-89e894c98fae\") " pod="openshift-infra/auto-csr-approver-29538094-cz9s2" Feb 28 13:34:00 crc kubenswrapper[4897]: I0228 13:34:00.433504 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvgm8\" (UniqueName: \"kubernetes.io/projected/db157ff1-ece6-4751-8cb5-89e894c98fae-kube-api-access-dvgm8\") pod \"auto-csr-approver-29538094-cz9s2\" (UID: \"db157ff1-ece6-4751-8cb5-89e894c98fae\") " pod="openshift-infra/auto-csr-approver-29538094-cz9s2" Feb 28 13:34:00 crc kubenswrapper[4897]: I0228 13:34:00.486466 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538094-cz9s2" Feb 28 13:34:01 crc kubenswrapper[4897]: W0228 13:34:01.007487 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb157ff1_ece6_4751_8cb5_89e894c98fae.slice/crio-e45768776d6ad66970e3600ca2b9fae633147180b7be42d4044d7c2a957cc74f WatchSource:0}: Error finding container e45768776d6ad66970e3600ca2b9fae633147180b7be42d4044d7c2a957cc74f: Status 404 returned error can't find the container with id e45768776d6ad66970e3600ca2b9fae633147180b7be42d4044d7c2a957cc74f Feb 28 13:34:01 crc kubenswrapper[4897]: I0228 13:34:01.014919 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538094-cz9s2"] Feb 28 13:34:01 crc kubenswrapper[4897]: I0228 13:34:01.569143 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538094-cz9s2" event={"ID":"db157ff1-ece6-4751-8cb5-89e894c98fae","Type":"ContainerStarted","Data":"e45768776d6ad66970e3600ca2b9fae633147180b7be42d4044d7c2a957cc74f"} Feb 28 13:34:01 crc kubenswrapper[4897]: I0228 13:34:01.684270 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-58b8f68975-4gtm4" Feb 28 13:34:01 crc kubenswrapper[4897]: E0228 13:34:01.905484 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 28 13:34:01 crc kubenswrapper[4897]: E0228 13:34:01.905801 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 13:34:01 crc kubenswrapper[4897]: container 
&Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 28 13:34:01 crc kubenswrapper[4897]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dvgm8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29538094-cz9s2_openshift-infra(db157ff1-ece6-4751-8cb5-89e894c98fae): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 28 13:34:01 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 13:34:01 crc kubenswrapper[4897]: E0228 13:34:01.907162 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29538094-cz9s2" 
podUID="db157ff1-ece6-4751-8cb5-89e894c98fae" Feb 28 13:34:02 crc kubenswrapper[4897]: E0228 13:34:02.598420 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538094-cz9s2" podUID="db157ff1-ece6-4751-8cb5-89e894c98fae" Feb 28 13:34:07 crc kubenswrapper[4897]: E0228 13:34:07.020662 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 28 13:34:07 crc kubenswrapper[4897]: E0228 13:34:07.021108 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7wpnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-29kqk_openshift-marketplace(dbe86f80-68e4-4170-8801-cea07c362d5c): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 13:34:07 crc kubenswrapper[4897]: E0228 13:34:07.022463 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:34:16 crc kubenswrapper[4897]: I0228 13:34:16.708230 4897 generic.go:334] "Generic (PLEG): container finished" podID="db157ff1-ece6-4751-8cb5-89e894c98fae" containerID="d1f12039299e7cb97da8945c9666147f1e47e6cd32c2dedad659145cbb5b669a" exitCode=0 Feb 28 13:34:16 crc kubenswrapper[4897]: I0228 13:34:16.708355 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538094-cz9s2" event={"ID":"db157ff1-ece6-4751-8cb5-89e894c98fae","Type":"ContainerDied","Data":"d1f12039299e7cb97da8945c9666147f1e47e6cd32c2dedad659145cbb5b669a"} Feb 28 13:34:18 crc kubenswrapper[4897]: I0228 13:34:18.077070 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538094-cz9s2" Feb 28 13:34:18 crc kubenswrapper[4897]: I0228 13:34:18.172548 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvgm8\" (UniqueName: \"kubernetes.io/projected/db157ff1-ece6-4751-8cb5-89e894c98fae-kube-api-access-dvgm8\") pod \"db157ff1-ece6-4751-8cb5-89e894c98fae\" (UID: \"db157ff1-ece6-4751-8cb5-89e894c98fae\") " Feb 28 13:34:18 crc kubenswrapper[4897]: I0228 13:34:18.178713 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db157ff1-ece6-4751-8cb5-89e894c98fae-kube-api-access-dvgm8" (OuterVolumeSpecName: "kube-api-access-dvgm8") pod "db157ff1-ece6-4751-8cb5-89e894c98fae" (UID: "db157ff1-ece6-4751-8cb5-89e894c98fae"). InnerVolumeSpecName "kube-api-access-dvgm8". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 13:34:18 crc kubenswrapper[4897]: I0228 13:34:18.274335 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvgm8\" (UniqueName: \"kubernetes.io/projected/db157ff1-ece6-4751-8cb5-89e894c98fae-kube-api-access-dvgm8\") on node \"crc\" DevicePath \"\""
Feb 28 13:34:18 crc kubenswrapper[4897]: I0228 13:34:18.722599 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538094-cz9s2" event={"ID":"db157ff1-ece6-4751-8cb5-89e894c98fae","Type":"ContainerDied","Data":"e45768776d6ad66970e3600ca2b9fae633147180b7be42d4044d7c2a957cc74f"}
Feb 28 13:34:18 crc kubenswrapper[4897]: I0228 13:34:18.722634 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e45768776d6ad66970e3600ca2b9fae633147180b7be42d4044d7c2a957cc74f"
Feb 28 13:34:18 crc kubenswrapper[4897]: I0228 13:34:18.722689 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538094-cz9s2"
Feb 28 13:34:19 crc kubenswrapper[4897]: I0228 13:34:19.138975 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538088-4c9j2"]
Feb 28 13:34:19 crc kubenswrapper[4897]: I0228 13:34:19.149531 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538088-4c9j2"]
Feb 28 13:34:19 crc kubenswrapper[4897]: E0228 13:34:19.458437 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c"
Feb 28 13:34:20 crc kubenswrapper[4897]: I0228 13:34:20.465506 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43e65966-94bd-4c6f-9e02-1d3f10577480" path="/var/lib/kubelet/pods/43e65966-94bd-4c6f-9e02-1d3f10577480/volumes"
Feb 28 13:34:21 crc kubenswrapper[4897]: I0228 13:34:21.941967 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6db6876945-96lzs"]
Feb 28 13:34:21 crc kubenswrapper[4897]: E0228 13:34:21.942693 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db157ff1-ece6-4751-8cb5-89e894c98fae" containerName="oc"
Feb 28 13:34:21 crc kubenswrapper[4897]: I0228 13:34:21.942719 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="db157ff1-ece6-4751-8cb5-89e894c98fae" containerName="oc"
Feb 28 13:34:21 crc kubenswrapper[4897]: I0228 13:34:21.942949 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="db157ff1-ece6-4751-8cb5-89e894c98fae" containerName="oc"
Feb 28 13:34:21 crc kubenswrapper[4897]: I0228 13:34:21.943657 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-96lzs"
Feb 28 13:34:21 crc kubenswrapper[4897]: I0228 13:34:21.946806 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-7qmtr"
Feb 28 13:34:21 crc kubenswrapper[4897]: I0228 13:34:21.949371 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-d8psr"]
Feb 28 13:34:21 crc kubenswrapper[4897]: I0228 13:34:21.950589 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-d8psr"
Feb 28 13:34:21 crc kubenswrapper[4897]: I0228 13:34:21.953089 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6db6876945-96lzs"]
Feb 28 13:34:21 crc kubenswrapper[4897]: I0228 13:34:21.953372 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-6rz55"
Feb 28 13:34:21 crc kubenswrapper[4897]: I0228 13:34:21.964539 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-d8psr"]
Feb 28 13:34:21 crc kubenswrapper[4897]: I0228 13:34:21.988139 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-5d87c9d997-hgfm4"]
Feb 28 13:34:21 crc kubenswrapper[4897]: I0228 13:34:21.989114 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-hgfm4"
Feb 28 13:34:21 crc kubenswrapper[4897]: I0228 13:34:21.997435 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-fr5s4"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.020758 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-64db6967f8-4tvzl"]
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.021580 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-4tvzl"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.026836 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-xtnml"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.031449 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-5d87c9d997-hgfm4"]
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.045035 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-cf99c678f-pjmt7"]
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.045898 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pjmt7"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.047639 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-7tsgz"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.053091 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-64db6967f8-4tvzl"]
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.062470 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-cf99c678f-pjmt7"]
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.079935 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-qjg9q"]
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.080734 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-qjg9q"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.085828 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-sczp2"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.091775 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-qjg9q"]
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.098691 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-545456dc4-cfsb9"]
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.099772 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-cfsb9"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.103364 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-gcb4t"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.103743 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-f7fcc58b9-bb7d9"]
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.104779 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-bb7d9"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.112874 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-545456dc4-cfsb9"]
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.115233 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.120337 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-tqfrd"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.126946 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7c789f89c6-fm9lk"]
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.127785 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-fm9lk"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.130542 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-2p4qv"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.131279 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mprw\" (UniqueName: \"kubernetes.io/projected/5863afa6-053e-4d6c-899e-c31dcc30dcf3-kube-api-access-6mprw\") pod \"glance-operator-controller-manager-64db6967f8-4tvzl\" (UID: \"5863afa6-053e-4d6c-899e-c31dcc30dcf3\") " pod="openstack-operators/glance-operator-controller-manager-64db6967f8-4tvzl"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.131326 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq5gf\" (UniqueName: \"kubernetes.io/projected/a78107ef-804f-476a-98f4-195f52927c3d-kube-api-access-hq5gf\") pod \"designate-operator-controller-manager-5d87c9d997-hgfm4\" (UID: \"a78107ef-804f-476a-98f4-195f52927c3d\") " pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-hgfm4"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.131364 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ld9w\" (UniqueName: \"kubernetes.io/projected/1d330dac-b70b-4af0-bfa0-1fba21022fb1-kube-api-access-2ld9w\") pod \"barbican-operator-controller-manager-6db6876945-96lzs\" (UID: \"1d330dac-b70b-4af0-bfa0-1fba21022fb1\") " pod="openstack-operators/barbican-operator-controller-manager-6db6876945-96lzs"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.131526 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24psz\" (UniqueName: \"kubernetes.io/projected/5ef2847d-3e11-419b-b34c-3f4cb5643af9-kube-api-access-24psz\") pod \"cinder-operator-controller-manager-55d77d7b5c-d8psr\" (UID: \"5ef2847d-3e11-419b-b34c-3f4cb5643af9\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-d8psr"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.158217 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-h65l6"]
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.159262 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-h65l6"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.163563 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-hh99v"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.191661 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-6xfvp"]
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.192544 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-6xfvp"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.197769 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-nkhh5"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.200835 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-f7fcc58b9-bb7d9"]
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.209235 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-h65l6"]
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.214820 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7c789f89c6-fm9lk"]
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.224716 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-6xfvp"]
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.227531 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54688575f-7lr7s"]
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.229443 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54688575f-7lr7s"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.237203 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-tnp9l"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.303141 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct2qg\" (UniqueName: \"kubernetes.io/projected/3bfb71f8-fd2c-4730-af54-601ec4daebaf-kube-api-access-ct2qg\") pod \"infra-operator-controller-manager-f7fcc58b9-bb7d9\" (UID: \"3bfb71f8-fd2c-4730-af54-601ec4daebaf\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-bb7d9"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.303227 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ld9w\" (UniqueName: \"kubernetes.io/projected/1d330dac-b70b-4af0-bfa0-1fba21022fb1-kube-api-access-2ld9w\") pod \"barbican-operator-controller-manager-6db6876945-96lzs\" (UID: \"1d330dac-b70b-4af0-bfa0-1fba21022fb1\") " pod="openstack-operators/barbican-operator-controller-manager-6db6876945-96lzs"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.303462 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mctcv\" (UniqueName: \"kubernetes.io/projected/30b14df1-8f3e-427c-b6d9-eb8aeb192213-kube-api-access-mctcv\") pod \"keystone-operator-controller-manager-7c789f89c6-fm9lk\" (UID: \"30b14df1-8f3e-427c-b6d9-eb8aeb192213\") " pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-fm9lk"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.303491 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24psz\" (UniqueName: \"kubernetes.io/projected/5ef2847d-3e11-419b-b34c-3f4cb5643af9-kube-api-access-24psz\") pod \"cinder-operator-controller-manager-55d77d7b5c-d8psr\" (UID: \"5ef2847d-3e11-419b-b34c-3f4cb5643af9\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-d8psr"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.303522 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3bfb71f8-fd2c-4730-af54-601ec4daebaf-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-bb7d9\" (UID: \"3bfb71f8-fd2c-4730-af54-601ec4daebaf\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-bb7d9"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.303549 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhfnh\" (UniqueName: \"kubernetes.io/projected/507c84e1-3826-47ad-93f4-c2d6d726f8b7-kube-api-access-lhfnh\") pod \"ironic-operator-controller-manager-545456dc4-cfsb9\" (UID: \"507c84e1-3826-47ad-93f4-c2d6d726f8b7\") " pod="openstack-operators/ironic-operator-controller-manager-545456dc4-cfsb9"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.303573 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcnvk\" (UniqueName: \"kubernetes.io/projected/e7498ffc-cb24-44e8-b0cb-4ada46db9e4c-kube-api-access-mcnvk\") pod \"heat-operator-controller-manager-cf99c678f-pjmt7\" (UID: \"e7498ffc-cb24-44e8-b0cb-4ada46db9e4c\") " pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pjmt7"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.303600 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22257\" (UniqueName: \"kubernetes.io/projected/cf8aae65-a739-4ab3-8208-ae8ac4ed0671-kube-api-access-22257\") pod \"horizon-operator-controller-manager-78bc7f9bd9-qjg9q\" (UID: \"cf8aae65-a739-4ab3-8208-ae8ac4ed0671\") " pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-qjg9q"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.303620 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mprw\" (UniqueName: \"kubernetes.io/projected/5863afa6-053e-4d6c-899e-c31dcc30dcf3-kube-api-access-6mprw\") pod \"glance-operator-controller-manager-64db6967f8-4tvzl\" (UID: \"5863afa6-053e-4d6c-899e-c31dcc30dcf3\") " pod="openstack-operators/glance-operator-controller-manager-64db6967f8-4tvzl"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.303638 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq5gf\" (UniqueName: \"kubernetes.io/projected/a78107ef-804f-476a-98f4-195f52927c3d-kube-api-access-hq5gf\") pod \"designate-operator-controller-manager-5d87c9d997-hgfm4\" (UID: \"a78107ef-804f-476a-98f4-195f52927c3d\") " pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-hgfm4"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.307172 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-74b6b5dc96-wqgwr"]
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.308128 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-wqgwr"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.312001 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-dbbv6"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.319916 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54688575f-7lr7s"]
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.327246 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-wrf59"]
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.328565 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-wrf59"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.338870 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-4sm65"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.339371 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-74b6b5dc96-wqgwr"]
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.350187 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq5gf\" (UniqueName: \"kubernetes.io/projected/a78107ef-804f-476a-98f4-195f52927c3d-kube-api-access-hq5gf\") pod \"designate-operator-controller-manager-5d87c9d997-hgfm4\" (UID: \"a78107ef-804f-476a-98f4-195f52927c3d\") " pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-hgfm4"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.353947 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mprw\" (UniqueName: \"kubernetes.io/projected/5863afa6-053e-4d6c-899e-c31dcc30dcf3-kube-api-access-6mprw\") pod \"glance-operator-controller-manager-64db6967f8-4tvzl\" (UID: \"5863afa6-053e-4d6c-899e-c31dcc30dcf3\") " pod="openstack-operators/glance-operator-controller-manager-64db6967f8-4tvzl"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.354056 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ld9w\" (UniqueName: \"kubernetes.io/projected/1d330dac-b70b-4af0-bfa0-1fba21022fb1-kube-api-access-2ld9w\") pod \"barbican-operator-controller-manager-6db6876945-96lzs\" (UID: \"1d330dac-b70b-4af0-bfa0-1fba21022fb1\") " pod="openstack-operators/barbican-operator-controller-manager-6db6876945-96lzs"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.377841 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24psz\" (UniqueName: \"kubernetes.io/projected/5ef2847d-3e11-419b-b34c-3f4cb5643af9-kube-api-access-24psz\") pod \"cinder-operator-controller-manager-55d77d7b5c-d8psr\" (UID: \"5ef2847d-3e11-419b-b34c-3f4cb5643af9\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-d8psr"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.393361 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-wrf59"]
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.396692 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5bszd"]
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.397596 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5bszd"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.401257 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-75684d597f-p6pbb"]
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.401748 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-24z5v"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.401951 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.402055 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-p6pbb"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.405864 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22257\" (UniqueName: \"kubernetes.io/projected/cf8aae65-a739-4ab3-8208-ae8ac4ed0671-kube-api-access-22257\") pod \"horizon-operator-controller-manager-78bc7f9bd9-qjg9q\" (UID: \"cf8aae65-a739-4ab3-8208-ae8ac4ed0671\") " pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-qjg9q"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.405906 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnvgh\" (UniqueName: \"kubernetes.io/projected/3664d59e-945d-4eb5-9443-296e206a1081-kube-api-access-qnvgh\") pod \"neutron-operator-controller-manager-54688575f-7lr7s\" (UID: \"3664d59e-945d-4eb5-9443-296e206a1081\") " pod="openstack-operators/neutron-operator-controller-manager-54688575f-7lr7s"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.405936 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rj4c\" (UniqueName: \"kubernetes.io/projected/30810ec7-8325-4bde-aa9d-ff905addb474-kube-api-access-5rj4c\") pod \"mariadb-operator-controller-manager-7b6bfb6475-6xfvp\" (UID: \"30810ec7-8325-4bde-aa9d-ff905addb474\") " pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-6xfvp"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.405955 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct2qg\" (UniqueName: \"kubernetes.io/projected/3bfb71f8-fd2c-4730-af54-601ec4daebaf-kube-api-access-ct2qg\") pod \"infra-operator-controller-manager-f7fcc58b9-bb7d9\" (UID: \"3bfb71f8-fd2c-4730-af54-601ec4daebaf\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-bb7d9"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.405984 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkhv9\" (UniqueName: \"kubernetes.io/projected/a9935d62-a205-4294-a124-313a8437c1ab-kube-api-access-pkhv9\") pod \"nova-operator-controller-manager-74b6b5dc96-wqgwr\" (UID: \"a9935d62-a205-4294-a124-313a8437c1ab\") " pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-wqgwr"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.406001 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mctcv\" (UniqueName: \"kubernetes.io/projected/30b14df1-8f3e-427c-b6d9-eb8aeb192213-kube-api-access-mctcv\") pod \"keystone-operator-controller-manager-7c789f89c6-fm9lk\" (UID: \"30b14df1-8f3e-427c-b6d9-eb8aeb192213\") " pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-fm9lk"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.406016 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8l2b\" (UniqueName: \"kubernetes.io/projected/c90ec355-3eb2-43e5-9a39-eed72bb46d1b-kube-api-access-j8l2b\") pod \"manila-operator-controller-manager-67d996989d-h65l6\" (UID: \"c90ec355-3eb2-43e5-9a39-eed72bb46d1b\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-h65l6"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.406044 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3bfb71f8-fd2c-4730-af54-601ec4daebaf-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-bb7d9\" (UID: \"3bfb71f8-fd2c-4730-af54-601ec4daebaf\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-bb7d9"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.406071 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhfnh\" (UniqueName: \"kubernetes.io/projected/507c84e1-3826-47ad-93f4-c2d6d726f8b7-kube-api-access-lhfnh\") pod \"ironic-operator-controller-manager-545456dc4-cfsb9\" (UID: \"507c84e1-3826-47ad-93f4-c2d6d726f8b7\") " pod="openstack-operators/ironic-operator-controller-manager-545456dc4-cfsb9"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.406093 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcnvk\" (UniqueName: \"kubernetes.io/projected/e7498ffc-cb24-44e8-b0cb-4ada46db9e4c-kube-api-access-mcnvk\") pod \"heat-operator-controller-manager-cf99c678f-pjmt7\" (UID: \"e7498ffc-cb24-44e8-b0cb-4ada46db9e4c\") " pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pjmt7"
Feb 28 13:34:22 crc kubenswrapper[4897]: E0228 13:34:22.406721 4897 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 28 13:34:22 crc kubenswrapper[4897]: E0228 13:34:22.406764 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3bfb71f8-fd2c-4730-af54-601ec4daebaf-cert podName:3bfb71f8-fd2c-4730-af54-601ec4daebaf nodeName:}" failed. No retries permitted until 2026-02-28 13:34:22.906750172 +0000 UTC m=+1077.149070829 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3bfb71f8-fd2c-4730-af54-601ec4daebaf-cert") pod "infra-operator-controller-manager-f7fcc58b9-bb7d9" (UID: "3bfb71f8-fd2c-4730-af54-601ec4daebaf") : secret "infra-operator-webhook-server-cert" not found
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.407123 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-95tjp"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.412366 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-648564c9fc-hkkvm"]
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.413250 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-hkkvm"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.423367 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5bszd"]
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.424731 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-gc7fl"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.427363 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-75684d597f-p6pbb"]
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.435027 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcnvk\" (UniqueName: \"kubernetes.io/projected/e7498ffc-cb24-44e8-b0cb-4ada46db9e4c-kube-api-access-mcnvk\") pod \"heat-operator-controller-manager-cf99c678f-pjmt7\" (UID: \"e7498ffc-cb24-44e8-b0cb-4ada46db9e4c\") " pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pjmt7"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.435927 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mctcv\" (UniqueName: \"kubernetes.io/projected/30b14df1-8f3e-427c-b6d9-eb8aeb192213-kube-api-access-mctcv\") pod \"keystone-operator-controller-manager-7c789f89c6-fm9lk\" (UID: \"30b14df1-8f3e-427c-b6d9-eb8aeb192213\") " pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-fm9lk"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.436934 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-648564c9fc-hkkvm"]
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.437562 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhfnh\" (UniqueName: \"kubernetes.io/projected/507c84e1-3826-47ad-93f4-c2d6d726f8b7-kube-api-access-lhfnh\") pod \"ironic-operator-controller-manager-545456dc4-cfsb9\" (UID: \"507c84e1-3826-47ad-93f4-c2d6d726f8b7\") " pod="openstack-operators/ironic-operator-controller-manager-545456dc4-cfsb9"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.442836 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-9b9ff9f4d-dnpdj"]
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.443568 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-cfsb9"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.444022 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-dnpdj"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.446266 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9b9ff9f4d-dnpdj"]
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.451180 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-hk2lz"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.454186 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22257\" (UniqueName: \"kubernetes.io/projected/cf8aae65-a739-4ab3-8208-ae8ac4ed0671-kube-api-access-22257\") pod \"horizon-operator-controller-manager-78bc7f9bd9-qjg9q\" (UID: \"cf8aae65-a739-4ab3-8208-ae8ac4ed0671\") " pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-qjg9q"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.454962 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct2qg\" (UniqueName: \"kubernetes.io/projected/3bfb71f8-fd2c-4730-af54-601ec4daebaf-kube-api-access-ct2qg\") pod \"infra-operator-controller-manager-f7fcc58b9-bb7d9\" (UID: \"3bfb71f8-fd2c-4730-af54-601ec4daebaf\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-bb7d9"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.479134 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-fm9lk"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.499767 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5fdb694969-6r8pc"]
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.500600 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-6r8pc"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.502183 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-xkb6t"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.507484 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rj4c\" (UniqueName: \"kubernetes.io/projected/30810ec7-8325-4bde-aa9d-ff905addb474-kube-api-access-5rj4c\") pod \"mariadb-operator-controller-manager-7b6bfb6475-6xfvp\" (UID: \"30810ec7-8325-4bde-aa9d-ff905addb474\") " pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-6xfvp"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.507545 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkhv9\" (UniqueName: \"kubernetes.io/projected/a9935d62-a205-4294-a124-313a8437c1ab-kube-api-access-pkhv9\") pod \"nova-operator-controller-manager-74b6b5dc96-wqgwr\" (UID: \"a9935d62-a205-4294-a124-313a8437c1ab\") " pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-wqgwr"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.507567 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gfvf\" (UniqueName: \"kubernetes.io/projected/3d635198-c21d-4d2e-9393-ad9b6cdf462f-kube-api-access-8gfvf\") pod \"ovn-operator-controller-manager-75684d597f-p6pbb\" (UID: \"3d635198-c21d-4d2e-9393-ad9b6cdf462f\") " pod="openstack-operators/ovn-operator-controller-manager-75684d597f-p6pbb"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.507586 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8l2b\" (UniqueName: \"kubernetes.io/projected/c90ec355-3eb2-43e5-9a39-eed72bb46d1b-kube-api-access-j8l2b\") pod \"manila-operator-controller-manager-67d996989d-h65l6\" (UID: \"c90ec355-3eb2-43e5-9a39-eed72bb46d1b\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-h65l6"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.507657 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fe6be473-8403-4c9d-abf6-a7a0251326f9-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c5bszd\" (UID: \"fe6be473-8403-4c9d-abf6-a7a0251326f9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5bszd"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.507678 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scths\" (UniqueName: \"kubernetes.io/projected/b237a99b-2fe2-4804-880b-03494df684d2-kube-api-access-scths\") pod \"octavia-operator-controller-manager-5d86c7ddb7-wrf59\" (UID: \"b237a99b-2fe2-4804-880b-03494df684d2\") " pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-wrf59"
Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 
13:34:22.507750 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz85k\" (UniqueName: \"kubernetes.io/projected/fe6be473-8403-4c9d-abf6-a7a0251326f9-kube-api-access-hz85k\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c5bszd\" (UID: \"fe6be473-8403-4c9d-abf6-a7a0251326f9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5bszd" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.507770 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjh2m\" (UniqueName: \"kubernetes.io/projected/f5e3f361-0ca8-4a8f-8625-8ea90c292ac2-kube-api-access-mjh2m\") pod \"telemetry-operator-controller-manager-5fdb694969-6r8pc\" (UID: \"f5e3f361-0ca8-4a8f-8625-8ea90c292ac2\") " pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-6r8pc" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.507791 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnksh\" (UniqueName: \"kubernetes.io/projected/8c8044b8-c803-4b5b-916f-34c0c03ab619-kube-api-access-xnksh\") pod \"swift-operator-controller-manager-9b9ff9f4d-dnpdj\" (UID: \"8c8044b8-c803-4b5b-916f-34c0c03ab619\") " pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-dnpdj" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.507811 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj9pb\" (UniqueName: \"kubernetes.io/projected/c434bb35-55df-45b5-9eeb-ab9913f3fd5e-kube-api-access-gj9pb\") pod \"placement-operator-controller-manager-648564c9fc-hkkvm\" (UID: \"c434bb35-55df-45b5-9eeb-ab9913f3fd5e\") " pod="openstack-operators/placement-operator-controller-manager-648564c9fc-hkkvm" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.507834 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qnvgh\" (UniqueName: \"kubernetes.io/projected/3664d59e-945d-4eb5-9443-296e206a1081-kube-api-access-qnvgh\") pod \"neutron-operator-controller-manager-54688575f-7lr7s\" (UID: \"3664d59e-945d-4eb5-9443-296e206a1081\") " pod="openstack-operators/neutron-operator-controller-manager-54688575f-7lr7s" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.512613 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5fdb694969-6r8pc"] Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.534295 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnvgh\" (UniqueName: \"kubernetes.io/projected/3664d59e-945d-4eb5-9443-296e206a1081-kube-api-access-qnvgh\") pod \"neutron-operator-controller-manager-54688575f-7lr7s\" (UID: \"3664d59e-945d-4eb5-9443-296e206a1081\") " pod="openstack-operators/neutron-operator-controller-manager-54688575f-7lr7s" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.538592 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkhv9\" (UniqueName: \"kubernetes.io/projected/a9935d62-a205-4294-a124-313a8437c1ab-kube-api-access-pkhv9\") pod \"nova-operator-controller-manager-74b6b5dc96-wqgwr\" (UID: \"a9935d62-a205-4294-a124-313a8437c1ab\") " pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-wqgwr" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.541726 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-55b5ff4dbb-v7qbm"] Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.546087 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8l2b\" (UniqueName: \"kubernetes.io/projected/c90ec355-3eb2-43e5-9a39-eed72bb46d1b-kube-api-access-j8l2b\") pod \"manila-operator-controller-manager-67d996989d-h65l6\" (UID: 
\"c90ec355-3eb2-43e5-9a39-eed72bb46d1b\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-h65l6" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.546155 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-v7qbm" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.548409 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rj4c\" (UniqueName: \"kubernetes.io/projected/30810ec7-8325-4bde-aa9d-ff905addb474-kube-api-access-5rj4c\") pod \"mariadb-operator-controller-manager-7b6bfb6475-6xfvp\" (UID: \"30810ec7-8325-4bde-aa9d-ff905addb474\") " pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-6xfvp" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.556207 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-2l2h4" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.556424 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-55b5ff4dbb-v7qbm"] Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.572116 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-96lzs" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.583691 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-d8psr" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.584171 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54688575f-7lr7s" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.603848 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-69dbd6f547-4ng5q"] Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.604703 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-69dbd6f547-4ng5q" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.605123 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-69dbd6f547-4ng5q"] Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.606032 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-zsz9c" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.610192 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt7w2\" (UniqueName: \"kubernetes.io/projected/37408ab3-7514-42a0-92e8-6c2a2710b9f0-kube-api-access-jt7w2\") pod \"test-operator-controller-manager-55b5ff4dbb-v7qbm\" (UID: \"37408ab3-7514-42a0-92e8-6c2a2710b9f0\") " pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-v7qbm" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.610236 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62mlq\" (UniqueName: \"kubernetes.io/projected/c25839b5-c34e-4865-a5ad-4e10355f1953-kube-api-access-62mlq\") pod \"watcher-operator-controller-manager-69dbd6f547-4ng5q\" (UID: \"c25839b5-c34e-4865-a5ad-4e10355f1953\") " pod="openstack-operators/watcher-operator-controller-manager-69dbd6f547-4ng5q" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.610266 4897 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fe6be473-8403-4c9d-abf6-a7a0251326f9-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c5bszd\" (UID: \"fe6be473-8403-4c9d-abf6-a7a0251326f9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5bszd" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.610287 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scths\" (UniqueName: \"kubernetes.io/projected/b237a99b-2fe2-4804-880b-03494df684d2-kube-api-access-scths\") pod \"octavia-operator-controller-manager-5d86c7ddb7-wrf59\" (UID: \"b237a99b-2fe2-4804-880b-03494df684d2\") " pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-wrf59" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.610384 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz85k\" (UniqueName: \"kubernetes.io/projected/fe6be473-8403-4c9d-abf6-a7a0251326f9-kube-api-access-hz85k\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c5bszd\" (UID: \"fe6be473-8403-4c9d-abf6-a7a0251326f9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5bszd" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.610405 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjh2m\" (UniqueName: \"kubernetes.io/projected/f5e3f361-0ca8-4a8f-8625-8ea90c292ac2-kube-api-access-mjh2m\") pod \"telemetry-operator-controller-manager-5fdb694969-6r8pc\" (UID: \"f5e3f361-0ca8-4a8f-8625-8ea90c292ac2\") " pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-6r8pc" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.610426 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnksh\" (UniqueName: 
\"kubernetes.io/projected/8c8044b8-c803-4b5b-916f-34c0c03ab619-kube-api-access-xnksh\") pod \"swift-operator-controller-manager-9b9ff9f4d-dnpdj\" (UID: \"8c8044b8-c803-4b5b-916f-34c0c03ab619\") " pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-dnpdj" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.610446 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj9pb\" (UniqueName: \"kubernetes.io/projected/c434bb35-55df-45b5-9eeb-ab9913f3fd5e-kube-api-access-gj9pb\") pod \"placement-operator-controller-manager-648564c9fc-hkkvm\" (UID: \"c434bb35-55df-45b5-9eeb-ab9913f3fd5e\") " pod="openstack-operators/placement-operator-controller-manager-648564c9fc-hkkvm" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.610507 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gfvf\" (UniqueName: \"kubernetes.io/projected/3d635198-c21d-4d2e-9393-ad9b6cdf462f-kube-api-access-8gfvf\") pod \"ovn-operator-controller-manager-75684d597f-p6pbb\" (UID: \"3d635198-c21d-4d2e-9393-ad9b6cdf462f\") " pod="openstack-operators/ovn-operator-controller-manager-75684d597f-p6pbb" Feb 28 13:34:22 crc kubenswrapper[4897]: E0228 13:34:22.611487 4897 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 13:34:22 crc kubenswrapper[4897]: E0228 13:34:22.611541 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe6be473-8403-4c9d-abf6-a7a0251326f9-cert podName:fe6be473-8403-4c9d-abf6-a7a0251326f9 nodeName:}" failed. No retries permitted until 2026-02-28 13:34:23.111516088 +0000 UTC m=+1077.353836745 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fe6be473-8403-4c9d-abf6-a7a0251326f9-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c5bszd" (UID: "fe6be473-8403-4c9d-abf6-a7a0251326f9") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.639124 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-hgfm4" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.645063 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-wqgwr" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.645642 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gfvf\" (UniqueName: \"kubernetes.io/projected/3d635198-c21d-4d2e-9393-ad9b6cdf462f-kube-api-access-8gfvf\") pod \"ovn-operator-controller-manager-75684d597f-p6pbb\" (UID: \"3d635198-c21d-4d2e-9393-ad9b6cdf462f\") " pod="openstack-operators/ovn-operator-controller-manager-75684d597f-p6pbb" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.659172 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-4tvzl" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.673449 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pjmt7" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.669583 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scths\" (UniqueName: \"kubernetes.io/projected/b237a99b-2fe2-4804-880b-03494df684d2-kube-api-access-scths\") pod \"octavia-operator-controller-manager-5d86c7ddb7-wrf59\" (UID: \"b237a99b-2fe2-4804-880b-03494df684d2\") " pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-wrf59" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.678125 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz85k\" (UniqueName: \"kubernetes.io/projected/fe6be473-8403-4c9d-abf6-a7a0251326f9-kube-api-access-hz85k\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c5bszd\" (UID: \"fe6be473-8403-4c9d-abf6-a7a0251326f9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5bszd" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.679364 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjh2m\" (UniqueName: \"kubernetes.io/projected/f5e3f361-0ca8-4a8f-8625-8ea90c292ac2-kube-api-access-mjh2m\") pod \"telemetry-operator-controller-manager-5fdb694969-6r8pc\" (UID: \"f5e3f361-0ca8-4a8f-8625-8ea90c292ac2\") " pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-6r8pc" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.683906 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj9pb\" (UniqueName: \"kubernetes.io/projected/c434bb35-55df-45b5-9eeb-ab9913f3fd5e-kube-api-access-gj9pb\") pod \"placement-operator-controller-manager-648564c9fc-hkkvm\" (UID: \"c434bb35-55df-45b5-9eeb-ab9913f3fd5e\") " pod="openstack-operators/placement-operator-controller-manager-648564c9fc-hkkvm" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 
13:34:22.693278 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnksh\" (UniqueName: \"kubernetes.io/projected/8c8044b8-c803-4b5b-916f-34c0c03ab619-kube-api-access-xnksh\") pod \"swift-operator-controller-manager-9b9ff9f4d-dnpdj\" (UID: \"8c8044b8-c803-4b5b-916f-34c0c03ab619\") " pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-dnpdj" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.703937 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-6r8pc" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.705104 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6d8778d855-4x57f"] Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.706763 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6d8778d855-4x57f" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.712820 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-btbnz" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.713575 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.713830 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.724408 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6d8778d855-4x57f"] Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.732349 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-qjg9q" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.741381 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt7w2\" (UniqueName: \"kubernetes.io/projected/37408ab3-7514-42a0-92e8-6c2a2710b9f0-kube-api-access-jt7w2\") pod \"test-operator-controller-manager-55b5ff4dbb-v7qbm\" (UID: \"37408ab3-7514-42a0-92e8-6c2a2710b9f0\") " pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-v7qbm" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.741443 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62mlq\" (UniqueName: \"kubernetes.io/projected/c25839b5-c34e-4865-a5ad-4e10355f1953-kube-api-access-62mlq\") pod \"watcher-operator-controller-manager-69dbd6f547-4ng5q\" (UID: \"c25839b5-c34e-4865-a5ad-4e10355f1953\") " pod="openstack-operators/watcher-operator-controller-manager-69dbd6f547-4ng5q" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.770284 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62mlq\" (UniqueName: \"kubernetes.io/projected/c25839b5-c34e-4865-a5ad-4e10355f1953-kube-api-access-62mlq\") pod \"watcher-operator-controller-manager-69dbd6f547-4ng5q\" (UID: \"c25839b5-c34e-4865-a5ad-4e10355f1953\") " pod="openstack-operators/watcher-operator-controller-manager-69dbd6f547-4ng5q" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.770376 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w6zhg"] Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.771788 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w6zhg" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.772656 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt7w2\" (UniqueName: \"kubernetes.io/projected/37408ab3-7514-42a0-92e8-6c2a2710b9f0-kube-api-access-jt7w2\") pod \"test-operator-controller-manager-55b5ff4dbb-v7qbm\" (UID: \"37408ab3-7514-42a0-92e8-6c2a2710b9f0\") " pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-v7qbm" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.777732 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-tj6jz" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.780012 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w6zhg"] Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.795701 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-h65l6" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.810980 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-6xfvp" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.843027 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6532860c-c344-4a74-9189-4382f4865b58-metrics-certs\") pod \"openstack-operator-controller-manager-6d8778d855-4x57f\" (UID: \"6532860c-c344-4a74-9189-4382f4865b58\") " pod="openstack-operators/openstack-operator-controller-manager-6d8778d855-4x57f" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.843074 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg5f8\" (UniqueName: \"kubernetes.io/projected/216a4a66-0783-4b6c-9884-370bd3a001a4-kube-api-access-rg5f8\") pod \"rabbitmq-cluster-operator-manager-668c99d594-w6zhg\" (UID: \"216a4a66-0783-4b6c-9884-370bd3a001a4\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w6zhg" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.843156 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6532860c-c344-4a74-9189-4382f4865b58-webhook-certs\") pod \"openstack-operator-controller-manager-6d8778d855-4x57f\" (UID: \"6532860c-c344-4a74-9189-4382f4865b58\") " pod="openstack-operators/openstack-operator-controller-manager-6d8778d855-4x57f" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.843190 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cglv7\" (UniqueName: \"kubernetes.io/projected/6532860c-c344-4a74-9189-4382f4865b58-kube-api-access-cglv7\") pod \"openstack-operator-controller-manager-6d8778d855-4x57f\" (UID: \"6532860c-c344-4a74-9189-4382f4865b58\") " pod="openstack-operators/openstack-operator-controller-manager-6d8778d855-4x57f" Feb 
28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.849916 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-wrf59" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.889529 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-p6pbb" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.904020 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-hkkvm" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.944832 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3bfb71f8-fd2c-4730-af54-601ec4daebaf-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-bb7d9\" (UID: \"3bfb71f8-fd2c-4730-af54-601ec4daebaf\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-bb7d9" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.944887 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6532860c-c344-4a74-9189-4382f4865b58-webhook-certs\") pod \"openstack-operator-controller-manager-6d8778d855-4x57f\" (UID: \"6532860c-c344-4a74-9189-4382f4865b58\") " pod="openstack-operators/openstack-operator-controller-manager-6d8778d855-4x57f" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.944935 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cglv7\" (UniqueName: \"kubernetes.io/projected/6532860c-c344-4a74-9189-4382f4865b58-kube-api-access-cglv7\") pod \"openstack-operator-controller-manager-6d8778d855-4x57f\" (UID: \"6532860c-c344-4a74-9189-4382f4865b58\") " pod="openstack-operators/openstack-operator-controller-manager-6d8778d855-4x57f" Feb 28 13:34:22 crc 
kubenswrapper[4897]: I0228 13:34:22.944967 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6532860c-c344-4a74-9189-4382f4865b58-metrics-certs\") pod \"openstack-operator-controller-manager-6d8778d855-4x57f\" (UID: \"6532860c-c344-4a74-9189-4382f4865b58\") " pod="openstack-operators/openstack-operator-controller-manager-6d8778d855-4x57f" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.945000 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg5f8\" (UniqueName: \"kubernetes.io/projected/216a4a66-0783-4b6c-9884-370bd3a001a4-kube-api-access-rg5f8\") pod \"rabbitmq-cluster-operator-manager-668c99d594-w6zhg\" (UID: \"216a4a66-0783-4b6c-9884-370bd3a001a4\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w6zhg" Feb 28 13:34:22 crc kubenswrapper[4897]: E0228 13:34:22.945057 4897 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 28 13:34:22 crc kubenswrapper[4897]: E0228 13:34:22.945127 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3bfb71f8-fd2c-4730-af54-601ec4daebaf-cert podName:3bfb71f8-fd2c-4730-af54-601ec4daebaf nodeName:}" failed. No retries permitted until 2026-02-28 13:34:23.945108756 +0000 UTC m=+1078.187429413 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3bfb71f8-fd2c-4730-af54-601ec4daebaf-cert") pod "infra-operator-controller-manager-f7fcc58b9-bb7d9" (UID: "3bfb71f8-fd2c-4730-af54-601ec4daebaf") : secret "infra-operator-webhook-server-cert" not found Feb 28 13:34:22 crc kubenswrapper[4897]: E0228 13:34:22.945605 4897 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 28 13:34:22 crc kubenswrapper[4897]: E0228 13:34:22.945632 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6532860c-c344-4a74-9189-4382f4865b58-webhook-certs podName:6532860c-c344-4a74-9189-4382f4865b58 nodeName:}" failed. No retries permitted until 2026-02-28 13:34:23.445624502 +0000 UTC m=+1077.687945159 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6532860c-c344-4a74-9189-4382f4865b58-webhook-certs") pod "openstack-operator-controller-manager-6d8778d855-4x57f" (UID: "6532860c-c344-4a74-9189-4382f4865b58") : secret "webhook-server-cert" not found Feb 28 13:34:22 crc kubenswrapper[4897]: E0228 13:34:22.945667 4897 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 28 13:34:22 crc kubenswrapper[4897]: E0228 13:34:22.945686 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6532860c-c344-4a74-9189-4382f4865b58-metrics-certs podName:6532860c-c344-4a74-9189-4382f4865b58 nodeName:}" failed. No retries permitted until 2026-02-28 13:34:23.445680273 +0000 UTC m=+1077.688000930 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6532860c-c344-4a74-9189-4382f4865b58-metrics-certs") pod "openstack-operator-controller-manager-6d8778d855-4x57f" (UID: "6532860c-c344-4a74-9189-4382f4865b58") : secret "metrics-server-cert" not found Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.965947 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-dnpdj" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.968178 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cglv7\" (UniqueName: \"kubernetes.io/projected/6532860c-c344-4a74-9189-4382f4865b58-kube-api-access-cglv7\") pod \"openstack-operator-controller-manager-6d8778d855-4x57f\" (UID: \"6532860c-c344-4a74-9189-4382f4865b58\") " pod="openstack-operators/openstack-operator-controller-manager-6d8778d855-4x57f" Feb 28 13:34:22 crc kubenswrapper[4897]: I0228 13:34:22.980286 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg5f8\" (UniqueName: \"kubernetes.io/projected/216a4a66-0783-4b6c-9884-370bd3a001a4-kube-api-access-rg5f8\") pod \"rabbitmq-cluster-operator-manager-668c99d594-w6zhg\" (UID: \"216a4a66-0783-4b6c-9884-370bd3a001a4\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w6zhg" Feb 28 13:34:23 crc kubenswrapper[4897]: I0228 13:34:23.020148 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-545456dc4-cfsb9"] Feb 28 13:34:23 crc kubenswrapper[4897]: W0228 13:34:23.021696 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod507c84e1_3826_47ad_93f4_c2d6d726f8b7.slice/crio-9ba6a16986fcc48543567f0ae40af0bbbce0aea0bdb1d95d4627dd1455e524ad WatchSource:0}: Error finding container 
9ba6a16986fcc48543567f0ae40af0bbbce0aea0bdb1d95d4627dd1455e524ad: Status 404 returned error can't find the container with id 9ba6a16986fcc48543567f0ae40af0bbbce0aea0bdb1d95d4627dd1455e524ad Feb 28 13:34:23 crc kubenswrapper[4897]: I0228 13:34:23.022021 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-v7qbm" Feb 28 13:34:23 crc kubenswrapper[4897]: I0228 13:34:23.023585 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7c789f89c6-fm9lk"] Feb 28 13:34:23 crc kubenswrapper[4897]: I0228 13:34:23.042759 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-69dbd6f547-4ng5q" Feb 28 13:34:23 crc kubenswrapper[4897]: W0228 13:34:23.044193 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30b14df1_8f3e_427c_b6d9_eb8aeb192213.slice/crio-bf92629195809d3419b055c7dd53579045d242d9e07913319715b4ff381f55cc WatchSource:0}: Error finding container bf92629195809d3419b055c7dd53579045d242d9e07913319715b4ff381f55cc: Status 404 returned error can't find the container with id bf92629195809d3419b055c7dd53579045d242d9e07913319715b4ff381f55cc Feb 28 13:34:23 crc kubenswrapper[4897]: I0228 13:34:23.103568 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w6zhg" Feb 28 13:34:23 crc kubenswrapper[4897]: I0228 13:34:23.148164 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fe6be473-8403-4c9d-abf6-a7a0251326f9-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c5bszd\" (UID: \"fe6be473-8403-4c9d-abf6-a7a0251326f9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5bszd" Feb 28 13:34:23 crc kubenswrapper[4897]: E0228 13:34:23.148355 4897 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 13:34:23 crc kubenswrapper[4897]: E0228 13:34:23.148400 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe6be473-8403-4c9d-abf6-a7a0251326f9-cert podName:fe6be473-8403-4c9d-abf6-a7a0251326f9 nodeName:}" failed. No retries permitted until 2026-02-28 13:34:24.148387019 +0000 UTC m=+1078.390707676 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fe6be473-8403-4c9d-abf6-a7a0251326f9-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c5bszd" (UID: "fe6be473-8403-4c9d-abf6-a7a0251326f9") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 13:34:23 crc kubenswrapper[4897]: I0228 13:34:23.216708 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6db6876945-96lzs"] Feb 28 13:34:23 crc kubenswrapper[4897]: I0228 13:34:23.391786 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-d8psr"] Feb 28 13:34:23 crc kubenswrapper[4897]: I0228 13:34:23.456182 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6532860c-c344-4a74-9189-4382f4865b58-webhook-certs\") pod \"openstack-operator-controller-manager-6d8778d855-4x57f\" (UID: \"6532860c-c344-4a74-9189-4382f4865b58\") " pod="openstack-operators/openstack-operator-controller-manager-6d8778d855-4x57f" Feb 28 13:34:23 crc kubenswrapper[4897]: I0228 13:34:23.456258 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6532860c-c344-4a74-9189-4382f4865b58-metrics-certs\") pod \"openstack-operator-controller-manager-6d8778d855-4x57f\" (UID: \"6532860c-c344-4a74-9189-4382f4865b58\") " pod="openstack-operators/openstack-operator-controller-manager-6d8778d855-4x57f" Feb 28 13:34:23 crc kubenswrapper[4897]: E0228 13:34:23.456624 4897 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 28 13:34:23 crc kubenswrapper[4897]: E0228 13:34:23.456683 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6532860c-c344-4a74-9189-4382f4865b58-webhook-certs 
podName:6532860c-c344-4a74-9189-4382f4865b58 nodeName:}" failed. No retries permitted until 2026-02-28 13:34:24.456667696 +0000 UTC m=+1078.698988353 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6532860c-c344-4a74-9189-4382f4865b58-webhook-certs") pod "openstack-operator-controller-manager-6d8778d855-4x57f" (UID: "6532860c-c344-4a74-9189-4382f4865b58") : secret "webhook-server-cert" not found Feb 28 13:34:23 crc kubenswrapper[4897]: E0228 13:34:23.456789 4897 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 28 13:34:23 crc kubenswrapper[4897]: E0228 13:34:23.456868 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6532860c-c344-4a74-9189-4382f4865b58-metrics-certs podName:6532860c-c344-4a74-9189-4382f4865b58 nodeName:}" failed. No retries permitted until 2026-02-28 13:34:24.456849361 +0000 UTC m=+1078.699170018 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6532860c-c344-4a74-9189-4382f4865b58-metrics-certs") pod "openstack-operator-controller-manager-6d8778d855-4x57f" (UID: "6532860c-c344-4a74-9189-4382f4865b58") : secret "metrics-server-cert" not found Feb 28 13:34:23 crc kubenswrapper[4897]: I0228 13:34:23.572914 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-5d87c9d997-hgfm4"] Feb 28 13:34:23 crc kubenswrapper[4897]: I0228 13:34:23.590699 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54688575f-7lr7s"] Feb 28 13:34:23 crc kubenswrapper[4897]: W0228 13:34:23.626653 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3664d59e_945d_4eb5_9443_296e206a1081.slice/crio-57ec88642cc6f1f1bf43ae2c9eb5440ae749a07e3c92573341f612d8b79d9883 WatchSource:0}: Error finding container 57ec88642cc6f1f1bf43ae2c9eb5440ae749a07e3c92573341f612d8b79d9883: Status 404 returned error can't find the container with id 57ec88642cc6f1f1bf43ae2c9eb5440ae749a07e3c92573341f612d8b79d9883 Feb 28 13:34:23 crc kubenswrapper[4897]: W0228 13:34:23.631317 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda78107ef_804f_476a_98f4_195f52927c3d.slice/crio-0e6ed5a6370b282508ba4c3ac02d0d7e60ddf470a4b55dfebbf8fb5d21934dfa WatchSource:0}: Error finding container 0e6ed5a6370b282508ba4c3ac02d0d7e60ddf470a4b55dfebbf8fb5d21934dfa: Status 404 returned error can't find the container with id 0e6ed5a6370b282508ba4c3ac02d0d7e60ddf470a4b55dfebbf8fb5d21934dfa Feb 28 13:34:23 crc kubenswrapper[4897]: I0228 13:34:23.785011 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-d8psr" 
event={"ID":"5ef2847d-3e11-419b-b34c-3f4cb5643af9","Type":"ContainerStarted","Data":"32b870bc3c3be98266e32faefe04385bd7bb1219dc7683cc40ca043af3546fd5"} Feb 28 13:34:23 crc kubenswrapper[4897]: I0228 13:34:23.787899 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-cfsb9" event={"ID":"507c84e1-3826-47ad-93f4-c2d6d726f8b7","Type":"ContainerStarted","Data":"9ba6a16986fcc48543567f0ae40af0bbbce0aea0bdb1d95d4627dd1455e524ad"} Feb 28 13:34:23 crc kubenswrapper[4897]: I0228 13:34:23.789374 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-hgfm4" event={"ID":"a78107ef-804f-476a-98f4-195f52927c3d","Type":"ContainerStarted","Data":"0e6ed5a6370b282508ba4c3ac02d0d7e60ddf470a4b55dfebbf8fb5d21934dfa"} Feb 28 13:34:23 crc kubenswrapper[4897]: I0228 13:34:23.790995 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54688575f-7lr7s" event={"ID":"3664d59e-945d-4eb5-9443-296e206a1081","Type":"ContainerStarted","Data":"57ec88642cc6f1f1bf43ae2c9eb5440ae749a07e3c92573341f612d8b79d9883"} Feb 28 13:34:23 crc kubenswrapper[4897]: I0228 13:34:23.792476 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-fm9lk" event={"ID":"30b14df1-8f3e-427c-b6d9-eb8aeb192213","Type":"ContainerStarted","Data":"bf92629195809d3419b055c7dd53579045d242d9e07913319715b4ff381f55cc"} Feb 28 13:34:23 crc kubenswrapper[4897]: I0228 13:34:23.793938 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-96lzs" event={"ID":"1d330dac-b70b-4af0-bfa0-1fba21022fb1","Type":"ContainerStarted","Data":"f411d2d9bdad6c9d4507b10a6c9f422e984653ae54df11d06839c7f667f16fd6"} Feb 28 13:34:23 crc kubenswrapper[4897]: I0228 13:34:23.880921 4897 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5fdb694969-6r8pc"] Feb 28 13:34:23 crc kubenswrapper[4897]: I0228 13:34:23.896382 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-qjg9q"] Feb 28 13:34:23 crc kubenswrapper[4897]: I0228 13:34:23.912435 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-75684d597f-p6pbb"] Feb 28 13:34:23 crc kubenswrapper[4897]: I0228 13:34:23.912818 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-cf99c678f-pjmt7"] Feb 28 13:34:23 crc kubenswrapper[4897]: W0228 13:34:23.913097 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc25839b5_c34e_4865_a5ad_4e10355f1953.slice/crio-7f9d053399fa952b3afcca61ef473a495ffff14110fc25c82ea193768891a5db WatchSource:0}: Error finding container 7f9d053399fa952b3afcca61ef473a495ffff14110fc25c82ea193768891a5db: Status 404 returned error can't find the container with id 7f9d053399fa952b3afcca61ef473a495ffff14110fc25c82ea193768891a5db Feb 28 13:34:23 crc kubenswrapper[4897]: I0228 13:34:23.919771 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-64db6967f8-4tvzl"] Feb 28 13:34:23 crc kubenswrapper[4897]: I0228 13:34:23.925134 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-69dbd6f547-4ng5q"] Feb 28 13:34:23 crc kubenswrapper[4897]: I0228 13:34:23.933705 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-55b5ff4dbb-v7qbm"] Feb 28 13:34:23 crc kubenswrapper[4897]: W0228 13:34:23.934894 4897 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d635198_c21d_4d2e_9393_ad9b6cdf462f.slice/crio-4e7d00e0a579dc9c95d0eb3ad15bdda10c26de7891021ad6f164aef8e4f25681 WatchSource:0}: Error finding container 4e7d00e0a579dc9c95d0eb3ad15bdda10c26de7891021ad6f164aef8e4f25681: Status 404 returned error can't find the container with id 4e7d00e0a579dc9c95d0eb3ad15bdda10c26de7891021ad6f164aef8e4f25681 Feb 28 13:34:23 crc kubenswrapper[4897]: W0228 13:34:23.936076 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7498ffc_cb24_44e8_b0cb_4ada46db9e4c.slice/crio-7fa4c906b31f737049fa714e533c738efc69e46473f7d7983e18867cd1c0c74a WatchSource:0}: Error finding container 7fa4c906b31f737049fa714e533c738efc69e46473f7d7983e18867cd1c0c74a: Status 404 returned error can't find the container with id 7fa4c906b31f737049fa714e533c738efc69e46473f7d7983e18867cd1c0c74a Feb 28 13:34:23 crc kubenswrapper[4897]: W0228 13:34:23.937423 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb237a99b_2fe2_4804_880b_03494df684d2.slice/crio-a90c3dd75136c6b950eb09e0cd64e81d8fcee71fef56b823fa7604476303a128 WatchSource:0}: Error finding container a90c3dd75136c6b950eb09e0cd64e81d8fcee71fef56b823fa7604476303a128: Status 404 returned error can't find the container with id a90c3dd75136c6b950eb09e0cd64e81d8fcee71fef56b823fa7604476303a128 Feb 28 13:34:23 crc kubenswrapper[4897]: I0228 13:34:23.942051 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9b9ff9f4d-dnpdj"] Feb 28 13:34:23 crc kubenswrapper[4897]: W0228 13:34:23.948722 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf8aae65_a739_4ab3_8208_ae8ac4ed0671.slice/crio-56faf841a954a1b88accc2ec97346cf38e636acbe3464440614bd544cf4a143a 
WatchSource:0}: Error finding container 56faf841a954a1b88accc2ec97346cf38e636acbe3464440614bd544cf4a143a: Status 404 returned error can't find the container with id 56faf841a954a1b88accc2ec97346cf38e636acbe3464440614bd544cf4a143a Feb 28 13:34:23 crc kubenswrapper[4897]: I0228 13:34:23.950241 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-6xfvp"] Feb 28 13:34:23 crc kubenswrapper[4897]: I0228 13:34:23.957648 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-h65l6"] Feb 28 13:34:23 crc kubenswrapper[4897]: W0228 13:34:23.960785 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30810ec7_8325_4bde_aa9d_ff905addb474.slice/crio-4033eb31f2196165985741e1faea98961daa5a6caa345117e93b97864fbaf9a4 WatchSource:0}: Error finding container 4033eb31f2196165985741e1faea98961daa5a6caa345117e93b97864fbaf9a4: Status 404 returned error can't find the container with id 4033eb31f2196165985741e1faea98961daa5a6caa345117e93b97864fbaf9a4 Feb 28 13:34:23 crc kubenswrapper[4897]: W0228 13:34:23.962708 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc434bb35_55df_45b5_9eeb_ab9913f3fd5e.slice/crio-bd20b5cb7bbc269ce9dc9970edf4e59cb9fbd17982f5bbbac2993106ca745187 WatchSource:0}: Error finding container bd20b5cb7bbc269ce9dc9970edf4e59cb9fbd17982f5bbbac2993106ca745187: Status 404 returned error can't find the container with id bd20b5cb7bbc269ce9dc9970edf4e59cb9fbd17982f5bbbac2993106ca745187 Feb 28 13:34:23 crc kubenswrapper[4897]: I0228 13:34:23.963026 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3bfb71f8-fd2c-4730-af54-601ec4daebaf-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-bb7d9\" 
(UID: \"3bfb71f8-fd2c-4730-af54-601ec4daebaf\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-bb7d9" Feb 28 13:34:23 crc kubenswrapper[4897]: E0228 13:34:23.963173 4897 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 28 13:34:23 crc kubenswrapper[4897]: E0228 13:34:23.963224 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3bfb71f8-fd2c-4730-af54-601ec4daebaf-cert podName:3bfb71f8-fd2c-4730-af54-601ec4daebaf nodeName:}" failed. No retries permitted until 2026-02-28 13:34:25.963209178 +0000 UTC m=+1080.205529825 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3bfb71f8-fd2c-4730-af54-601ec4daebaf-cert") pod "infra-operator-controller-manager-f7fcc58b9-bb7d9" (UID: "3bfb71f8-fd2c-4730-af54-601ec4daebaf") : secret "infra-operator-webhook-server-cert" not found Feb 28 13:34:23 crc kubenswrapper[4897]: W0228 13:34:23.964684 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9935d62_a205_4294_a124_313a8437c1ab.slice/crio-50e0bb129a605426f58d228f4f2183b5e11e213c469b62a7dfbe504cbdf16211 WatchSource:0}: Error finding container 50e0bb129a605426f58d228f4f2183b5e11e213c469b62a7dfbe504cbdf16211: Status 404 returned error can't find the container with id 50e0bb129a605426f58d228f4f2183b5e11e213c469b62a7dfbe504cbdf16211 Feb 28 13:34:23 crc kubenswrapper[4897]: I0228 13:34:23.965327 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-wrf59"] Feb 28 13:34:23 crc kubenswrapper[4897]: E0228 13:34:23.966219 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:5592ec4a6fbe2c832d1828b51af0b907e5d733d478b6f378a9b2f6d6cf0ac505,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5rj4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-7b6bfb6475-6xfvp_openstack-operators(30810ec7-8325-4bde-aa9d-ff905addb474): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 28 13:34:23 crc kubenswrapper[4897]: W0228 13:34:23.966612 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod216a4a66_0783_4b6c_9884_370bd3a001a4.slice/crio-67bac68786122837753b6725d9e195a6167c66f5d22ddf4d4e0dc11b57c31ab0 WatchSource:0}: Error finding container 67bac68786122837753b6725d9e195a6167c66f5d22ddf4d4e0dc11b57c31ab0: Status 404 returned error can't find the container with id 67bac68786122837753b6725d9e195a6167c66f5d22ddf4d4e0dc11b57c31ab0 Feb 28 13:34:23 crc kubenswrapper[4897]: E0228 13:34:23.966821 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:bb939885bd04593ad03af901adb77ee2a2d18529b328c23288c7cc7a2ba5282e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gj9pb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-648564c9fc-hkkvm_openstack-operators(c434bb35-55df-45b5-9eeb-ab9913f3fd5e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 28 13:34:23 crc kubenswrapper[4897]: E0228 13:34:23.967151 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:172f24bd4603ac3498536a8a2c8fffb07cf9113dd52bc132778ea0aa275c6b84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pkhv9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-74b6b5dc96-wqgwr_openstack-operators(a9935d62-a205-4294-a124-313a8437c1ab): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 28 13:34:23 crc kubenswrapper[4897]: E0228 13:34:23.967340 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-6xfvp" podUID="30810ec7-8325-4bde-aa9d-ff905addb474" Feb 28 13:34:23 crc 
kubenswrapper[4897]: E0228 13:34:23.967453 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j8l2b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-67d996989d-h65l6_openstack-operators(c90ec355-3eb2-43e5-9a39-eed72bb46d1b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 28 13:34:23 crc kubenswrapper[4897]: E0228 13:34:23.968107 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-hkkvm" podUID="c434bb35-55df-45b5-9eeb-ab9913f3fd5e" Feb 28 13:34:23 crc kubenswrapper[4897]: E0228 13:34:23.968501 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-wqgwr" podUID="a9935d62-a205-4294-a124-313a8437c1ab" Feb 28 13:34:23 crc kubenswrapper[4897]: E0228 13:34:23.968551 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-67d996989d-h65l6" podUID="c90ec355-3eb2-43e5-9a39-eed72bb46d1b" Feb 28 13:34:23 
crc kubenswrapper[4897]: I0228 13:34:23.971014 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-74b6b5dc96-wqgwr"] Feb 28 13:34:23 crc kubenswrapper[4897]: E0228 13:34:23.971935 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rg5f8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-w6zhg_openstack-operators(216a4a66-0783-4b6c-9884-370bd3a001a4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 28 13:34:23 crc kubenswrapper[4897]: E0228 13:34:23.973407 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w6zhg" podUID="216a4a66-0783-4b6c-9884-370bd3a001a4" Feb 28 13:34:23 crc kubenswrapper[4897]: W0228 13:34:23.978062 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5e3f361_0ca8_4a8f_8625_8ea90c292ac2.slice/crio-3cab287e4d91faaf27ac3e3fbe4bd67d118ac6cbdf5b631f5e4e67cffbec006b WatchSource:0}: Error finding container 3cab287e4d91faaf27ac3e3fbe4bd67d118ac6cbdf5b631f5e4e67cffbec006b: Status 404 returned error can't find the container with id 3cab287e4d91faaf27ac3e3fbe4bd67d118ac6cbdf5b631f5e4e67cffbec006b Feb 28 13:34:23 crc kubenswrapper[4897]: I0228 13:34:23.980651 
4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-648564c9fc-hkkvm"] Feb 28 13:34:23 crc kubenswrapper[4897]: E0228 13:34:23.984035 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:1b9074a4ce16396d8bd2d30a475fc8c2f004f75a023e3eef8950661e89c0bcc6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mjh2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5fdb694969-6r8pc_openstack-operators(f5e3f361-0ca8-4a8f-8625-8ea90c292ac2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 28 13:34:23 crc kubenswrapper[4897]: E0228 13:34:23.985679 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-6r8pc" podUID="f5e3f361-0ca8-4a8f-8625-8ea90c292ac2" Feb 28 13:34:23 crc kubenswrapper[4897]: I0228 13:34:23.989118 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w6zhg"] Feb 28 13:34:24 crc kubenswrapper[4897]: I0228 13:34:24.166033 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fe6be473-8403-4c9d-abf6-a7a0251326f9-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c5bszd\" (UID: \"fe6be473-8403-4c9d-abf6-a7a0251326f9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5bszd" Feb 28 13:34:24 crc kubenswrapper[4897]: 
E0228 13:34:24.166179 4897 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 13:34:24 crc kubenswrapper[4897]: E0228 13:34:24.166223 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe6be473-8403-4c9d-abf6-a7a0251326f9-cert podName:fe6be473-8403-4c9d-abf6-a7a0251326f9 nodeName:}" failed. No retries permitted until 2026-02-28 13:34:26.166209523 +0000 UTC m=+1080.408530180 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fe6be473-8403-4c9d-abf6-a7a0251326f9-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c5bszd" (UID: "fe6be473-8403-4c9d-abf6-a7a0251326f9") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 13:34:24 crc kubenswrapper[4897]: I0228 13:34:24.470836 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6532860c-c344-4a74-9189-4382f4865b58-webhook-certs\") pod \"openstack-operator-controller-manager-6d8778d855-4x57f\" (UID: \"6532860c-c344-4a74-9189-4382f4865b58\") " pod="openstack-operators/openstack-operator-controller-manager-6d8778d855-4x57f" Feb 28 13:34:24 crc kubenswrapper[4897]: E0228 13:34:24.470961 4897 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 28 13:34:24 crc kubenswrapper[4897]: E0228 13:34:24.471009 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6532860c-c344-4a74-9189-4382f4865b58-webhook-certs podName:6532860c-c344-4a74-9189-4382f4865b58 nodeName:}" failed. No retries permitted until 2026-02-28 13:34:26.470996887 +0000 UTC m=+1080.713317544 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6532860c-c344-4a74-9189-4382f4865b58-webhook-certs") pod "openstack-operator-controller-manager-6d8778d855-4x57f" (UID: "6532860c-c344-4a74-9189-4382f4865b58") : secret "webhook-server-cert" not found Feb 28 13:34:24 crc kubenswrapper[4897]: I0228 13:34:24.471389 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6532860c-c344-4a74-9189-4382f4865b58-metrics-certs\") pod \"openstack-operator-controller-manager-6d8778d855-4x57f\" (UID: \"6532860c-c344-4a74-9189-4382f4865b58\") " pod="openstack-operators/openstack-operator-controller-manager-6d8778d855-4x57f" Feb 28 13:34:24 crc kubenswrapper[4897]: E0228 13:34:24.471520 4897 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 28 13:34:24 crc kubenswrapper[4897]: E0228 13:34:24.471562 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6532860c-c344-4a74-9189-4382f4865b58-metrics-certs podName:6532860c-c344-4a74-9189-4382f4865b58 nodeName:}" failed. No retries permitted until 2026-02-28 13:34:26.471550513 +0000 UTC m=+1080.713871170 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6532860c-c344-4a74-9189-4382f4865b58-metrics-certs") pod "openstack-operator-controller-manager-6d8778d855-4x57f" (UID: "6532860c-c344-4a74-9189-4382f4865b58") : secret "metrics-server-cert" not found Feb 28 13:34:24 crc kubenswrapper[4897]: I0228 13:34:24.807047 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-wqgwr" event={"ID":"a9935d62-a205-4294-a124-313a8437c1ab","Type":"ContainerStarted","Data":"50e0bb129a605426f58d228f4f2183b5e11e213c469b62a7dfbe504cbdf16211"} Feb 28 13:34:24 crc kubenswrapper[4897]: E0228 13:34:24.809353 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:172f24bd4603ac3498536a8a2c8fffb07cf9113dd52bc132778ea0aa275c6b84\\\"\"" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-wqgwr" podUID="a9935d62-a205-4294-a124-313a8437c1ab" Feb 28 13:34:24 crc kubenswrapper[4897]: I0228 13:34:24.811590 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-wrf59" event={"ID":"b237a99b-2fe2-4804-880b-03494df684d2","Type":"ContainerStarted","Data":"a90c3dd75136c6b950eb09e0cd64e81d8fcee71fef56b823fa7604476303a128"} Feb 28 13:34:24 crc kubenswrapper[4897]: I0228 13:34:24.813091 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-h65l6" event={"ID":"c90ec355-3eb2-43e5-9a39-eed72bb46d1b","Type":"ContainerStarted","Data":"87d39970662241316767e8f4c8a9a0640e7c23164c132b70ea5a876256baad75"} Feb 28 13:34:24 crc kubenswrapper[4897]: I0228 13:34:24.815057 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-6r8pc" event={"ID":"f5e3f361-0ca8-4a8f-8625-8ea90c292ac2","Type":"ContainerStarted","Data":"3cab287e4d91faaf27ac3e3fbe4bd67d118ac6cbdf5b631f5e4e67cffbec006b"} Feb 28 13:34:24 crc kubenswrapper[4897]: E0228 13:34:24.815502 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26\\\"\"" pod="openstack-operators/manila-operator-controller-manager-67d996989d-h65l6" podUID="c90ec355-3eb2-43e5-9a39-eed72bb46d1b" Feb 28 13:34:24 crc kubenswrapper[4897]: E0228 13:34:24.816443 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:1b9074a4ce16396d8bd2d30a475fc8c2f004f75a023e3eef8950661e89c0bcc6\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-6r8pc" podUID="f5e3f361-0ca8-4a8f-8625-8ea90c292ac2" Feb 28 13:34:24 crc kubenswrapper[4897]: I0228 13:34:24.817358 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w6zhg" event={"ID":"216a4a66-0783-4b6c-9884-370bd3a001a4","Type":"ContainerStarted","Data":"67bac68786122837753b6725d9e195a6167c66f5d22ddf4d4e0dc11b57c31ab0"} Feb 28 13:34:24 crc kubenswrapper[4897]: E0228 13:34:24.820157 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w6zhg" 
podUID="216a4a66-0783-4b6c-9884-370bd3a001a4" Feb 28 13:34:24 crc kubenswrapper[4897]: I0228 13:34:24.825775 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-v7qbm" event={"ID":"37408ab3-7514-42a0-92e8-6c2a2710b9f0","Type":"ContainerStarted","Data":"4a30b6db4e7faf7cb72c769ea3a0cda49b479f06b585f17039351ab4051d9867"} Feb 28 13:34:24 crc kubenswrapper[4897]: I0228 13:34:24.827946 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-6xfvp" event={"ID":"30810ec7-8325-4bde-aa9d-ff905addb474","Type":"ContainerStarted","Data":"4033eb31f2196165985741e1faea98961daa5a6caa345117e93b97864fbaf9a4"} Feb 28 13:34:24 crc kubenswrapper[4897]: E0228 13:34:24.830242 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:5592ec4a6fbe2c832d1828b51af0b907e5d733d478b6f378a9b2f6d6cf0ac505\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-6xfvp" podUID="30810ec7-8325-4bde-aa9d-ff905addb474" Feb 28 13:34:24 crc kubenswrapper[4897]: I0228 13:34:24.830468 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-qjg9q" event={"ID":"cf8aae65-a739-4ab3-8208-ae8ac4ed0671","Type":"ContainerStarted","Data":"56faf841a954a1b88accc2ec97346cf38e636acbe3464440614bd544cf4a143a"} Feb 28 13:34:24 crc kubenswrapper[4897]: I0228 13:34:24.835154 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pjmt7" event={"ID":"e7498ffc-cb24-44e8-b0cb-4ada46db9e4c","Type":"ContainerStarted","Data":"7fa4c906b31f737049fa714e533c738efc69e46473f7d7983e18867cd1c0c74a"} Feb 28 13:34:24 crc kubenswrapper[4897]: I0228 13:34:24.837006 4897 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-dnpdj" event={"ID":"8c8044b8-c803-4b5b-916f-34c0c03ab619","Type":"ContainerStarted","Data":"541ddaf3d90f597dfeef9a6079d17355da9bb47eb767260ebc000788e2afb663"} Feb 28 13:34:24 crc kubenswrapper[4897]: I0228 13:34:24.839896 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-4tvzl" event={"ID":"5863afa6-053e-4d6c-899e-c31dcc30dcf3","Type":"ContainerStarted","Data":"13e7403156717c5893ec42fd0b80990d3d985ba1d53c72c9460c118692b8838a"} Feb 28 13:34:24 crc kubenswrapper[4897]: I0228 13:34:24.841880 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-69dbd6f547-4ng5q" event={"ID":"c25839b5-c34e-4865-a5ad-4e10355f1953","Type":"ContainerStarted","Data":"7f9d053399fa952b3afcca61ef473a495ffff14110fc25c82ea193768891a5db"} Feb 28 13:34:24 crc kubenswrapper[4897]: I0228 13:34:24.853156 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-p6pbb" event={"ID":"3d635198-c21d-4d2e-9393-ad9b6cdf462f","Type":"ContainerStarted","Data":"4e7d00e0a579dc9c95d0eb3ad15bdda10c26de7891021ad6f164aef8e4f25681"} Feb 28 13:34:24 crc kubenswrapper[4897]: I0228 13:34:24.854576 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-hkkvm" event={"ID":"c434bb35-55df-45b5-9eeb-ab9913f3fd5e","Type":"ContainerStarted","Data":"bd20b5cb7bbc269ce9dc9970edf4e59cb9fbd17982f5bbbac2993106ca745187"} Feb 28 13:34:24 crc kubenswrapper[4897]: E0228 13:34:24.856290 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/placement-operator@sha256:bb939885bd04593ad03af901adb77ee2a2d18529b328c23288c7cc7a2ba5282e\\\"\"" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-hkkvm" podUID="c434bb35-55df-45b5-9eeb-ab9913f3fd5e" Feb 28 13:34:25 crc kubenswrapper[4897]: E0228 13:34:25.870577 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:5592ec4a6fbe2c832d1828b51af0b907e5d733d478b6f378a9b2f6d6cf0ac505\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-6xfvp" podUID="30810ec7-8325-4bde-aa9d-ff905addb474" Feb 28 13:34:25 crc kubenswrapper[4897]: E0228 13:34:25.871032 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:172f24bd4603ac3498536a8a2c8fffb07cf9113dd52bc132778ea0aa275c6b84\\\"\"" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-wqgwr" podUID="a9935d62-a205-4294-a124-313a8437c1ab" Feb 28 13:34:25 crc kubenswrapper[4897]: E0228 13:34:25.871069 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:1b9074a4ce16396d8bd2d30a475fc8c2f004f75a023e3eef8950661e89c0bcc6\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-6r8pc" podUID="f5e3f361-0ca8-4a8f-8625-8ea90c292ac2" Feb 28 13:34:25 crc kubenswrapper[4897]: E0228 13:34:25.871191 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26\\\"\"" pod="openstack-operators/manila-operator-controller-manager-67d996989d-h65l6" podUID="c90ec355-3eb2-43e5-9a39-eed72bb46d1b" Feb 28 13:34:25 crc kubenswrapper[4897]: E0228 13:34:25.871223 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:bb939885bd04593ad03af901adb77ee2a2d18529b328c23288c7cc7a2ba5282e\\\"\"" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-hkkvm" podUID="c434bb35-55df-45b5-9eeb-ab9913f3fd5e" Feb 28 13:34:25 crc kubenswrapper[4897]: E0228 13:34:25.871097 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w6zhg" podUID="216a4a66-0783-4b6c-9884-370bd3a001a4" Feb 28 13:34:26 crc kubenswrapper[4897]: I0228 13:34:26.011599 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3bfb71f8-fd2c-4730-af54-601ec4daebaf-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-bb7d9\" (UID: \"3bfb71f8-fd2c-4730-af54-601ec4daebaf\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-bb7d9" Feb 28 13:34:26 crc kubenswrapper[4897]: E0228 13:34:26.011892 4897 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 28 13:34:26 crc kubenswrapper[4897]: E0228 13:34:26.011977 4897 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/3bfb71f8-fd2c-4730-af54-601ec4daebaf-cert podName:3bfb71f8-fd2c-4730-af54-601ec4daebaf nodeName:}" failed. No retries permitted until 2026-02-28 13:34:30.011955728 +0000 UTC m=+1084.254276375 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3bfb71f8-fd2c-4730-af54-601ec4daebaf-cert") pod "infra-operator-controller-manager-f7fcc58b9-bb7d9" (UID: "3bfb71f8-fd2c-4730-af54-601ec4daebaf") : secret "infra-operator-webhook-server-cert" not found Feb 28 13:34:26 crc kubenswrapper[4897]: I0228 13:34:26.216095 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fe6be473-8403-4c9d-abf6-a7a0251326f9-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c5bszd\" (UID: \"fe6be473-8403-4c9d-abf6-a7a0251326f9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5bszd" Feb 28 13:34:26 crc kubenswrapper[4897]: E0228 13:34:26.216858 4897 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 13:34:26 crc kubenswrapper[4897]: E0228 13:34:26.216942 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe6be473-8403-4c9d-abf6-a7a0251326f9-cert podName:fe6be473-8403-4c9d-abf6-a7a0251326f9 nodeName:}" failed. No retries permitted until 2026-02-28 13:34:30.21691824 +0000 UTC m=+1084.459238897 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fe6be473-8403-4c9d-abf6-a7a0251326f9-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c5bszd" (UID: "fe6be473-8403-4c9d-abf6-a7a0251326f9") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 13:34:26 crc kubenswrapper[4897]: I0228 13:34:26.522012 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6532860c-c344-4a74-9189-4382f4865b58-webhook-certs\") pod \"openstack-operator-controller-manager-6d8778d855-4x57f\" (UID: \"6532860c-c344-4a74-9189-4382f4865b58\") " pod="openstack-operators/openstack-operator-controller-manager-6d8778d855-4x57f" Feb 28 13:34:26 crc kubenswrapper[4897]: I0228 13:34:26.522180 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6532860c-c344-4a74-9189-4382f4865b58-metrics-certs\") pod \"openstack-operator-controller-manager-6d8778d855-4x57f\" (UID: \"6532860c-c344-4a74-9189-4382f4865b58\") " pod="openstack-operators/openstack-operator-controller-manager-6d8778d855-4x57f" Feb 28 13:34:26 crc kubenswrapper[4897]: E0228 13:34:26.522204 4897 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 28 13:34:26 crc kubenswrapper[4897]: E0228 13:34:26.522762 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6532860c-c344-4a74-9189-4382f4865b58-webhook-certs podName:6532860c-c344-4a74-9189-4382f4865b58 nodeName:}" failed. No retries permitted until 2026-02-28 13:34:30.522739115 +0000 UTC m=+1084.765059772 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6532860c-c344-4a74-9189-4382f4865b58-webhook-certs") pod "openstack-operator-controller-manager-6d8778d855-4x57f" (UID: "6532860c-c344-4a74-9189-4382f4865b58") : secret "webhook-server-cert" not found Feb 28 13:34:26 crc kubenswrapper[4897]: E0228 13:34:26.522567 4897 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 28 13:34:26 crc kubenswrapper[4897]: E0228 13:34:26.522834 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6532860c-c344-4a74-9189-4382f4865b58-metrics-certs podName:6532860c-c344-4a74-9189-4382f4865b58 nodeName:}" failed. No retries permitted until 2026-02-28 13:34:30.522812117 +0000 UTC m=+1084.765132774 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6532860c-c344-4a74-9189-4382f4865b58-metrics-certs") pod "openstack-operator-controller-manager-6d8778d855-4x57f" (UID: "6532860c-c344-4a74-9189-4382f4865b58") : secret "metrics-server-cert" not found Feb 28 13:34:27 crc kubenswrapper[4897]: I0228 13:34:27.510583 4897 scope.go:117] "RemoveContainer" containerID="725b8a96bd051a1221ce1b763a307d804053924ba7541f1c192d338920f8a395" Feb 28 13:34:30 crc kubenswrapper[4897]: I0228 13:34:30.077711 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3bfb71f8-fd2c-4730-af54-601ec4daebaf-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-bb7d9\" (UID: \"3bfb71f8-fd2c-4730-af54-601ec4daebaf\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-bb7d9" Feb 28 13:34:30 crc kubenswrapper[4897]: E0228 13:34:30.077926 4897 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 28 13:34:30 crc kubenswrapper[4897]: 
E0228 13:34:30.078413 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3bfb71f8-fd2c-4730-af54-601ec4daebaf-cert podName:3bfb71f8-fd2c-4730-af54-601ec4daebaf nodeName:}" failed. No retries permitted until 2026-02-28 13:34:38.07839389 +0000 UTC m=+1092.320714547 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3bfb71f8-fd2c-4730-af54-601ec4daebaf-cert") pod "infra-operator-controller-manager-f7fcc58b9-bb7d9" (UID: "3bfb71f8-fd2c-4730-af54-601ec4daebaf") : secret "infra-operator-webhook-server-cert" not found Feb 28 13:34:30 crc kubenswrapper[4897]: I0228 13:34:30.280128 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fe6be473-8403-4c9d-abf6-a7a0251326f9-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c5bszd\" (UID: \"fe6be473-8403-4c9d-abf6-a7a0251326f9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5bszd" Feb 28 13:34:30 crc kubenswrapper[4897]: E0228 13:34:30.280266 4897 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 13:34:30 crc kubenswrapper[4897]: E0228 13:34:30.280336 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe6be473-8403-4c9d-abf6-a7a0251326f9-cert podName:fe6be473-8403-4c9d-abf6-a7a0251326f9 nodeName:}" failed. No retries permitted until 2026-02-28 13:34:38.280322833 +0000 UTC m=+1092.522643490 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fe6be473-8403-4c9d-abf6-a7a0251326f9-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c5bszd" (UID: "fe6be473-8403-4c9d-abf6-a7a0251326f9") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 13:34:30 crc kubenswrapper[4897]: I0228 13:34:30.585527 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6532860c-c344-4a74-9189-4382f4865b58-webhook-certs\") pod \"openstack-operator-controller-manager-6d8778d855-4x57f\" (UID: \"6532860c-c344-4a74-9189-4382f4865b58\") " pod="openstack-operators/openstack-operator-controller-manager-6d8778d855-4x57f" Feb 28 13:34:30 crc kubenswrapper[4897]: I0228 13:34:30.585679 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6532860c-c344-4a74-9189-4382f4865b58-metrics-certs\") pod \"openstack-operator-controller-manager-6d8778d855-4x57f\" (UID: \"6532860c-c344-4a74-9189-4382f4865b58\") " pod="openstack-operators/openstack-operator-controller-manager-6d8778d855-4x57f" Feb 28 13:34:30 crc kubenswrapper[4897]: E0228 13:34:30.585688 4897 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 28 13:34:30 crc kubenswrapper[4897]: E0228 13:34:30.585762 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6532860c-c344-4a74-9189-4382f4865b58-webhook-certs podName:6532860c-c344-4a74-9189-4382f4865b58 nodeName:}" failed. No retries permitted until 2026-02-28 13:34:38.585744856 +0000 UTC m=+1092.828065513 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6532860c-c344-4a74-9189-4382f4865b58-webhook-certs") pod "openstack-operator-controller-manager-6d8778d855-4x57f" (UID: "6532860c-c344-4a74-9189-4382f4865b58") : secret "webhook-server-cert" not found Feb 28 13:34:30 crc kubenswrapper[4897]: E0228 13:34:30.585898 4897 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 28 13:34:30 crc kubenswrapper[4897]: E0228 13:34:30.585968 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6532860c-c344-4a74-9189-4382f4865b58-metrics-certs podName:6532860c-c344-4a74-9189-4382f4865b58 nodeName:}" failed. No retries permitted until 2026-02-28 13:34:38.585949982 +0000 UTC m=+1092.828270639 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6532860c-c344-4a74-9189-4382f4865b58-metrics-certs") pod "openstack-operator-controller-manager-6d8778d855-4x57f" (UID: "6532860c-c344-4a74-9189-4382f4865b58") : secret "metrics-server-cert" not found Feb 28 13:34:34 crc kubenswrapper[4897]: E0228 13:34:34.864583 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:34:35 crc kubenswrapper[4897]: E0228 13:34:35.467718 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:ee642fcf655f9897d480460008cba2e98b497d3ffdf7ab1d48ea460eb20c2053" Feb 28 13:34:35 crc kubenswrapper[4897]: E0228 13:34:35.468280 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:ee642fcf655f9897d480460008cba2e98b497d3ffdf7ab1d48ea460eb20c2053,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mcnvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-cf99c678f-pjmt7_openstack-operators(e7498ffc-cb24-44e8-b0cb-4ada46db9e4c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 13:34:35 crc kubenswrapper[4897]: E0228 13:34:35.469842 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pjmt7" podUID="e7498ffc-cb24-44e8-b0cb-4ada46db9e4c" Feb 28 13:34:35 crc kubenswrapper[4897]: E0228 13:34:35.937638 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:ee642fcf655f9897d480460008cba2e98b497d3ffdf7ab1d48ea460eb20c2053\\\"\"" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pjmt7" podUID="e7498ffc-cb24-44e8-b0cb-4ada46db9e4c" Feb 28 13:34:36 crc kubenswrapper[4897]: E0228 13:34:36.359941 4897 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:7961c67cfc87de69055f8330771af625f73d857426c4bb17ebb888ead843fff3" Feb 28 13:34:36 crc kubenswrapper[4897]: E0228 13:34:36.360167 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:7961c67cfc87de69055f8330771af625f73d857426c4bb17ebb888ead843fff3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-24psz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-55d77d7b5c-d8psr_openstack-operators(5ef2847d-3e11-419b-b34c-3f4cb5643af9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 13:34:36 crc kubenswrapper[4897]: E0228 13:34:36.361385 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-d8psr" podUID="5ef2847d-3e11-419b-b34c-3f4cb5643af9" Feb 28 13:34:36 crc kubenswrapper[4897]: E0228 13:34:36.944393 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:7961c67cfc87de69055f8330771af625f73d857426c4bb17ebb888ead843fff3\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-d8psr" podUID="5ef2847d-3e11-419b-b34c-3f4cb5643af9" Feb 28 13:34:37 crc kubenswrapper[4897]: E0228 13:34:37.142954 4897 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:9d03f03aa9a460f1fcac8875064808c03e4ecd0388873bbfb9c7dc58331f3968" Feb 28 13:34:37 crc kubenswrapper[4897]: E0228 13:34:37.143151 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:9d03f03aa9a460f1fcac8875064808c03e4ecd0388873bbfb9c7dc58331f3968,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jt7w2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-55b5ff4dbb-v7qbm_openstack-operators(37408ab3-7514-42a0-92e8-6c2a2710b9f0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 13:34:37 crc kubenswrapper[4897]: E0228 13:34:37.144375 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-v7qbm" podUID="37408ab3-7514-42a0-92e8-6c2a2710b9f0" Feb 28 13:34:37 crc kubenswrapper[4897]: E0228 13:34:37.704737 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:508859beb0e5b69169393dbb0039dc03a9d4ba05f16f6ff74f9b25e19d446214" Feb 28 13:34:37 crc kubenswrapper[4897]: E0228 13:34:37.704956 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:508859beb0e5b69169393dbb0039dc03a9d4ba05f16f6ff74f9b25e19d446214,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hq5gf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-5d87c9d997-hgfm4_openstack-operators(a78107ef-804f-476a-98f4-195f52927c3d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 13:34:37 crc kubenswrapper[4897]: E0228 13:34:37.706128 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-hgfm4" podUID="a78107ef-804f-476a-98f4-195f52927c3d" Feb 28 13:34:37 crc kubenswrapper[4897]: E0228 13:34:37.952931 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:9d03f03aa9a460f1fcac8875064808c03e4ecd0388873bbfb9c7dc58331f3968\\\"\"" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-v7qbm" podUID="37408ab3-7514-42a0-92e8-6c2a2710b9f0" Feb 28 13:34:37 crc kubenswrapper[4897]: E0228 13:34:37.953856 4897 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:508859beb0e5b69169393dbb0039dc03a9d4ba05f16f6ff74f9b25e19d446214\\\"\"" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-hgfm4" podUID="a78107ef-804f-476a-98f4-195f52927c3d" Feb 28 13:34:38 crc kubenswrapper[4897]: I0228 13:34:38.115581 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3bfb71f8-fd2c-4730-af54-601ec4daebaf-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-bb7d9\" (UID: \"3bfb71f8-fd2c-4730-af54-601ec4daebaf\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-bb7d9" Feb 28 13:34:38 crc kubenswrapper[4897]: I0228 13:34:38.125466 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3bfb71f8-fd2c-4730-af54-601ec4daebaf-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-bb7d9\" (UID: \"3bfb71f8-fd2c-4730-af54-601ec4daebaf\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-bb7d9" Feb 28 13:34:38 crc kubenswrapper[4897]: E0228 13:34:38.317269 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:b242403a27609ac87a0ed3a7dd788aceaf8f3da3620981cf5e000d56862d77a4" Feb 28 13:34:38 crc kubenswrapper[4897]: E0228 13:34:38.317504 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:b242403a27609ac87a0ed3a7dd788aceaf8f3da3620981cf5e000d56862d77a4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qnvgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-54688575f-7lr7s_openstack-operators(3664d59e-945d-4eb5-9443-296e206a1081): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 13:34:38 crc kubenswrapper[4897]: I0228 13:34:38.318673 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fe6be473-8403-4c9d-abf6-a7a0251326f9-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c5bszd\" (UID: \"fe6be473-8403-4c9d-abf6-a7a0251326f9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5bszd" Feb 28 13:34:38 crc kubenswrapper[4897]: E0228 13:34:38.318792 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-54688575f-7lr7s" podUID="3664d59e-945d-4eb5-9443-296e206a1081" Feb 28 13:34:38 crc kubenswrapper[4897]: I0228 13:34:38.322804 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" 
(UniqueName: \"kubernetes.io/secret/fe6be473-8403-4c9d-abf6-a7a0251326f9-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c5bszd\" (UID: \"fe6be473-8403-4c9d-abf6-a7a0251326f9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5bszd" Feb 28 13:34:38 crc kubenswrapper[4897]: I0228 13:34:38.357004 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-tqfrd" Feb 28 13:34:38 crc kubenswrapper[4897]: I0228 13:34:38.366254 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-bb7d9" Feb 28 13:34:38 crc kubenswrapper[4897]: I0228 13:34:38.467870 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-24z5v" Feb 28 13:34:38 crc kubenswrapper[4897]: I0228 13:34:38.476126 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5bszd" Feb 28 13:34:38 crc kubenswrapper[4897]: I0228 13:34:38.623020 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6532860c-c344-4a74-9189-4382f4865b58-webhook-certs\") pod \"openstack-operator-controller-manager-6d8778d855-4x57f\" (UID: \"6532860c-c344-4a74-9189-4382f4865b58\") " pod="openstack-operators/openstack-operator-controller-manager-6d8778d855-4x57f" Feb 28 13:34:38 crc kubenswrapper[4897]: I0228 13:34:38.623093 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6532860c-c344-4a74-9189-4382f4865b58-metrics-certs\") pod \"openstack-operator-controller-manager-6d8778d855-4x57f\" (UID: \"6532860c-c344-4a74-9189-4382f4865b58\") " pod="openstack-operators/openstack-operator-controller-manager-6d8778d855-4x57f" Feb 28 13:34:38 crc kubenswrapper[4897]: E0228 13:34:38.623125 4897 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 28 13:34:38 crc kubenswrapper[4897]: E0228 13:34:38.623209 4897 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 28 13:34:38 crc kubenswrapper[4897]: E0228 13:34:38.623212 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6532860c-c344-4a74-9189-4382f4865b58-webhook-certs podName:6532860c-c344-4a74-9189-4382f4865b58 nodeName:}" failed. No retries permitted until 2026-02-28 13:34:54.623158434 +0000 UTC m=+1108.865479091 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6532860c-c344-4a74-9189-4382f4865b58-webhook-certs") pod "openstack-operator-controller-manager-6d8778d855-4x57f" (UID: "6532860c-c344-4a74-9189-4382f4865b58") : secret "webhook-server-cert" not found Feb 28 13:34:38 crc kubenswrapper[4897]: E0228 13:34:38.623246 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6532860c-c344-4a74-9189-4382f4865b58-metrics-certs podName:6532860c-c344-4a74-9189-4382f4865b58 nodeName:}" failed. No retries permitted until 2026-02-28 13:34:54.623237446 +0000 UTC m=+1108.865558103 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6532860c-c344-4a74-9189-4382f4865b58-metrics-certs") pod "openstack-operator-controller-manager-6d8778d855-4x57f" (UID: "6532860c-c344-4a74-9189-4382f4865b58") : secret "metrics-server-cert" not found Feb 28 13:34:38 crc kubenswrapper[4897]: E0228 13:34:38.834505 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:f309cdea8084a4b1e8cbcd732d6e250fd93c55cfd1b48ba9026907c8591faab7" Feb 28 13:34:38 crc kubenswrapper[4897]: E0228 13:34:38.834927 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:f309cdea8084a4b1e8cbcd732d6e250fd93c55cfd1b48ba9026907c8591faab7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xnksh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-9b9ff9f4d-dnpdj_openstack-operators(8c8044b8-c803-4b5b-916f-34c0c03ab619): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 13:34:38 crc kubenswrapper[4897]: E0228 13:34:38.836142 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-dnpdj" podUID="8c8044b8-c803-4b5b-916f-34c0c03ab619" Feb 28 13:34:38 crc kubenswrapper[4897]: E0228 13:34:38.967255 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:b242403a27609ac87a0ed3a7dd788aceaf8f3da3620981cf5e000d56862d77a4\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-54688575f-7lr7s" podUID="3664d59e-945d-4eb5-9443-296e206a1081" Feb 28 13:34:38 crc kubenswrapper[4897]: E0228 13:34:38.967284 4897 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:f309cdea8084a4b1e8cbcd732d6e250fd93c55cfd1b48ba9026907c8591faab7\\\"\"" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-dnpdj" podUID="8c8044b8-c803-4b5b-916f-34c0c03ab619" Feb 28 13:34:39 crc kubenswrapper[4897]: I0228 13:34:39.958736 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-f7fcc58b9-bb7d9"] Feb 28 13:34:40 crc kubenswrapper[4897]: I0228 13:34:40.066493 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5bszd"] Feb 28 13:34:40 crc kubenswrapper[4897]: W0228 13:34:40.223131 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bfb71f8_fd2c_4730_af54_601ec4daebaf.slice/crio-d0f1b145cd4ea61b3dac2f3809ed403c70c9a68a9ed6cdef9850b475117e53d4 WatchSource:0}: Error finding container d0f1b145cd4ea61b3dac2f3809ed403c70c9a68a9ed6cdef9850b475117e53d4: Status 404 returned error can't find the container with id d0f1b145cd4ea61b3dac2f3809ed403c70c9a68a9ed6cdef9850b475117e53d4 Feb 28 13:34:40 crc kubenswrapper[4897]: W0228 13:34:40.223907 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe6be473_8403_4c9d_abf6_a7a0251326f9.slice/crio-348bc2a27b743befa9e651b52e714a70fe086d786320cc4e3d6f443cd7829cc7 WatchSource:0}: Error finding container 348bc2a27b743befa9e651b52e714a70fe086d786320cc4e3d6f443cd7829cc7: Status 404 returned error can't find the container with id 348bc2a27b743befa9e651b52e714a70fe086d786320cc4e3d6f443cd7829cc7 Feb 28 13:34:40 crc kubenswrapper[4897]: I0228 13:34:40.997517 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ironic-operator-controller-manager-545456dc4-cfsb9" event={"ID":"507c84e1-3826-47ad-93f4-c2d6d726f8b7","Type":"ContainerStarted","Data":"8568953684dc992fe3d3651c04e4f62699adc9423920843111d43231ca27b387"} Feb 28 13:34:40 crc kubenswrapper[4897]: I0228 13:34:40.998589 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-cfsb9" Feb 28 13:34:41 crc kubenswrapper[4897]: I0228 13:34:41.011019 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-qjg9q" event={"ID":"cf8aae65-a739-4ab3-8208-ae8ac4ed0671","Type":"ContainerStarted","Data":"151023152cef84a442b17226806b0193a6b74729329c8d9bdb783c2177cda2d0"} Feb 28 13:34:41 crc kubenswrapper[4897]: I0228 13:34:41.011085 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-qjg9q" Feb 28 13:34:41 crc kubenswrapper[4897]: I0228 13:34:41.033136 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-4tvzl" event={"ID":"5863afa6-053e-4d6c-899e-c31dcc30dcf3","Type":"ContainerStarted","Data":"375391d68b3e3d1ed4c22ece3f5d9ef9c25b57632259045d529946be3fe8c2ce"} Feb 28 13:34:41 crc kubenswrapper[4897]: I0228 13:34:41.033813 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-4tvzl" Feb 28 13:34:41 crc kubenswrapper[4897]: I0228 13:34:41.060362 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-69dbd6f547-4ng5q" event={"ID":"c25839b5-c34e-4865-a5ad-4e10355f1953","Type":"ContainerStarted","Data":"3424ce8e754f6dbb8a3b198c580b1df6dfb25222149d35f75aaeb534051ef39b"} Feb 28 13:34:41 crc kubenswrapper[4897]: I0228 13:34:41.060498 4897 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-69dbd6f547-4ng5q" Feb 28 13:34:41 crc kubenswrapper[4897]: I0228 13:34:41.072608 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-hkkvm" event={"ID":"c434bb35-55df-45b5-9eeb-ab9913f3fd5e","Type":"ContainerStarted","Data":"cb5ba18932045fd4fb7c3229e3cd941a9c5decc2196f062e594ffa23ce078f3b"} Feb 28 13:34:41 crc kubenswrapper[4897]: I0228 13:34:41.073350 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-hkkvm" Feb 28 13:34:41 crc kubenswrapper[4897]: I0228 13:34:41.078804 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-cfsb9" podStartSLOduration=2.570018696 podStartE2EDuration="19.078786478s" podCreationTimestamp="2026-02-28 13:34:22 +0000 UTC" firstStartedPulling="2026-02-28 13:34:23.036585915 +0000 UTC m=+1077.278906572" lastFinishedPulling="2026-02-28 13:34:39.545353697 +0000 UTC m=+1093.787674354" observedRunningTime="2026-02-28 13:34:41.049262453 +0000 UTC m=+1095.291583110" watchObservedRunningTime="2026-02-28 13:34:41.078786478 +0000 UTC m=+1095.321107125" Feb 28 13:34:41 crc kubenswrapper[4897]: I0228 13:34:41.081671 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-4tvzl" podStartSLOduration=4.504797777 podStartE2EDuration="20.081659992s" podCreationTimestamp="2026-02-28 13:34:21 +0000 UTC" firstStartedPulling="2026-02-28 13:34:23.956481021 +0000 UTC m=+1078.198801678" lastFinishedPulling="2026-02-28 13:34:39.533343236 +0000 UTC m=+1093.775663893" observedRunningTime="2026-02-28 13:34:41.077716806 +0000 UTC m=+1095.320037483" watchObservedRunningTime="2026-02-28 13:34:41.081659992 +0000 UTC m=+1095.323980649" 
Feb 28 13:34:41 crc kubenswrapper[4897]: I0228 13:34:41.087620 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-96lzs" event={"ID":"1d330dac-b70b-4af0-bfa0-1fba21022fb1","Type":"ContainerStarted","Data":"8fdda4f61a43d5d9b346d7cbe1e51073dc5132ace1f6ecd323c830a16f1ba94f"} Feb 28 13:34:41 crc kubenswrapper[4897]: I0228 13:34:41.088517 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-96lzs" Feb 28 13:34:41 crc kubenswrapper[4897]: I0228 13:34:41.110663 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-qjg9q" podStartSLOduration=3.532692975 podStartE2EDuration="19.110649301s" podCreationTimestamp="2026-02-28 13:34:22 +0000 UTC" firstStartedPulling="2026-02-28 13:34:23.950800635 +0000 UTC m=+1078.193121292" lastFinishedPulling="2026-02-28 13:34:39.528756961 +0000 UTC m=+1093.771077618" observedRunningTime="2026-02-28 13:34:41.106228611 +0000 UTC m=+1095.348549268" watchObservedRunningTime="2026-02-28 13:34:41.110649301 +0000 UTC m=+1095.352969958" Feb 28 13:34:41 crc kubenswrapper[4897]: I0228 13:34:41.121735 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5bszd" event={"ID":"fe6be473-8403-4c9d-abf6-a7a0251326f9","Type":"ContainerStarted","Data":"348bc2a27b743befa9e651b52e714a70fe086d786320cc4e3d6f443cd7829cc7"} Feb 28 13:34:41 crc kubenswrapper[4897]: I0228 13:34:41.164528 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-wrf59" event={"ID":"b237a99b-2fe2-4804-880b-03494df684d2","Type":"ContainerStarted","Data":"e5f539b574f27ee35ed86ab3be5c16e4494626995f2538460eaf68dd81de9adb"} Feb 28 13:34:41 crc kubenswrapper[4897]: I0228 13:34:41.165141 4897 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-wrf59" Feb 28 13:34:41 crc kubenswrapper[4897]: I0228 13:34:41.177758 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-6xfvp" event={"ID":"30810ec7-8325-4bde-aa9d-ff905addb474","Type":"ContainerStarted","Data":"51e1a5765d111654f6b0bd311eddaaa27f2b549564014be9f02da3d38dd94301"} Feb 28 13:34:41 crc kubenswrapper[4897]: I0228 13:34:41.178438 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-6xfvp" Feb 28 13:34:41 crc kubenswrapper[4897]: I0228 13:34:41.178815 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-hkkvm" podStartSLOduration=2.903188392 podStartE2EDuration="19.178798446s" podCreationTimestamp="2026-02-28 13:34:22 +0000 UTC" firstStartedPulling="2026-02-28 13:34:23.96667918 +0000 UTC m=+1078.208999837" lastFinishedPulling="2026-02-28 13:34:40.242289194 +0000 UTC m=+1094.484609891" observedRunningTime="2026-02-28 13:34:41.167549137 +0000 UTC m=+1095.409869794" watchObservedRunningTime="2026-02-28 13:34:41.178798446 +0000 UTC m=+1095.421119103" Feb 28 13:34:41 crc kubenswrapper[4897]: I0228 13:34:41.180806 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-69dbd6f547-4ng5q" podStartSLOduration=4.22648261 podStartE2EDuration="19.180799015s" podCreationTimestamp="2026-02-28 13:34:22 +0000 UTC" firstStartedPulling="2026-02-28 13:34:23.9226444 +0000 UTC m=+1078.164965057" lastFinishedPulling="2026-02-28 13:34:38.876960805 +0000 UTC m=+1093.119281462" observedRunningTime="2026-02-28 13:34:41.146674956 +0000 UTC m=+1095.388995623" watchObservedRunningTime="2026-02-28 13:34:41.180799015 +0000 
UTC m=+1095.423119672" Feb 28 13:34:41 crc kubenswrapper[4897]: I0228 13:34:41.196729 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-96lzs" podStartSLOduration=3.918795339 podStartE2EDuration="20.196698821s" podCreationTimestamp="2026-02-28 13:34:21 +0000 UTC" firstStartedPulling="2026-02-28 13:34:23.250936712 +0000 UTC m=+1077.493257369" lastFinishedPulling="2026-02-28 13:34:39.528840194 +0000 UTC m=+1093.771160851" observedRunningTime="2026-02-28 13:34:41.189586432 +0000 UTC m=+1095.431907089" watchObservedRunningTime="2026-02-28 13:34:41.196698821 +0000 UTC m=+1095.439019478" Feb 28 13:34:41 crc kubenswrapper[4897]: I0228 13:34:41.201344 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-bb7d9" event={"ID":"3bfb71f8-fd2c-4730-af54-601ec4daebaf","Type":"ContainerStarted","Data":"d0f1b145cd4ea61b3dac2f3809ed403c70c9a68a9ed6cdef9850b475117e53d4"} Feb 28 13:34:41 crc kubenswrapper[4897]: I0228 13:34:41.217055 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-p6pbb" event={"ID":"3d635198-c21d-4d2e-9393-ad9b6cdf462f","Type":"ContainerStarted","Data":"4d9048822879f74cb4232350915ad756f2ffc62987bf7ff44ca3715778920324"} Feb 28 13:34:41 crc kubenswrapper[4897]: I0228 13:34:41.217824 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-p6pbb" Feb 28 13:34:41 crc kubenswrapper[4897]: I0228 13:34:41.227423 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-wrf59" podStartSLOduration=3.639813801 podStartE2EDuration="19.22740587s" podCreationTimestamp="2026-02-28 13:34:22 +0000 UTC" firstStartedPulling="2026-02-28 13:34:23.940839163 +0000 UTC m=+1078.183159820" 
lastFinishedPulling="2026-02-28 13:34:39.528431232 +0000 UTC m=+1093.770751889" observedRunningTime="2026-02-28 13:34:41.220286971 +0000 UTC m=+1095.462607628" watchObservedRunningTime="2026-02-28 13:34:41.22740587 +0000 UTC m=+1095.469726527" Feb 28 13:34:41 crc kubenswrapper[4897]: I0228 13:34:41.242599 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-fm9lk" event={"ID":"30b14df1-8f3e-427c-b6d9-eb8aeb192213","Type":"ContainerStarted","Data":"b5e1317a219d62eb4ab7db3b1025217255180e9aea3fef01ff4ebd2b32bbcadc"} Feb 28 13:34:41 crc kubenswrapper[4897]: I0228 13:34:41.243368 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-fm9lk" Feb 28 13:34:41 crc kubenswrapper[4897]: I0228 13:34:41.254270 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-6xfvp" podStartSLOduration=2.844229376 podStartE2EDuration="19.254254276s" podCreationTimestamp="2026-02-28 13:34:22 +0000 UTC" firstStartedPulling="2026-02-28 13:34:23.966114443 +0000 UTC m=+1078.208435100" lastFinishedPulling="2026-02-28 13:34:40.376139333 +0000 UTC m=+1094.618460000" observedRunningTime="2026-02-28 13:34:41.252453533 +0000 UTC m=+1095.494774200" watchObservedRunningTime="2026-02-28 13:34:41.254254276 +0000 UTC m=+1095.496574933" Feb 28 13:34:41 crc kubenswrapper[4897]: I0228 13:34:41.323117 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-fm9lk" podStartSLOduration=2.161192325 podStartE2EDuration="19.323102482s" podCreationTimestamp="2026-02-28 13:34:22 +0000 UTC" firstStartedPulling="2026-02-28 13:34:23.057182738 +0000 UTC m=+1077.299503395" lastFinishedPulling="2026-02-28 13:34:40.219092895 +0000 UTC m=+1094.461413552" observedRunningTime="2026-02-28 
13:34:41.293556097 +0000 UTC m=+1095.535876754" watchObservedRunningTime="2026-02-28 13:34:41.323102482 +0000 UTC m=+1095.565423139" Feb 28 13:34:41 crc kubenswrapper[4897]: I0228 13:34:41.325922 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-p6pbb" podStartSLOduration=3.731214337 podStartE2EDuration="19.325917564s" podCreationTimestamp="2026-02-28 13:34:22 +0000 UTC" firstStartedPulling="2026-02-28 13:34:23.937082383 +0000 UTC m=+1078.179403030" lastFinishedPulling="2026-02-28 13:34:39.5317856 +0000 UTC m=+1093.774106257" observedRunningTime="2026-02-28 13:34:41.324604296 +0000 UTC m=+1095.566924953" watchObservedRunningTime="2026-02-28 13:34:41.325917564 +0000 UTC m=+1095.568238221" Feb 28 13:34:52 crc kubenswrapper[4897]: I0228 13:34:52.448086 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-cfsb9" Feb 28 13:34:52 crc kubenswrapper[4897]: I0228 13:34:52.483354 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-fm9lk" Feb 28 13:34:52 crc kubenswrapper[4897]: I0228 13:34:52.576031 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-96lzs" Feb 28 13:34:52 crc kubenswrapper[4897]: I0228 13:34:52.662623 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-4tvzl" Feb 28 13:34:52 crc kubenswrapper[4897]: I0228 13:34:52.736051 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-qjg9q" Feb 28 13:34:52 crc kubenswrapper[4897]: I0228 13:34:52.813275 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-6xfvp" Feb 28 13:34:52 crc kubenswrapper[4897]: I0228 13:34:52.853161 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-wrf59" Feb 28 13:34:52 crc kubenswrapper[4897]: I0228 13:34:52.898152 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-p6pbb" Feb 28 13:34:52 crc kubenswrapper[4897]: I0228 13:34:52.908631 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-hkkvm" Feb 28 13:34:53 crc kubenswrapper[4897]: I0228 13:34:53.045832 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-69dbd6f547-4ng5q" Feb 28 13:34:54 crc kubenswrapper[4897]: I0228 13:34:54.691859 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6532860c-c344-4a74-9189-4382f4865b58-webhook-certs\") pod \"openstack-operator-controller-manager-6d8778d855-4x57f\" (UID: \"6532860c-c344-4a74-9189-4382f4865b58\") " pod="openstack-operators/openstack-operator-controller-manager-6d8778d855-4x57f" Feb 28 13:34:54 crc kubenswrapper[4897]: I0228 13:34:54.692234 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6532860c-c344-4a74-9189-4382f4865b58-metrics-certs\") pod \"openstack-operator-controller-manager-6d8778d855-4x57f\" (UID: \"6532860c-c344-4a74-9189-4382f4865b58\") " pod="openstack-operators/openstack-operator-controller-manager-6d8778d855-4x57f" Feb 28 13:34:54 crc kubenswrapper[4897]: I0228 13:34:54.702924 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6532860c-c344-4a74-9189-4382f4865b58-metrics-certs\") pod \"openstack-operator-controller-manager-6d8778d855-4x57f\" (UID: \"6532860c-c344-4a74-9189-4382f4865b58\") " pod="openstack-operators/openstack-operator-controller-manager-6d8778d855-4x57f" Feb 28 13:34:54 crc kubenswrapper[4897]: I0228 13:34:54.705617 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6532860c-c344-4a74-9189-4382f4865b58-webhook-certs\") pod \"openstack-operator-controller-manager-6d8778d855-4x57f\" (UID: \"6532860c-c344-4a74-9189-4382f4865b58\") " pod="openstack-operators/openstack-operator-controller-manager-6d8778d855-4x57f" Feb 28 13:34:54 crc kubenswrapper[4897]: I0228 13:34:54.897120 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-btbnz" Feb 28 13:34:54 crc kubenswrapper[4897]: I0228 13:34:54.905736 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6d8778d855-4x57f" Feb 28 13:34:59 crc kubenswrapper[4897]: E0228 13:34:59.635215 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:1b9074a4ce16396d8bd2d30a475fc8c2f004f75a023e3eef8950661e89c0bcc6" Feb 28 13:34:59 crc kubenswrapper[4897]: E0228 13:34:59.635799 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:1b9074a4ce16396d8bd2d30a475fc8c2f004f75a023e3eef8950661e89c0bcc6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mjh2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 
8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5fdb694969-6r8pc_openstack-operators(f5e3f361-0ca8-4a8f-8625-8ea90c292ac2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 13:34:59 crc kubenswrapper[4897]: E0228 13:34:59.637068 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-6r8pc" podUID="f5e3f361-0ca8-4a8f-8625-8ea90c292ac2" Feb 28 13:35:00 crc kubenswrapper[4897]: E0228 13:35:00.047462 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 28 13:35:00 crc kubenswrapper[4897]: E0228 13:35:00.047730 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rg5f8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-w6zhg_openstack-operators(216a4a66-0783-4b6c-9884-370bd3a001a4): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 13:35:00 crc kubenswrapper[4897]: E0228 13:35:00.048955 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w6zhg" podUID="216a4a66-0783-4b6c-9884-370bd3a001a4" Feb 28 13:35:00 crc kubenswrapper[4897]: I0228 13:35:00.416290 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-h65l6" event={"ID":"c90ec355-3eb2-43e5-9a39-eed72bb46d1b","Type":"ContainerStarted","Data":"e93e6b07a3d179b63fabe648dfe88c29e673da84d6e6ad313b614c7beca75b57"} Feb 28 13:35:00 crc kubenswrapper[4897]: I0228 13:35:00.417124 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-67d996989d-h65l6" Feb 28 13:35:00 crc kubenswrapper[4897]: I0228 13:35:00.419211 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5bszd" event={"ID":"fe6be473-8403-4c9d-abf6-a7a0251326f9","Type":"ContainerStarted","Data":"134dba0b34d9c32bc464cebb88f51b1c5fe0cd52c96c6ca6258707a3f6db6feb"} Feb 28 13:35:00 crc kubenswrapper[4897]: I0228 13:35:00.419386 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5bszd" Feb 28 13:35:00 crc kubenswrapper[4897]: I0228 13:35:00.421832 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-wqgwr" event={"ID":"a9935d62-a205-4294-a124-313a8437c1ab","Type":"ContainerStarted","Data":"1ca58468ad82156b26ad3be03db4de8c23a9a84d884ade20fea236b901887fbb"} Feb 28 13:35:00 crc 
kubenswrapper[4897]: I0228 13:35:00.422138 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-wqgwr" Feb 28 13:35:00 crc kubenswrapper[4897]: I0228 13:35:00.424224 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-bb7d9" event={"ID":"3bfb71f8-fd2c-4730-af54-601ec4daebaf","Type":"ContainerStarted","Data":"ba7ad88a7c26c09eeb95bbbcd42711ce92750708b5f340d7bee84c3cf8a5a7d0"} Feb 28 13:35:00 crc kubenswrapper[4897]: I0228 13:35:00.424545 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-bb7d9" Feb 28 13:35:00 crc kubenswrapper[4897]: I0228 13:35:00.434780 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-67d996989d-h65l6" podStartSLOduration=2.378451068 podStartE2EDuration="38.43475827s" podCreationTimestamp="2026-02-28 13:34:22 +0000 UTC" firstStartedPulling="2026-02-28 13:34:23.967388091 +0000 UTC m=+1078.209708748" lastFinishedPulling="2026-02-28 13:35:00.023695293 +0000 UTC m=+1114.266015950" observedRunningTime="2026-02-28 13:35:00.432872855 +0000 UTC m=+1114.675193522" watchObservedRunningTime="2026-02-28 13:35:00.43475827 +0000 UTC m=+1114.677078937" Feb 28 13:35:00 crc kubenswrapper[4897]: I0228 13:35:00.450882 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-bb7d9" podStartSLOduration=18.642926435 podStartE2EDuration="38.450862721s" podCreationTimestamp="2026-02-28 13:34:22 +0000 UTC" firstStartedPulling="2026-02-28 13:34:40.227919093 +0000 UTC m=+1094.470239750" lastFinishedPulling="2026-02-28 13:35:00.035855379 +0000 UTC m=+1114.278176036" observedRunningTime="2026-02-28 13:35:00.450492801 +0000 UTC m=+1114.692813468" 
watchObservedRunningTime="2026-02-28 13:35:00.450862721 +0000 UTC m=+1114.693183388" Feb 28 13:35:00 crc kubenswrapper[4897]: I0228 13:35:00.474666 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-wqgwr" podStartSLOduration=2.417579593 podStartE2EDuration="38.474639588s" podCreationTimestamp="2026-02-28 13:34:22 +0000 UTC" firstStartedPulling="2026-02-28 13:34:23.966856795 +0000 UTC m=+1078.209177452" lastFinishedPulling="2026-02-28 13:35:00.02391679 +0000 UTC m=+1114.266237447" observedRunningTime="2026-02-28 13:35:00.473770142 +0000 UTC m=+1114.716090809" watchObservedRunningTime="2026-02-28 13:35:00.474639588 +0000 UTC m=+1114.716960265" Feb 28 13:35:00 crc kubenswrapper[4897]: I0228 13:35:00.509224 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6d8778d855-4x57f"] Feb 28 13:35:00 crc kubenswrapper[4897]: I0228 13:35:00.517171 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5bszd" podStartSLOduration=18.721056954 podStartE2EDuration="38.517152303s" podCreationTimestamp="2026-02-28 13:34:22 +0000 UTC" firstStartedPulling="2026-02-28 13:34:40.227604394 +0000 UTC m=+1094.469925091" lastFinishedPulling="2026-02-28 13:35:00.023699783 +0000 UTC m=+1114.266020440" observedRunningTime="2026-02-28 13:35:00.508989413 +0000 UTC m=+1114.751310070" watchObservedRunningTime="2026-02-28 13:35:00.517152303 +0000 UTC m=+1114.759472970" Feb 28 13:35:00 crc kubenswrapper[4897]: W0228 13:35:00.523449 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6532860c_c344_4a74_9189_4382f4865b58.slice/crio-9ca24322cfc7778ef7adc25f57ab483b769bf34f86ee5f10c6dd144158706ad1 WatchSource:0}: Error finding container 
9ca24322cfc7778ef7adc25f57ab483b769bf34f86ee5f10c6dd144158706ad1: Status 404 returned error can't find the container with id 9ca24322cfc7778ef7adc25f57ab483b769bf34f86ee5f10c6dd144158706ad1 Feb 28 13:35:01 crc kubenswrapper[4897]: I0228 13:35:01.432587 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-d8psr" event={"ID":"5ef2847d-3e11-419b-b34c-3f4cb5643af9","Type":"ContainerStarted","Data":"17fe45c935f933764a801c959d597f523e38402efbd1ac88475bb0af57ed1f05"} Feb 28 13:35:01 crc kubenswrapper[4897]: I0228 13:35:01.432849 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-d8psr" Feb 28 13:35:01 crc kubenswrapper[4897]: I0228 13:35:01.433816 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6d8778d855-4x57f" event={"ID":"6532860c-c344-4a74-9189-4382f4865b58","Type":"ContainerStarted","Data":"d26aeef231c4bc6e2285b064d12dc100831660776f171bc07713d776f7d94114"} Feb 28 13:35:01 crc kubenswrapper[4897]: I0228 13:35:01.433843 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6d8778d855-4x57f" event={"ID":"6532860c-c344-4a74-9189-4382f4865b58","Type":"ContainerStarted","Data":"9ca24322cfc7778ef7adc25f57ab483b769bf34f86ee5f10c6dd144158706ad1"} Feb 28 13:35:01 crc kubenswrapper[4897]: I0228 13:35:01.434743 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6d8778d855-4x57f" Feb 28 13:35:01 crc kubenswrapper[4897]: I0228 13:35:01.435909 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-v7qbm" 
event={"ID":"37408ab3-7514-42a0-92e8-6c2a2710b9f0","Type":"ContainerStarted","Data":"f2ab2ea0dfc9c4dca3b6a329abee46bebb276c865e9b525887557fa1a3de76c5"} Feb 28 13:35:01 crc kubenswrapper[4897]: I0228 13:35:01.436077 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-v7qbm" Feb 28 13:35:01 crc kubenswrapper[4897]: I0228 13:35:01.436907 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-dnpdj" event={"ID":"8c8044b8-c803-4b5b-916f-34c0c03ab619","Type":"ContainerStarted","Data":"20f4c934323c230f47539d1ef81d2ebfb5cf16d37efa6745a77bcf588ba63c01"} Feb 28 13:35:01 crc kubenswrapper[4897]: I0228 13:35:01.437041 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-dnpdj" Feb 28 13:35:01 crc kubenswrapper[4897]: I0228 13:35:01.437989 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pjmt7" event={"ID":"e7498ffc-cb24-44e8-b0cb-4ada46db9e4c","Type":"ContainerStarted","Data":"b2be4f05c7b7ebd3b3f186ec5fe7dc40d65ff6104ab501432c616ab9c5679851"} Feb 28 13:35:01 crc kubenswrapper[4897]: I0228 13:35:01.438193 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pjmt7" Feb 28 13:35:01 crc kubenswrapper[4897]: I0228 13:35:01.439270 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-hgfm4" event={"ID":"a78107ef-804f-476a-98f4-195f52927c3d","Type":"ContainerStarted","Data":"0ac6f9cf3b408a026955df2f276c73487d744f1864ac997db36d727a7eb23208"} Feb 28 13:35:01 crc kubenswrapper[4897]: I0228 13:35:01.439438 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-hgfm4" Feb 28 13:35:01 crc kubenswrapper[4897]: I0228 13:35:01.440210 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54688575f-7lr7s" event={"ID":"3664d59e-945d-4eb5-9443-296e206a1081","Type":"ContainerStarted","Data":"14a348f949ffc640548e86d2562e6e4809615bb352c95f71d647526ca291c43e"} Feb 28 13:35:01 crc kubenswrapper[4897]: I0228 13:35:01.456338 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-d8psr" podStartSLOduration=3.15933016 podStartE2EDuration="40.456324192s" podCreationTimestamp="2026-02-28 13:34:21 +0000 UTC" firstStartedPulling="2026-02-28 13:34:23.407775794 +0000 UTC m=+1077.650096451" lastFinishedPulling="2026-02-28 13:35:00.704769826 +0000 UTC m=+1114.947090483" observedRunningTime="2026-02-28 13:35:01.453708016 +0000 UTC m=+1115.696028673" watchObservedRunningTime="2026-02-28 13:35:01.456324192 +0000 UTC m=+1115.698644849" Feb 28 13:35:01 crc kubenswrapper[4897]: I0228 13:35:01.475945 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pjmt7" podStartSLOduration=3.559078335 podStartE2EDuration="40.475925106s" podCreationTimestamp="2026-02-28 13:34:21 +0000 UTC" firstStartedPulling="2026-02-28 13:34:23.946330314 +0000 UTC m=+1078.188650971" lastFinishedPulling="2026-02-28 13:35:00.863177085 +0000 UTC m=+1115.105497742" observedRunningTime="2026-02-28 13:35:01.473798214 +0000 UTC m=+1115.716118871" watchObservedRunningTime="2026-02-28 13:35:01.475925106 +0000 UTC m=+1115.718245763" Feb 28 13:35:01 crc kubenswrapper[4897]: I0228 13:35:01.485833 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-hgfm4" podStartSLOduration=3.41991196 
podStartE2EDuration="40.485820756s" podCreationTimestamp="2026-02-28 13:34:21 +0000 UTC" firstStartedPulling="2026-02-28 13:34:23.636999716 +0000 UTC m=+1077.879320373" lastFinishedPulling="2026-02-28 13:35:00.702908502 +0000 UTC m=+1114.945229169" observedRunningTime="2026-02-28 13:35:01.484436905 +0000 UTC m=+1115.726757562" watchObservedRunningTime="2026-02-28 13:35:01.485820756 +0000 UTC m=+1115.728141413" Feb 28 13:35:01 crc kubenswrapper[4897]: I0228 13:35:01.503816 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-dnpdj" podStartSLOduration=2.749507011 podStartE2EDuration="39.503798632s" podCreationTimestamp="2026-02-28 13:34:22 +0000 UTC" firstStartedPulling="2026-02-28 13:34:23.947388805 +0000 UTC m=+1078.189709462" lastFinishedPulling="2026-02-28 13:35:00.701680426 +0000 UTC m=+1114.944001083" observedRunningTime="2026-02-28 13:35:01.502324439 +0000 UTC m=+1115.744645096" watchObservedRunningTime="2026-02-28 13:35:01.503798632 +0000 UTC m=+1115.746119289" Feb 28 13:35:01 crc kubenswrapper[4897]: I0228 13:35:01.520695 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-v7qbm" podStartSLOduration=2.646631549 podStartE2EDuration="39.520680207s" podCreationTimestamp="2026-02-28 13:34:22 +0000 UTC" firstStartedPulling="2026-02-28 13:34:23.917078877 +0000 UTC m=+1078.159399534" lastFinishedPulling="2026-02-28 13:35:00.791127535 +0000 UTC m=+1115.033448192" observedRunningTime="2026-02-28 13:35:01.51838933 +0000 UTC m=+1115.760709987" watchObservedRunningTime="2026-02-28 13:35:01.520680207 +0000 UTC m=+1115.763000864" Feb 28 13:35:01 crc kubenswrapper[4897]: I0228 13:35:01.554078 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6d8778d855-4x57f" podStartSLOduration=39.554060254 
podStartE2EDuration="39.554060254s" podCreationTimestamp="2026-02-28 13:34:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:35:01.546806862 +0000 UTC m=+1115.789127519" watchObservedRunningTime="2026-02-28 13:35:01.554060254 +0000 UTC m=+1115.796380911" Feb 28 13:35:02 crc kubenswrapper[4897]: I0228 13:35:02.584983 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-54688575f-7lr7s" Feb 28 13:35:08 crc kubenswrapper[4897]: I0228 13:35:08.372164 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-bb7d9" Feb 28 13:35:08 crc kubenswrapper[4897]: I0228 13:35:08.397860 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-54688575f-7lr7s" podStartSLOduration=9.32629013 podStartE2EDuration="46.397839341s" podCreationTimestamp="2026-02-28 13:34:22 +0000 UTC" firstStartedPulling="2026-02-28 13:34:23.631300799 +0000 UTC m=+1077.873621456" lastFinishedPulling="2026-02-28 13:35:00.70285001 +0000 UTC m=+1114.945170667" observedRunningTime="2026-02-28 13:35:01.57544459 +0000 UTC m=+1115.817765247" watchObservedRunningTime="2026-02-28 13:35:08.397839341 +0000 UTC m=+1122.640159998" Feb 28 13:35:08 crc kubenswrapper[4897]: I0228 13:35:08.482862 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5bszd" Feb 28 13:35:12 crc kubenswrapper[4897]: I0228 13:35:12.586264 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-d8psr" Feb 28 13:35:12 crc kubenswrapper[4897]: I0228 13:35:12.587128 4897 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-54688575f-7lr7s" Feb 28 13:35:12 crc kubenswrapper[4897]: I0228 13:35:12.652692 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-hgfm4" Feb 28 13:35:12 crc kubenswrapper[4897]: I0228 13:35:12.653131 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-wqgwr" Feb 28 13:35:12 crc kubenswrapper[4897]: I0228 13:35:12.691194 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pjmt7" Feb 28 13:35:12 crc kubenswrapper[4897]: I0228 13:35:12.798804 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-67d996989d-h65l6" Feb 28 13:35:12 crc kubenswrapper[4897]: I0228 13:35:12.975054 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-dnpdj" Feb 28 13:35:13 crc kubenswrapper[4897]: I0228 13:35:13.031585 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-v7qbm" Feb 28 13:35:13 crc kubenswrapper[4897]: E0228 13:35:13.458940 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w6zhg" podUID="216a4a66-0783-4b6c-9884-370bd3a001a4" Feb 28 13:35:14 crc kubenswrapper[4897]: E0228 13:35:14.459126 4897 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:1b9074a4ce16396d8bd2d30a475fc8c2f004f75a023e3eef8950661e89c0bcc6\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-6r8pc" podUID="f5e3f361-0ca8-4a8f-8625-8ea90c292ac2" Feb 28 13:35:14 crc kubenswrapper[4897]: I0228 13:35:14.912229 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6d8778d855-4x57f" Feb 28 13:35:25 crc kubenswrapper[4897]: I0228 13:35:25.633093 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w6zhg" event={"ID":"216a4a66-0783-4b6c-9884-370bd3a001a4","Type":"ContainerStarted","Data":"19f389a85021b19711f3f7a604615b95d3f6331ae76f4248610b87d3bf80fdbc"} Feb 28 13:35:25 crc kubenswrapper[4897]: I0228 13:35:25.648871 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w6zhg" podStartSLOduration=2.675277078 podStartE2EDuration="1m3.648852505s" podCreationTimestamp="2026-02-28 13:34:22 +0000 UTC" firstStartedPulling="2026-02-28 13:34:23.97181653 +0000 UTC m=+1078.214137187" lastFinishedPulling="2026-02-28 13:35:24.945391947 +0000 UTC m=+1139.187712614" observedRunningTime="2026-02-28 13:35:25.647109254 +0000 UTC m=+1139.889429911" watchObservedRunningTime="2026-02-28 13:35:25.648852505 +0000 UTC m=+1139.891173162" Feb 28 13:35:27 crc kubenswrapper[4897]: I0228 13:35:27.648226 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-6r8pc" event={"ID":"f5e3f361-0ca8-4a8f-8625-8ea90c292ac2","Type":"ContainerStarted","Data":"eb1df007d015a4c5e11b574d4b9fa3669d64bc5334951bdb4376e8290f7b1cb0"} Feb 28 13:35:27 crc kubenswrapper[4897]: I0228 13:35:27.648877 
4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-6r8pc" Feb 28 13:35:27 crc kubenswrapper[4897]: I0228 13:35:27.671417 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-6r8pc" podStartSLOduration=2.7395253200000003 podStartE2EDuration="1m5.671402098s" podCreationTimestamp="2026-02-28 13:34:22 +0000 UTC" firstStartedPulling="2026-02-28 13:34:23.983926265 +0000 UTC m=+1078.226246922" lastFinishedPulling="2026-02-28 13:35:26.915803023 +0000 UTC m=+1141.158123700" observedRunningTime="2026-02-28 13:35:27.667153714 +0000 UTC m=+1141.909474381" watchObservedRunningTime="2026-02-28 13:35:27.671402098 +0000 UTC m=+1141.913722755" Feb 28 13:35:32 crc kubenswrapper[4897]: I0228 13:35:32.706401 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-6r8pc" Feb 28 13:35:33 crc kubenswrapper[4897]: I0228 13:35:33.371770 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 13:35:33 crc kubenswrapper[4897]: I0228 13:35:33.372561 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 13:35:51 crc kubenswrapper[4897]: I0228 13:35:51.826672 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78ff9dfd65-m7448"] Feb 28 13:35:51 crc kubenswrapper[4897]: I0228 
13:35:51.828332 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78ff9dfd65-m7448" Feb 28 13:35:51 crc kubenswrapper[4897]: I0228 13:35:51.838383 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 28 13:35:51 crc kubenswrapper[4897]: I0228 13:35:51.838617 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 28 13:35:51 crc kubenswrapper[4897]: I0228 13:35:51.838842 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-z87bd" Feb 28 13:35:51 crc kubenswrapper[4897]: I0228 13:35:51.839049 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 28 13:35:51 crc kubenswrapper[4897]: I0228 13:35:51.844367 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78ff9dfd65-m7448"] Feb 28 13:35:51 crc kubenswrapper[4897]: I0228 13:35:51.896160 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sxnp\" (UniqueName: \"kubernetes.io/projected/b338c6b2-22f6-456c-8697-ca754996cd0d-kube-api-access-9sxnp\") pod \"dnsmasq-dns-78ff9dfd65-m7448\" (UID: \"b338c6b2-22f6-456c-8697-ca754996cd0d\") " pod="openstack/dnsmasq-dns-78ff9dfd65-m7448" Feb 28 13:35:51 crc kubenswrapper[4897]: I0228 13:35:51.896220 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b338c6b2-22f6-456c-8697-ca754996cd0d-config\") pod \"dnsmasq-dns-78ff9dfd65-m7448\" (UID: \"b338c6b2-22f6-456c-8697-ca754996cd0d\") " pod="openstack/dnsmasq-dns-78ff9dfd65-m7448" Feb 28 13:35:51 crc kubenswrapper[4897]: I0228 13:35:51.901666 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-574fdb7f99-vqtfc"] Feb 28 13:35:51 crc kubenswrapper[4897]: I0228 13:35:51.903015 
4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-574fdb7f99-vqtfc" Feb 28 13:35:51 crc kubenswrapper[4897]: I0228 13:35:51.907621 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 28 13:35:51 crc kubenswrapper[4897]: I0228 13:35:51.917214 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-574fdb7f99-vqtfc"] Feb 28 13:35:51 crc kubenswrapper[4897]: I0228 13:35:51.997123 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sxnp\" (UniqueName: \"kubernetes.io/projected/b338c6b2-22f6-456c-8697-ca754996cd0d-kube-api-access-9sxnp\") pod \"dnsmasq-dns-78ff9dfd65-m7448\" (UID: \"b338c6b2-22f6-456c-8697-ca754996cd0d\") " pod="openstack/dnsmasq-dns-78ff9dfd65-m7448" Feb 28 13:35:51 crc kubenswrapper[4897]: I0228 13:35:51.997189 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b338c6b2-22f6-456c-8697-ca754996cd0d-config\") pod \"dnsmasq-dns-78ff9dfd65-m7448\" (UID: \"b338c6b2-22f6-456c-8697-ca754996cd0d\") " pod="openstack/dnsmasq-dns-78ff9dfd65-m7448" Feb 28 13:35:51 crc kubenswrapper[4897]: I0228 13:35:51.997283 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdd35241-336f-4314-b048-5a046957eadf-dns-svc\") pod \"dnsmasq-dns-574fdb7f99-vqtfc\" (UID: \"fdd35241-336f-4314-b048-5a046957eadf\") " pod="openstack/dnsmasq-dns-574fdb7f99-vqtfc" Feb 28 13:35:51 crc kubenswrapper[4897]: I0228 13:35:51.997351 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njd6q\" (UniqueName: \"kubernetes.io/projected/fdd35241-336f-4314-b048-5a046957eadf-kube-api-access-njd6q\") pod \"dnsmasq-dns-574fdb7f99-vqtfc\" (UID: \"fdd35241-336f-4314-b048-5a046957eadf\") " 
pod="openstack/dnsmasq-dns-574fdb7f99-vqtfc" Feb 28 13:35:51 crc kubenswrapper[4897]: I0228 13:35:51.997392 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdd35241-336f-4314-b048-5a046957eadf-config\") pod \"dnsmasq-dns-574fdb7f99-vqtfc\" (UID: \"fdd35241-336f-4314-b048-5a046957eadf\") " pod="openstack/dnsmasq-dns-574fdb7f99-vqtfc" Feb 28 13:35:51 crc kubenswrapper[4897]: I0228 13:35:51.998081 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b338c6b2-22f6-456c-8697-ca754996cd0d-config\") pod \"dnsmasq-dns-78ff9dfd65-m7448\" (UID: \"b338c6b2-22f6-456c-8697-ca754996cd0d\") " pod="openstack/dnsmasq-dns-78ff9dfd65-m7448" Feb 28 13:35:52 crc kubenswrapper[4897]: I0228 13:35:52.020008 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sxnp\" (UniqueName: \"kubernetes.io/projected/b338c6b2-22f6-456c-8697-ca754996cd0d-kube-api-access-9sxnp\") pod \"dnsmasq-dns-78ff9dfd65-m7448\" (UID: \"b338c6b2-22f6-456c-8697-ca754996cd0d\") " pod="openstack/dnsmasq-dns-78ff9dfd65-m7448" Feb 28 13:35:52 crc kubenswrapper[4897]: I0228 13:35:52.098846 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdd35241-336f-4314-b048-5a046957eadf-config\") pod \"dnsmasq-dns-574fdb7f99-vqtfc\" (UID: \"fdd35241-336f-4314-b048-5a046957eadf\") " pod="openstack/dnsmasq-dns-574fdb7f99-vqtfc" Feb 28 13:35:52 crc kubenswrapper[4897]: I0228 13:35:52.098956 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdd35241-336f-4314-b048-5a046957eadf-dns-svc\") pod \"dnsmasq-dns-574fdb7f99-vqtfc\" (UID: \"fdd35241-336f-4314-b048-5a046957eadf\") " pod="openstack/dnsmasq-dns-574fdb7f99-vqtfc" Feb 28 13:35:52 crc kubenswrapper[4897]: I0228 
13:35:52.098997 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njd6q\" (UniqueName: \"kubernetes.io/projected/fdd35241-336f-4314-b048-5a046957eadf-kube-api-access-njd6q\") pod \"dnsmasq-dns-574fdb7f99-vqtfc\" (UID: \"fdd35241-336f-4314-b048-5a046957eadf\") " pod="openstack/dnsmasq-dns-574fdb7f99-vqtfc" Feb 28 13:35:52 crc kubenswrapper[4897]: I0228 13:35:52.099729 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdd35241-336f-4314-b048-5a046957eadf-config\") pod \"dnsmasq-dns-574fdb7f99-vqtfc\" (UID: \"fdd35241-336f-4314-b048-5a046957eadf\") " pod="openstack/dnsmasq-dns-574fdb7f99-vqtfc" Feb 28 13:35:52 crc kubenswrapper[4897]: I0228 13:35:52.099761 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdd35241-336f-4314-b048-5a046957eadf-dns-svc\") pod \"dnsmasq-dns-574fdb7f99-vqtfc\" (UID: \"fdd35241-336f-4314-b048-5a046957eadf\") " pod="openstack/dnsmasq-dns-574fdb7f99-vqtfc" Feb 28 13:35:52 crc kubenswrapper[4897]: I0228 13:35:52.118509 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njd6q\" (UniqueName: \"kubernetes.io/projected/fdd35241-336f-4314-b048-5a046957eadf-kube-api-access-njd6q\") pod \"dnsmasq-dns-574fdb7f99-vqtfc\" (UID: \"fdd35241-336f-4314-b048-5a046957eadf\") " pod="openstack/dnsmasq-dns-574fdb7f99-vqtfc" Feb 28 13:35:52 crc kubenswrapper[4897]: I0228 13:35:52.153157 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78ff9dfd65-m7448" Feb 28 13:35:52 crc kubenswrapper[4897]: I0228 13:35:52.220589 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-574fdb7f99-vqtfc" Feb 28 13:35:52 crc kubenswrapper[4897]: I0228 13:35:52.594868 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78ff9dfd65-m7448"] Feb 28 13:35:52 crc kubenswrapper[4897]: I0228 13:35:52.689435 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-574fdb7f99-vqtfc"] Feb 28 13:35:52 crc kubenswrapper[4897]: I0228 13:35:52.848006 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78ff9dfd65-m7448" event={"ID":"b338c6b2-22f6-456c-8697-ca754996cd0d","Type":"ContainerStarted","Data":"07147d418d3671234529eb5800e77d43b4718d027a67a68ae6100b45c392b902"} Feb 28 13:35:52 crc kubenswrapper[4897]: I0228 13:35:52.849511 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-574fdb7f99-vqtfc" event={"ID":"fdd35241-336f-4314-b048-5a046957eadf","Type":"ContainerStarted","Data":"689c1b5a0002098fcbae6587f4e0a405d5fe7b87f0840637507aa0cd146c33a4"} Feb 28 13:35:55 crc kubenswrapper[4897]: I0228 13:35:55.311076 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-574fdb7f99-vqtfc"] Feb 28 13:35:55 crc kubenswrapper[4897]: I0228 13:35:55.346836 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f49bcf4c9-fvv8t"] Feb 28 13:35:55 crc kubenswrapper[4897]: I0228 13:35:55.348069 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f49bcf4c9-fvv8t" Feb 28 13:35:55 crc kubenswrapper[4897]: I0228 13:35:55.379295 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f49bcf4c9-fvv8t"] Feb 28 13:35:55 crc kubenswrapper[4897]: I0228 13:35:55.452817 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c2f7b4a-52e1-4c07-8783-1ab96747f5bb-config\") pod \"dnsmasq-dns-6f49bcf4c9-fvv8t\" (UID: \"5c2f7b4a-52e1-4c07-8783-1ab96747f5bb\") " pod="openstack/dnsmasq-dns-6f49bcf4c9-fvv8t" Feb 28 13:35:55 crc kubenswrapper[4897]: I0228 13:35:55.452912 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n78lj\" (UniqueName: \"kubernetes.io/projected/5c2f7b4a-52e1-4c07-8783-1ab96747f5bb-kube-api-access-n78lj\") pod \"dnsmasq-dns-6f49bcf4c9-fvv8t\" (UID: \"5c2f7b4a-52e1-4c07-8783-1ab96747f5bb\") " pod="openstack/dnsmasq-dns-6f49bcf4c9-fvv8t" Feb 28 13:35:55 crc kubenswrapper[4897]: I0228 13:35:55.452958 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c2f7b4a-52e1-4c07-8783-1ab96747f5bb-dns-svc\") pod \"dnsmasq-dns-6f49bcf4c9-fvv8t\" (UID: \"5c2f7b4a-52e1-4c07-8783-1ab96747f5bb\") " pod="openstack/dnsmasq-dns-6f49bcf4c9-fvv8t" Feb 28 13:35:55 crc kubenswrapper[4897]: I0228 13:35:55.554690 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n78lj\" (UniqueName: \"kubernetes.io/projected/5c2f7b4a-52e1-4c07-8783-1ab96747f5bb-kube-api-access-n78lj\") pod \"dnsmasq-dns-6f49bcf4c9-fvv8t\" (UID: \"5c2f7b4a-52e1-4c07-8783-1ab96747f5bb\") " pod="openstack/dnsmasq-dns-6f49bcf4c9-fvv8t" Feb 28 13:35:55 crc kubenswrapper[4897]: I0228 13:35:55.556439 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c2f7b4a-52e1-4c07-8783-1ab96747f5bb-dns-svc\") pod \"dnsmasq-dns-6f49bcf4c9-fvv8t\" (UID: \"5c2f7b4a-52e1-4c07-8783-1ab96747f5bb\") " pod="openstack/dnsmasq-dns-6f49bcf4c9-fvv8t" Feb 28 13:35:55 crc kubenswrapper[4897]: I0228 13:35:55.557151 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c2f7b4a-52e1-4c07-8783-1ab96747f5bb-dns-svc\") pod \"dnsmasq-dns-6f49bcf4c9-fvv8t\" (UID: \"5c2f7b4a-52e1-4c07-8783-1ab96747f5bb\") " pod="openstack/dnsmasq-dns-6f49bcf4c9-fvv8t" Feb 28 13:35:55 crc kubenswrapper[4897]: I0228 13:35:55.557248 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c2f7b4a-52e1-4c07-8783-1ab96747f5bb-config\") pod \"dnsmasq-dns-6f49bcf4c9-fvv8t\" (UID: \"5c2f7b4a-52e1-4c07-8783-1ab96747f5bb\") " pod="openstack/dnsmasq-dns-6f49bcf4c9-fvv8t" Feb 28 13:35:55 crc kubenswrapper[4897]: I0228 13:35:55.558169 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c2f7b4a-52e1-4c07-8783-1ab96747f5bb-config\") pod \"dnsmasq-dns-6f49bcf4c9-fvv8t\" (UID: \"5c2f7b4a-52e1-4c07-8783-1ab96747f5bb\") " pod="openstack/dnsmasq-dns-6f49bcf4c9-fvv8t" Feb 28 13:35:55 crc kubenswrapper[4897]: I0228 13:35:55.572915 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n78lj\" (UniqueName: \"kubernetes.io/projected/5c2f7b4a-52e1-4c07-8783-1ab96747f5bb-kube-api-access-n78lj\") pod \"dnsmasq-dns-6f49bcf4c9-fvv8t\" (UID: \"5c2f7b4a-52e1-4c07-8783-1ab96747f5bb\") " pod="openstack/dnsmasq-dns-6f49bcf4c9-fvv8t" Feb 28 13:35:55 crc kubenswrapper[4897]: I0228 13:35:55.616056 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78ff9dfd65-m7448"] Feb 28 13:35:55 crc kubenswrapper[4897]: I0228 13:35:55.653972 4897 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/dnsmasq-dns-5877d9b675-mck2z"] Feb 28 13:35:55 crc kubenswrapper[4897]: I0228 13:35:55.655130 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5877d9b675-mck2z" Feb 28 13:35:55 crc kubenswrapper[4897]: I0228 13:35:55.680252 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5877d9b675-mck2z"] Feb 28 13:35:55 crc kubenswrapper[4897]: I0228 13:35:55.691893 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f49bcf4c9-fvv8t" Feb 28 13:35:55 crc kubenswrapper[4897]: I0228 13:35:55.761012 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks8l9\" (UniqueName: \"kubernetes.io/projected/2a153a7b-9a0b-43c0-a8a7-dc1aea952c28-kube-api-access-ks8l9\") pod \"dnsmasq-dns-5877d9b675-mck2z\" (UID: \"2a153a7b-9a0b-43c0-a8a7-dc1aea952c28\") " pod="openstack/dnsmasq-dns-5877d9b675-mck2z" Feb 28 13:35:55 crc kubenswrapper[4897]: I0228 13:35:55.761070 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a153a7b-9a0b-43c0-a8a7-dc1aea952c28-dns-svc\") pod \"dnsmasq-dns-5877d9b675-mck2z\" (UID: \"2a153a7b-9a0b-43c0-a8a7-dc1aea952c28\") " pod="openstack/dnsmasq-dns-5877d9b675-mck2z" Feb 28 13:35:55 crc kubenswrapper[4897]: I0228 13:35:55.761166 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a153a7b-9a0b-43c0-a8a7-dc1aea952c28-config\") pod \"dnsmasq-dns-5877d9b675-mck2z\" (UID: \"2a153a7b-9a0b-43c0-a8a7-dc1aea952c28\") " pod="openstack/dnsmasq-dns-5877d9b675-mck2z" Feb 28 13:35:55 crc kubenswrapper[4897]: I0228 13:35:55.862517 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks8l9\" (UniqueName: 
\"kubernetes.io/projected/2a153a7b-9a0b-43c0-a8a7-dc1aea952c28-kube-api-access-ks8l9\") pod \"dnsmasq-dns-5877d9b675-mck2z\" (UID: \"2a153a7b-9a0b-43c0-a8a7-dc1aea952c28\") " pod="openstack/dnsmasq-dns-5877d9b675-mck2z" Feb 28 13:35:55 crc kubenswrapper[4897]: I0228 13:35:55.862873 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a153a7b-9a0b-43c0-a8a7-dc1aea952c28-dns-svc\") pod \"dnsmasq-dns-5877d9b675-mck2z\" (UID: \"2a153a7b-9a0b-43c0-a8a7-dc1aea952c28\") " pod="openstack/dnsmasq-dns-5877d9b675-mck2z" Feb 28 13:35:55 crc kubenswrapper[4897]: I0228 13:35:55.862992 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a153a7b-9a0b-43c0-a8a7-dc1aea952c28-config\") pod \"dnsmasq-dns-5877d9b675-mck2z\" (UID: \"2a153a7b-9a0b-43c0-a8a7-dc1aea952c28\") " pod="openstack/dnsmasq-dns-5877d9b675-mck2z" Feb 28 13:35:55 crc kubenswrapper[4897]: I0228 13:35:55.863982 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a153a7b-9a0b-43c0-a8a7-dc1aea952c28-dns-svc\") pod \"dnsmasq-dns-5877d9b675-mck2z\" (UID: \"2a153a7b-9a0b-43c0-a8a7-dc1aea952c28\") " pod="openstack/dnsmasq-dns-5877d9b675-mck2z" Feb 28 13:35:55 crc kubenswrapper[4897]: I0228 13:35:55.863982 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a153a7b-9a0b-43c0-a8a7-dc1aea952c28-config\") pod \"dnsmasq-dns-5877d9b675-mck2z\" (UID: \"2a153a7b-9a0b-43c0-a8a7-dc1aea952c28\") " pod="openstack/dnsmasq-dns-5877d9b675-mck2z" Feb 28 13:35:55 crc kubenswrapper[4897]: I0228 13:35:55.892275 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5877d9b675-mck2z"] Feb 28 13:35:55 crc kubenswrapper[4897]: E0228 13:35:55.892740 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted 
volumes=[kube-api-access-ks8l9], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-5877d9b675-mck2z" podUID="2a153a7b-9a0b-43c0-a8a7-dc1aea952c28" Feb 28 13:35:55 crc kubenswrapper[4897]: I0228 13:35:55.902675 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks8l9\" (UniqueName: \"kubernetes.io/projected/2a153a7b-9a0b-43c0-a8a7-dc1aea952c28-kube-api-access-ks8l9\") pod \"dnsmasq-dns-5877d9b675-mck2z\" (UID: \"2a153a7b-9a0b-43c0-a8a7-dc1aea952c28\") " pod="openstack/dnsmasq-dns-5877d9b675-mck2z" Feb 28 13:35:55 crc kubenswrapper[4897]: I0228 13:35:55.907024 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7587d7df99-vz298"] Feb 28 13:35:55 crc kubenswrapper[4897]: I0228 13:35:55.909509 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7587d7df99-vz298" Feb 28 13:35:55 crc kubenswrapper[4897]: I0228 13:35:55.944939 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7587d7df99-vz298"] Feb 28 13:35:55 crc kubenswrapper[4897]: I0228 13:35:55.965122 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw5wn\" (UniqueName: \"kubernetes.io/projected/bdcbdd69-5241-4875-aceb-401d47d6fad5-kube-api-access-rw5wn\") pod \"dnsmasq-dns-7587d7df99-vz298\" (UID: \"bdcbdd69-5241-4875-aceb-401d47d6fad5\") " pod="openstack/dnsmasq-dns-7587d7df99-vz298" Feb 28 13:35:55 crc kubenswrapper[4897]: I0228 13:35:55.968101 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdcbdd69-5241-4875-aceb-401d47d6fad5-dns-svc\") pod \"dnsmasq-dns-7587d7df99-vz298\" (UID: \"bdcbdd69-5241-4875-aceb-401d47d6fad5\") " pod="openstack/dnsmasq-dns-7587d7df99-vz298" Feb 28 13:35:55 crc kubenswrapper[4897]: I0228 13:35:55.968148 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdcbdd69-5241-4875-aceb-401d47d6fad5-config\") pod \"dnsmasq-dns-7587d7df99-vz298\" (UID: \"bdcbdd69-5241-4875-aceb-401d47d6fad5\") " pod="openstack/dnsmasq-dns-7587d7df99-vz298" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.069356 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rw5wn\" (UniqueName: \"kubernetes.io/projected/bdcbdd69-5241-4875-aceb-401d47d6fad5-kube-api-access-rw5wn\") pod \"dnsmasq-dns-7587d7df99-vz298\" (UID: \"bdcbdd69-5241-4875-aceb-401d47d6fad5\") " pod="openstack/dnsmasq-dns-7587d7df99-vz298" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.069818 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdcbdd69-5241-4875-aceb-401d47d6fad5-dns-svc\") pod \"dnsmasq-dns-7587d7df99-vz298\" (UID: \"bdcbdd69-5241-4875-aceb-401d47d6fad5\") " pod="openstack/dnsmasq-dns-7587d7df99-vz298" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.070869 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdcbdd69-5241-4875-aceb-401d47d6fad5-config\") pod \"dnsmasq-dns-7587d7df99-vz298\" (UID: \"bdcbdd69-5241-4875-aceb-401d47d6fad5\") " pod="openstack/dnsmasq-dns-7587d7df99-vz298" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.070793 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdcbdd69-5241-4875-aceb-401d47d6fad5-dns-svc\") pod \"dnsmasq-dns-7587d7df99-vz298\" (UID: \"bdcbdd69-5241-4875-aceb-401d47d6fad5\") " pod="openstack/dnsmasq-dns-7587d7df99-vz298" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.071491 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/bdcbdd69-5241-4875-aceb-401d47d6fad5-config\") pod \"dnsmasq-dns-7587d7df99-vz298\" (UID: \"bdcbdd69-5241-4875-aceb-401d47d6fad5\") " pod="openstack/dnsmasq-dns-7587d7df99-vz298" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.085541 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw5wn\" (UniqueName: \"kubernetes.io/projected/bdcbdd69-5241-4875-aceb-401d47d6fad5-kube-api-access-rw5wn\") pod \"dnsmasq-dns-7587d7df99-vz298\" (UID: \"bdcbdd69-5241-4875-aceb-401d47d6fad5\") " pod="openstack/dnsmasq-dns-7587d7df99-vz298" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.248828 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7587d7df99-vz298" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.499041 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.500194 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.505018 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.506814 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.509002 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.509066 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.509299 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.509400 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-pnfj4" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.512146 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.519158 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.579563 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5a792d6c-3a28-4775-87bf-b099ea550a00-config-data\") pod \"rabbitmq-server-0\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " pod="openstack/rabbitmq-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.579602 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/5a792d6c-3a28-4775-87bf-b099ea550a00-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " pod="openstack/rabbitmq-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.579663 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5a792d6c-3a28-4775-87bf-b099ea550a00-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " pod="openstack/rabbitmq-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.579715 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " pod="openstack/rabbitmq-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.579734 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5a792d6c-3a28-4775-87bf-b099ea550a00-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " pod="openstack/rabbitmq-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.579770 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5a792d6c-3a28-4775-87bf-b099ea550a00-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " pod="openstack/rabbitmq-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.579799 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5a792d6c-3a28-4775-87bf-b099ea550a00-plugins-conf\") pod 
\"rabbitmq-server-0\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " pod="openstack/rabbitmq-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.579814 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9vbj\" (UniqueName: \"kubernetes.io/projected/5a792d6c-3a28-4775-87bf-b099ea550a00-kube-api-access-j9vbj\") pod \"rabbitmq-server-0\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " pod="openstack/rabbitmq-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.579839 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5a792d6c-3a28-4775-87bf-b099ea550a00-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " pod="openstack/rabbitmq-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.579876 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5a792d6c-3a28-4775-87bf-b099ea550a00-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " pod="openstack/rabbitmq-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.579898 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5a792d6c-3a28-4775-87bf-b099ea550a00-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " pod="openstack/rabbitmq-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.681623 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5a792d6c-3a28-4775-87bf-b099ea550a00-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " 
pod="openstack/rabbitmq-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.681706 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " pod="openstack/rabbitmq-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.681726 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5a792d6c-3a28-4775-87bf-b099ea550a00-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " pod="openstack/rabbitmq-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.682036 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.682161 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5a792d6c-3a28-4775-87bf-b099ea550a00-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " pod="openstack/rabbitmq-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.682182 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5a792d6c-3a28-4775-87bf-b099ea550a00-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " pod="openstack/rabbitmq-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.682203 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-j9vbj\" (UniqueName: \"kubernetes.io/projected/5a792d6c-3a28-4775-87bf-b099ea550a00-kube-api-access-j9vbj\") pod \"rabbitmq-server-0\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " pod="openstack/rabbitmq-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.682219 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5a792d6c-3a28-4775-87bf-b099ea550a00-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " pod="openstack/rabbitmq-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.682242 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5a792d6c-3a28-4775-87bf-b099ea550a00-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " pod="openstack/rabbitmq-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.682267 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5a792d6c-3a28-4775-87bf-b099ea550a00-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " pod="openstack/rabbitmq-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.682301 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5a792d6c-3a28-4775-87bf-b099ea550a00-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " pod="openstack/rabbitmq-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.682341 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5a792d6c-3a28-4775-87bf-b099ea550a00-config-data\") 
pod \"rabbitmq-server-0\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " pod="openstack/rabbitmq-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.683006 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5a792d6c-3a28-4775-87bf-b099ea550a00-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " pod="openstack/rabbitmq-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.683149 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5a792d6c-3a28-4775-87bf-b099ea550a00-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " pod="openstack/rabbitmq-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.683394 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5a792d6c-3a28-4775-87bf-b099ea550a00-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " pod="openstack/rabbitmq-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.683694 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5a792d6c-3a28-4775-87bf-b099ea550a00-config-data\") pod \"rabbitmq-server-0\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " pod="openstack/rabbitmq-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.684591 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5a792d6c-3a28-4775-87bf-b099ea550a00-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " pod="openstack/rabbitmq-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.686804 4897 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5a792d6c-3a28-4775-87bf-b099ea550a00-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " pod="openstack/rabbitmq-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.687040 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5a792d6c-3a28-4775-87bf-b099ea550a00-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " pod="openstack/rabbitmq-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.688159 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5a792d6c-3a28-4775-87bf-b099ea550a00-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " pod="openstack/rabbitmq-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.694898 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5a792d6c-3a28-4775-87bf-b099ea550a00-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " pod="openstack/rabbitmq-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.706361 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9vbj\" (UniqueName: \"kubernetes.io/projected/5a792d6c-3a28-4775-87bf-b099ea550a00-kube-api-access-j9vbj\") pod \"rabbitmq-server-0\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " pod="openstack/rabbitmq-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.709986 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: 
\"5a792d6c-3a28-4775-87bf-b099ea550a00\") " pod="openstack/rabbitmq-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.776116 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.777174 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.779754 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.780239 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.780428 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.780587 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-t2zvl" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.780767 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.780921 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.781098 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.828322 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.835563 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.884909 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.884965 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.885000 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwxsd\" (UniqueName: \"kubernetes.io/projected/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-kube-api-access-rwxsd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.885028 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.885050 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.885076 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.885281 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.885319 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.885344 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.885380 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.885398 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.893167 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5877d9b675-mck2z" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.900928 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5877d9b675-mck2z" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.986753 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks8l9\" (UniqueName: \"kubernetes.io/projected/2a153a7b-9a0b-43c0-a8a7-dc1aea952c28-kube-api-access-ks8l9\") pod \"2a153a7b-9a0b-43c0-a8a7-dc1aea952c28\" (UID: \"2a153a7b-9a0b-43c0-a8a7-dc1aea952c28\") " Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.986810 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a153a7b-9a0b-43c0-a8a7-dc1aea952c28-config\") pod \"2a153a7b-9a0b-43c0-a8a7-dc1aea952c28\" (UID: \"2a153a7b-9a0b-43c0-a8a7-dc1aea952c28\") " Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.986919 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a153a7b-9a0b-43c0-a8a7-dc1aea952c28-dns-svc\") pod \"2a153a7b-9a0b-43c0-a8a7-dc1aea952c28\" (UID: 
\"2a153a7b-9a0b-43c0-a8a7-dc1aea952c28\") " Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.987079 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwxsd\" (UniqueName: \"kubernetes.io/projected/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-kube-api-access-rwxsd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.987117 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.987143 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.987169 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.987187 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.987208 4897 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.987226 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.987257 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.987276 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.987323 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.987345 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" 
(UniqueName: \"kubernetes.io/downward-api/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.988047 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.988524 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a153a7b-9a0b-43c0-a8a7-dc1aea952c28-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2a153a7b-9a0b-43c0-a8a7-dc1aea952c28" (UID: "2a153a7b-9a0b-43c0-a8a7-dc1aea952c28"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.988661 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.989297 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.989835 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.990146 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.990400 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a153a7b-9a0b-43c0-a8a7-dc1aea952c28-config" (OuterVolumeSpecName: "config") pod "2a153a7b-9a0b-43c0-a8a7-dc1aea952c28" (UID: "2a153a7b-9a0b-43c0-a8a7-dc1aea952c28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.991266 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.992734 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.993880 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a153a7b-9a0b-43c0-a8a7-dc1aea952c28-kube-api-access-ks8l9" (OuterVolumeSpecName: "kube-api-access-ks8l9") pod 
"2a153a7b-9a0b-43c0-a8a7-dc1aea952c28" (UID: "2a153a7b-9a0b-43c0-a8a7-dc1aea952c28"). InnerVolumeSpecName "kube-api-access-ks8l9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.994962 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:35:56 crc kubenswrapper[4897]: I0228 13:35:56.995835 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.006553 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.006784 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.008392 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwxsd\" (UniqueName: \"kubernetes.io/projected/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-kube-api-access-rwxsd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.043521 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/notifications-rabbitmq-server-0"] Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.044678 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/notifications-rabbitmq-server-0" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.047800 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-server-dockercfg-mlqgh" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.048039 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-config-data" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.048160 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-server-conf" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.048265 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-plugins-conf" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.048467 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-default-user" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.048598 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-notifications-rabbitmq-svc" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.048732 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-erlang-cookie" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.066911 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/notifications-rabbitmq-server-0"] Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.089842 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-5pp5v\" (UniqueName: \"kubernetes.io/projected/48885530-3df1-42cf-9c7f-2f86a21026a9-kube-api-access-5pp5v\") pod \"notifications-rabbitmq-server-0\" (UID: \"48885530-3df1-42cf-9c7f-2f86a21026a9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.090421 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/48885530-3df1-42cf-9c7f-2f86a21026a9-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"48885530-3df1-42cf-9c7f-2f86a21026a9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.090776 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/48885530-3df1-42cf-9c7f-2f86a21026a9-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"48885530-3df1-42cf-9c7f-2f86a21026a9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.091067 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"48885530-3df1-42cf-9c7f-2f86a21026a9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.091393 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/48885530-3df1-42cf-9c7f-2f86a21026a9-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"48885530-3df1-42cf-9c7f-2f86a21026a9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.091750 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/48885530-3df1-42cf-9c7f-2f86a21026a9-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"48885530-3df1-42cf-9c7f-2f86a21026a9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.092073 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/48885530-3df1-42cf-9c7f-2f86a21026a9-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"48885530-3df1-42cf-9c7f-2f86a21026a9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.092340 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/48885530-3df1-42cf-9c7f-2f86a21026a9-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"48885530-3df1-42cf-9c7f-2f86a21026a9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.092621 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/48885530-3df1-42cf-9c7f-2f86a21026a9-pod-info\") pod \"notifications-rabbitmq-server-0\" (UID: \"48885530-3df1-42cf-9c7f-2f86a21026a9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.092873 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/48885530-3df1-42cf-9c7f-2f86a21026a9-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"48885530-3df1-42cf-9c7f-2f86a21026a9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 28 13:35:57 
crc kubenswrapper[4897]: I0228 13:35:57.093051 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/48885530-3df1-42cf-9c7f-2f86a21026a9-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"48885530-3df1-42cf-9c7f-2f86a21026a9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.093524 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ks8l9\" (UniqueName: \"kubernetes.io/projected/2a153a7b-9a0b-43c0-a8a7-dc1aea952c28-kube-api-access-ks8l9\") on node \"crc\" DevicePath \"\"" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.093728 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a153a7b-9a0b-43c0-a8a7-dc1aea952c28-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.093902 4897 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a153a7b-9a0b-43c0-a8a7-dc1aea952c28-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.149064 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.195544 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/48885530-3df1-42cf-9c7f-2f86a21026a9-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"48885530-3df1-42cf-9c7f-2f86a21026a9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.195627 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"48885530-3df1-42cf-9c7f-2f86a21026a9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.195696 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/48885530-3df1-42cf-9c7f-2f86a21026a9-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"48885530-3df1-42cf-9c7f-2f86a21026a9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.195748 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/48885530-3df1-42cf-9c7f-2f86a21026a9-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"48885530-3df1-42cf-9c7f-2f86a21026a9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.195786 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/48885530-3df1-42cf-9c7f-2f86a21026a9-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"48885530-3df1-42cf-9c7f-2f86a21026a9\") " 
pod="openstack/notifications-rabbitmq-server-0" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.195827 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/48885530-3df1-42cf-9c7f-2f86a21026a9-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"48885530-3df1-42cf-9c7f-2f86a21026a9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.195858 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/48885530-3df1-42cf-9c7f-2f86a21026a9-pod-info\") pod \"notifications-rabbitmq-server-0\" (UID: \"48885530-3df1-42cf-9c7f-2f86a21026a9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.195896 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/48885530-3df1-42cf-9c7f-2f86a21026a9-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"48885530-3df1-42cf-9c7f-2f86a21026a9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.195905 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"48885530-3df1-42cf-9c7f-2f86a21026a9\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/notifications-rabbitmq-server-0" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.195935 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/48885530-3df1-42cf-9c7f-2f86a21026a9-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"48885530-3df1-42cf-9c7f-2f86a21026a9\") " 
pod="openstack/notifications-rabbitmq-server-0" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.196012 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pp5v\" (UniqueName: \"kubernetes.io/projected/48885530-3df1-42cf-9c7f-2f86a21026a9-kube-api-access-5pp5v\") pod \"notifications-rabbitmq-server-0\" (UID: \"48885530-3df1-42cf-9c7f-2f86a21026a9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.196078 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/48885530-3df1-42cf-9c7f-2f86a21026a9-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"48885530-3df1-42cf-9c7f-2f86a21026a9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.196076 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/48885530-3df1-42cf-9c7f-2f86a21026a9-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"48885530-3df1-42cf-9c7f-2f86a21026a9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.197704 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/48885530-3df1-42cf-9c7f-2f86a21026a9-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"48885530-3df1-42cf-9c7f-2f86a21026a9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.199498 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/48885530-3df1-42cf-9c7f-2f86a21026a9-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"48885530-3df1-42cf-9c7f-2f86a21026a9\") " 
pod="openstack/notifications-rabbitmq-server-0" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.199501 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/48885530-3df1-42cf-9c7f-2f86a21026a9-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"48885530-3df1-42cf-9c7f-2f86a21026a9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.201986 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/48885530-3df1-42cf-9c7f-2f86a21026a9-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"48885530-3df1-42cf-9c7f-2f86a21026a9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.202937 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/48885530-3df1-42cf-9c7f-2f86a21026a9-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"48885530-3df1-42cf-9c7f-2f86a21026a9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.205337 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/48885530-3df1-42cf-9c7f-2f86a21026a9-pod-info\") pod \"notifications-rabbitmq-server-0\" (UID: \"48885530-3df1-42cf-9c7f-2f86a21026a9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.205922 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/48885530-3df1-42cf-9c7f-2f86a21026a9-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"48885530-3df1-42cf-9c7f-2f86a21026a9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 28 13:35:57 crc 
kubenswrapper[4897]: I0228 13:35:57.208418 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/48885530-3df1-42cf-9c7f-2f86a21026a9-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"48885530-3df1-42cf-9c7f-2f86a21026a9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.224885 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"48885530-3df1-42cf-9c7f-2f86a21026a9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.231394 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pp5v\" (UniqueName: \"kubernetes.io/projected/48885530-3df1-42cf-9c7f-2f86a21026a9-kube-api-access-5pp5v\") pod \"notifications-rabbitmq-server-0\" (UID: \"48885530-3df1-42cf-9c7f-2f86a21026a9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.377800 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/notifications-rabbitmq-server-0" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.899370 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5877d9b675-mck2z" Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.953285 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5877d9b675-mck2z"] Feb 28 13:35:57 crc kubenswrapper[4897]: I0228 13:35:57.959688 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5877d9b675-mck2z"] Feb 28 13:35:58 crc kubenswrapper[4897]: I0228 13:35:58.467231 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a153a7b-9a0b-43c0-a8a7-dc1aea952c28" path="/var/lib/kubelet/pods/2a153a7b-9a0b-43c0-a8a7-dc1aea952c28/volumes" Feb 28 13:35:58 crc kubenswrapper[4897]: I0228 13:35:58.607561 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 28 13:35:58 crc kubenswrapper[4897]: I0228 13:35:58.608716 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 28 13:35:58 crc kubenswrapper[4897]: I0228 13:35:58.611302 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 28 13:35:58 crc kubenswrapper[4897]: I0228 13:35:58.611835 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-dgr4v" Feb 28 13:35:58 crc kubenswrapper[4897]: I0228 13:35:58.612157 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 28 13:35:58 crc kubenswrapper[4897]: I0228 13:35:58.612204 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 28 13:35:58 crc kubenswrapper[4897]: I0228 13:35:58.628572 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 28 13:35:58 crc kubenswrapper[4897]: I0228 13:35:58.631379 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 28 13:35:58 crc 
kubenswrapper[4897]: I0228 13:35:58.718540 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"db99e06f-c263-4aef-b5c2-330eaed29fd4\") " pod="openstack/openstack-galera-0" Feb 28 13:35:58 crc kubenswrapper[4897]: I0228 13:35:58.718586 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/db99e06f-c263-4aef-b5c2-330eaed29fd4-kolla-config\") pod \"openstack-galera-0\" (UID: \"db99e06f-c263-4aef-b5c2-330eaed29fd4\") " pod="openstack/openstack-galera-0" Feb 28 13:35:58 crc kubenswrapper[4897]: I0228 13:35:58.718619 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/db99e06f-c263-4aef-b5c2-330eaed29fd4-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"db99e06f-c263-4aef-b5c2-330eaed29fd4\") " pod="openstack/openstack-galera-0" Feb 28 13:35:58 crc kubenswrapper[4897]: I0228 13:35:58.718645 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db99e06f-c263-4aef-b5c2-330eaed29fd4-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"db99e06f-c263-4aef-b5c2-330eaed29fd4\") " pod="openstack/openstack-galera-0" Feb 28 13:35:58 crc kubenswrapper[4897]: I0228 13:35:58.718662 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/db99e06f-c263-4aef-b5c2-330eaed29fd4-config-data-default\") pod \"openstack-galera-0\" (UID: \"db99e06f-c263-4aef-b5c2-330eaed29fd4\") " pod="openstack/openstack-galera-0" Feb 28 13:35:58 crc kubenswrapper[4897]: I0228 13:35:58.718682 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvkjf\" (UniqueName: \"kubernetes.io/projected/db99e06f-c263-4aef-b5c2-330eaed29fd4-kube-api-access-vvkjf\") pod \"openstack-galera-0\" (UID: \"db99e06f-c263-4aef-b5c2-330eaed29fd4\") " pod="openstack/openstack-galera-0" Feb 28 13:35:58 crc kubenswrapper[4897]: I0228 13:35:58.718711 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db99e06f-c263-4aef-b5c2-330eaed29fd4-operator-scripts\") pod \"openstack-galera-0\" (UID: \"db99e06f-c263-4aef-b5c2-330eaed29fd4\") " pod="openstack/openstack-galera-0" Feb 28 13:35:58 crc kubenswrapper[4897]: I0228 13:35:58.718744 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/db99e06f-c263-4aef-b5c2-330eaed29fd4-config-data-generated\") pod \"openstack-galera-0\" (UID: \"db99e06f-c263-4aef-b5c2-330eaed29fd4\") " pod="openstack/openstack-galera-0" Feb 28 13:35:58 crc kubenswrapper[4897]: I0228 13:35:58.819944 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"db99e06f-c263-4aef-b5c2-330eaed29fd4\") " pod="openstack/openstack-galera-0" Feb 28 13:35:58 crc kubenswrapper[4897]: I0228 13:35:58.819997 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/db99e06f-c263-4aef-b5c2-330eaed29fd4-kolla-config\") pod \"openstack-galera-0\" (UID: \"db99e06f-c263-4aef-b5c2-330eaed29fd4\") " pod="openstack/openstack-galera-0" Feb 28 13:35:58 crc kubenswrapper[4897]: I0228 13:35:58.820036 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/db99e06f-c263-4aef-b5c2-330eaed29fd4-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"db99e06f-c263-4aef-b5c2-330eaed29fd4\") " pod="openstack/openstack-galera-0" Feb 28 13:35:58 crc kubenswrapper[4897]: I0228 13:35:58.820062 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db99e06f-c263-4aef-b5c2-330eaed29fd4-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"db99e06f-c263-4aef-b5c2-330eaed29fd4\") " pod="openstack/openstack-galera-0" Feb 28 13:35:58 crc kubenswrapper[4897]: I0228 13:35:58.820080 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/db99e06f-c263-4aef-b5c2-330eaed29fd4-config-data-default\") pod \"openstack-galera-0\" (UID: \"db99e06f-c263-4aef-b5c2-330eaed29fd4\") " pod="openstack/openstack-galera-0" Feb 28 13:35:58 crc kubenswrapper[4897]: I0228 13:35:58.820104 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvkjf\" (UniqueName: \"kubernetes.io/projected/db99e06f-c263-4aef-b5c2-330eaed29fd4-kube-api-access-vvkjf\") pod \"openstack-galera-0\" (UID: \"db99e06f-c263-4aef-b5c2-330eaed29fd4\") " pod="openstack/openstack-galera-0" Feb 28 13:35:58 crc kubenswrapper[4897]: I0228 13:35:58.820129 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db99e06f-c263-4aef-b5c2-330eaed29fd4-operator-scripts\") pod \"openstack-galera-0\" (UID: \"db99e06f-c263-4aef-b5c2-330eaed29fd4\") " pod="openstack/openstack-galera-0" Feb 28 13:35:58 crc kubenswrapper[4897]: I0228 13:35:58.820165 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/db99e06f-c263-4aef-b5c2-330eaed29fd4-config-data-generated\") pod \"openstack-galera-0\" (UID: \"db99e06f-c263-4aef-b5c2-330eaed29fd4\") " pod="openstack/openstack-galera-0" Feb 28 13:35:58 crc kubenswrapper[4897]: I0228 13:35:58.820170 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"db99e06f-c263-4aef-b5c2-330eaed29fd4\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-galera-0" Feb 28 13:35:58 crc kubenswrapper[4897]: I0228 13:35:58.821090 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/db99e06f-c263-4aef-b5c2-330eaed29fd4-kolla-config\") pod \"openstack-galera-0\" (UID: \"db99e06f-c263-4aef-b5c2-330eaed29fd4\") " pod="openstack/openstack-galera-0" Feb 28 13:35:58 crc kubenswrapper[4897]: I0228 13:35:58.824225 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/db99e06f-c263-4aef-b5c2-330eaed29fd4-config-data-generated\") pod \"openstack-galera-0\" (UID: \"db99e06f-c263-4aef-b5c2-330eaed29fd4\") " pod="openstack/openstack-galera-0" Feb 28 13:35:58 crc kubenswrapper[4897]: I0228 13:35:58.824849 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/db99e06f-c263-4aef-b5c2-330eaed29fd4-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"db99e06f-c263-4aef-b5c2-330eaed29fd4\") " pod="openstack/openstack-galera-0" Feb 28 13:35:58 crc kubenswrapper[4897]: I0228 13:35:58.825057 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db99e06f-c263-4aef-b5c2-330eaed29fd4-operator-scripts\") pod \"openstack-galera-0\" (UID: 
\"db99e06f-c263-4aef-b5c2-330eaed29fd4\") " pod="openstack/openstack-galera-0" Feb 28 13:35:58 crc kubenswrapper[4897]: I0228 13:35:58.827489 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/db99e06f-c263-4aef-b5c2-330eaed29fd4-config-data-default\") pod \"openstack-galera-0\" (UID: \"db99e06f-c263-4aef-b5c2-330eaed29fd4\") " pod="openstack/openstack-galera-0" Feb 28 13:35:58 crc kubenswrapper[4897]: I0228 13:35:58.843496 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db99e06f-c263-4aef-b5c2-330eaed29fd4-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"db99e06f-c263-4aef-b5c2-330eaed29fd4\") " pod="openstack/openstack-galera-0" Feb 28 13:35:58 crc kubenswrapper[4897]: I0228 13:35:58.845167 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvkjf\" (UniqueName: \"kubernetes.io/projected/db99e06f-c263-4aef-b5c2-330eaed29fd4-kube-api-access-vvkjf\") pod \"openstack-galera-0\" (UID: \"db99e06f-c263-4aef-b5c2-330eaed29fd4\") " pod="openstack/openstack-galera-0" Feb 28 13:35:58 crc kubenswrapper[4897]: I0228 13:35:58.847376 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"db99e06f-c263-4aef-b5c2-330eaed29fd4\") " pod="openstack/openstack-galera-0" Feb 28 13:35:58 crc kubenswrapper[4897]: I0228 13:35:58.945767 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.048325 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.050058 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.051690 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-djd69" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.052639 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.052942 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.058786 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.064554 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.143176 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7f297ea-652d-47ae-9831-fad10c6127ad-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d7f297ea-652d-47ae-9831-fad10c6127ad\") " pod="openstack/openstack-cell1-galera-0" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.143243 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d7f297ea-652d-47ae-9831-fad10c6127ad\") " pod="openstack/openstack-cell1-galera-0" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.143271 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7f297ea-652d-47ae-9831-fad10c6127ad-combined-ca-bundle\") pod 
\"openstack-cell1-galera-0\" (UID: \"d7f297ea-652d-47ae-9831-fad10c6127ad\") " pod="openstack/openstack-cell1-galera-0" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.143310 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d7f297ea-652d-47ae-9831-fad10c6127ad-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d7f297ea-652d-47ae-9831-fad10c6127ad\") " pod="openstack/openstack-cell1-galera-0" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.143345 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d7f297ea-652d-47ae-9831-fad10c6127ad-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d7f297ea-652d-47ae-9831-fad10c6127ad\") " pod="openstack/openstack-cell1-galera-0" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.143365 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d7f297ea-652d-47ae-9831-fad10c6127ad-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d7f297ea-652d-47ae-9831-fad10c6127ad\") " pod="openstack/openstack-cell1-galera-0" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.143389 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsmtm\" (UniqueName: \"kubernetes.io/projected/d7f297ea-652d-47ae-9831-fad10c6127ad-kube-api-access-xsmtm\") pod \"openstack-cell1-galera-0\" (UID: \"d7f297ea-652d-47ae-9831-fad10c6127ad\") " pod="openstack/openstack-cell1-galera-0" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.143424 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/d7f297ea-652d-47ae-9831-fad10c6127ad-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d7f297ea-652d-47ae-9831-fad10c6127ad\") " pod="openstack/openstack-cell1-galera-0" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.146634 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538096-ws9qt"] Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.152137 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538096-ws9qt" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.154426 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.154580 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.154770 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.156017 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538096-ws9qt"] Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.247280 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d7f297ea-652d-47ae-9831-fad10c6127ad-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d7f297ea-652d-47ae-9831-fad10c6127ad\") " pod="openstack/openstack-cell1-galera-0" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.247409 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d7f297ea-652d-47ae-9831-fad10c6127ad-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: 
\"d7f297ea-652d-47ae-9831-fad10c6127ad\") " pod="openstack/openstack-cell1-galera-0" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.247454 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d7f297ea-652d-47ae-9831-fad10c6127ad-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d7f297ea-652d-47ae-9831-fad10c6127ad\") " pod="openstack/openstack-cell1-galera-0" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.247491 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqr7p\" (UniqueName: \"kubernetes.io/projected/6e94c0b2-21a6-496c-8188-dfcaf0d66b2b-kube-api-access-rqr7p\") pod \"auto-csr-approver-29538096-ws9qt\" (UID: \"6e94c0b2-21a6-496c-8188-dfcaf0d66b2b\") " pod="openshift-infra/auto-csr-approver-29538096-ws9qt" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.247526 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsmtm\" (UniqueName: \"kubernetes.io/projected/d7f297ea-652d-47ae-9831-fad10c6127ad-kube-api-access-xsmtm\") pod \"openstack-cell1-galera-0\" (UID: \"d7f297ea-652d-47ae-9831-fad10c6127ad\") " pod="openstack/openstack-cell1-galera-0" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.247609 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7f297ea-652d-47ae-9831-fad10c6127ad-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d7f297ea-652d-47ae-9831-fad10c6127ad\") " pod="openstack/openstack-cell1-galera-0" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.247673 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7f297ea-652d-47ae-9831-fad10c6127ad-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: 
\"d7f297ea-652d-47ae-9831-fad10c6127ad\") " pod="openstack/openstack-cell1-galera-0" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.247761 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d7f297ea-652d-47ae-9831-fad10c6127ad\") " pod="openstack/openstack-cell1-galera-0" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.247813 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7f297ea-652d-47ae-9831-fad10c6127ad-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d7f297ea-652d-47ae-9831-fad10c6127ad\") " pod="openstack/openstack-cell1-galera-0" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.252226 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7f297ea-652d-47ae-9831-fad10c6127ad-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d7f297ea-652d-47ae-9831-fad10c6127ad\") " pod="openstack/openstack-cell1-galera-0" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.252589 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d7f297ea-652d-47ae-9831-fad10c6127ad-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d7f297ea-652d-47ae-9831-fad10c6127ad\") " pod="openstack/openstack-cell1-galera-0" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.253121 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d7f297ea-652d-47ae-9831-fad10c6127ad-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d7f297ea-652d-47ae-9831-fad10c6127ad\") " pod="openstack/openstack-cell1-galera-0" Feb 28 13:36:00 crc 
kubenswrapper[4897]: I0228 13:36:00.253842 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d7f297ea-652d-47ae-9831-fad10c6127ad-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d7f297ea-652d-47ae-9831-fad10c6127ad\") " pod="openstack/openstack-cell1-galera-0" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.255427 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7f297ea-652d-47ae-9831-fad10c6127ad-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d7f297ea-652d-47ae-9831-fad10c6127ad\") " pod="openstack/openstack-cell1-galera-0" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.262910 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7f297ea-652d-47ae-9831-fad10c6127ad-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d7f297ea-652d-47ae-9831-fad10c6127ad\") " pod="openstack/openstack-cell1-galera-0" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.263686 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d7f297ea-652d-47ae-9831-fad10c6127ad\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-cell1-galera-0" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.283045 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsmtm\" (UniqueName: \"kubernetes.io/projected/d7f297ea-652d-47ae-9831-fad10c6127ad-kube-api-access-xsmtm\") pod \"openstack-cell1-galera-0\" (UID: \"d7f297ea-652d-47ae-9831-fad10c6127ad\") " pod="openstack/openstack-cell1-galera-0" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.300357 4897 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d7f297ea-652d-47ae-9831-fad10c6127ad\") " pod="openstack/openstack-cell1-galera-0" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.300851 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.302712 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.304113 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-txqwq" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.304910 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.305045 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.323687 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.348629 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f032f5e9-4992-4586-bd47-0c3da76ecf40-config-data\") pod \"memcached-0\" (UID: \"f032f5e9-4992-4586-bd47-0c3da76ecf40\") " pod="openstack/memcached-0" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.348855 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f032f5e9-4992-4586-bd47-0c3da76ecf40-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f032f5e9-4992-4586-bd47-0c3da76ecf40\") " pod="openstack/memcached-0" Feb 28 13:36:00 crc 
kubenswrapper[4897]: I0228 13:36:00.348972 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsw72\" (UniqueName: \"kubernetes.io/projected/f032f5e9-4992-4586-bd47-0c3da76ecf40-kube-api-access-vsw72\") pod \"memcached-0\" (UID: \"f032f5e9-4992-4586-bd47-0c3da76ecf40\") " pod="openstack/memcached-0" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.349099 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqr7p\" (UniqueName: \"kubernetes.io/projected/6e94c0b2-21a6-496c-8188-dfcaf0d66b2b-kube-api-access-rqr7p\") pod \"auto-csr-approver-29538096-ws9qt\" (UID: \"6e94c0b2-21a6-496c-8188-dfcaf0d66b2b\") " pod="openshift-infra/auto-csr-approver-29538096-ws9qt" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.349191 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f032f5e9-4992-4586-bd47-0c3da76ecf40-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f032f5e9-4992-4586-bd47-0c3da76ecf40\") " pod="openstack/memcached-0" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.349323 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f032f5e9-4992-4586-bd47-0c3da76ecf40-kolla-config\") pod \"memcached-0\" (UID: \"f032f5e9-4992-4586-bd47-0c3da76ecf40\") " pod="openstack/memcached-0" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.374119 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqr7p\" (UniqueName: \"kubernetes.io/projected/6e94c0b2-21a6-496c-8188-dfcaf0d66b2b-kube-api-access-rqr7p\") pod \"auto-csr-approver-29538096-ws9qt\" (UID: \"6e94c0b2-21a6-496c-8188-dfcaf0d66b2b\") " pod="openshift-infra/auto-csr-approver-29538096-ws9qt" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 
13:36:00.435776 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.450860 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f032f5e9-4992-4586-bd47-0c3da76ecf40-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f032f5e9-4992-4586-bd47-0c3da76ecf40\") " pod="openstack/memcached-0" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.450962 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f032f5e9-4992-4586-bd47-0c3da76ecf40-kolla-config\") pod \"memcached-0\" (UID: \"f032f5e9-4992-4586-bd47-0c3da76ecf40\") " pod="openstack/memcached-0" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.451029 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f032f5e9-4992-4586-bd47-0c3da76ecf40-config-data\") pod \"memcached-0\" (UID: \"f032f5e9-4992-4586-bd47-0c3da76ecf40\") " pod="openstack/memcached-0" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.451055 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f032f5e9-4992-4586-bd47-0c3da76ecf40-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f032f5e9-4992-4586-bd47-0c3da76ecf40\") " pod="openstack/memcached-0" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.451080 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsw72\" (UniqueName: \"kubernetes.io/projected/f032f5e9-4992-4586-bd47-0c3da76ecf40-kube-api-access-vsw72\") pod \"memcached-0\" (UID: \"f032f5e9-4992-4586-bd47-0c3da76ecf40\") " pod="openstack/memcached-0" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.452239 4897 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f032f5e9-4992-4586-bd47-0c3da76ecf40-config-data\") pod \"memcached-0\" (UID: \"f032f5e9-4992-4586-bd47-0c3da76ecf40\") " pod="openstack/memcached-0" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.452538 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f032f5e9-4992-4586-bd47-0c3da76ecf40-kolla-config\") pod \"memcached-0\" (UID: \"f032f5e9-4992-4586-bd47-0c3da76ecf40\") " pod="openstack/memcached-0" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.454846 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f032f5e9-4992-4586-bd47-0c3da76ecf40-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f032f5e9-4992-4586-bd47-0c3da76ecf40\") " pod="openstack/memcached-0" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.455266 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f032f5e9-4992-4586-bd47-0c3da76ecf40-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f032f5e9-4992-4586-bd47-0c3da76ecf40\") " pod="openstack/memcached-0" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.478732 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538096-ws9qt" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.483915 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsw72\" (UniqueName: \"kubernetes.io/projected/f032f5e9-4992-4586-bd47-0c3da76ecf40-kube-api-access-vsw72\") pod \"memcached-0\" (UID: \"f032f5e9-4992-4586-bd47-0c3da76ecf40\") " pod="openstack/memcached-0" Feb 28 13:36:00 crc kubenswrapper[4897]: I0228 13:36:00.656222 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 28 13:36:02 crc kubenswrapper[4897]: I0228 13:36:02.478908 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 28 13:36:02 crc kubenswrapper[4897]: I0228 13:36:02.480455 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 28 13:36:02 crc kubenswrapper[4897]: I0228 13:36:02.482270 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-k7s2t" Feb 28 13:36:02 crc kubenswrapper[4897]: I0228 13:36:02.488277 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 28 13:36:02 crc kubenswrapper[4897]: I0228 13:36:02.684200 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sf7p\" (UniqueName: \"kubernetes.io/projected/631372ff-9a4e-4110-9ff4-aad528049a06-kube-api-access-8sf7p\") pod \"kube-state-metrics-0\" (UID: \"631372ff-9a4e-4110-9ff4-aad528049a06\") " pod="openstack/kube-state-metrics-0" Feb 28 13:36:02 crc kubenswrapper[4897]: I0228 13:36:02.787148 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sf7p\" (UniqueName: \"kubernetes.io/projected/631372ff-9a4e-4110-9ff4-aad528049a06-kube-api-access-8sf7p\") pod \"kube-state-metrics-0\" (UID: \"631372ff-9a4e-4110-9ff4-aad528049a06\") " pod="openstack/kube-state-metrics-0" Feb 28 13:36:02 crc kubenswrapper[4897]: I0228 13:36:02.834812 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sf7p\" (UniqueName: \"kubernetes.io/projected/631372ff-9a4e-4110-9ff4-aad528049a06-kube-api-access-8sf7p\") pod \"kube-state-metrics-0\" (UID: \"631372ff-9a4e-4110-9ff4-aad528049a06\") " pod="openstack/kube-state-metrics-0" Feb 28 13:36:02 crc kubenswrapper[4897]: I0228 13:36:02.854825 4897 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 28 13:36:03 crc kubenswrapper[4897]: I0228 13:36:03.371583 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 13:36:03 crc kubenswrapper[4897]: I0228 13:36:03.371643 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 13:36:03 crc kubenswrapper[4897]: I0228 13:36:03.870640 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 28 13:36:03 crc kubenswrapper[4897]: I0228 13:36:03.872788 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 28 13:36:03 crc kubenswrapper[4897]: I0228 13:36:03.875131 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 28 13:36:03 crc kubenswrapper[4897]: I0228 13:36:03.875179 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 28 13:36:03 crc kubenswrapper[4897]: I0228 13:36:03.875200 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 28 13:36:03 crc kubenswrapper[4897]: I0228 13:36:03.875419 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 28 13:36:03 crc kubenswrapper[4897]: I0228 13:36:03.875553 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 28 13:36:03 crc kubenswrapper[4897]: I0228 13:36:03.875575 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-6zn4s" Feb 28 13:36:03 crc kubenswrapper[4897]: I0228 13:36:03.875631 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 28 13:36:03 crc kubenswrapper[4897]: I0228 13:36:03.919789 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 28 13:36:03 crc kubenswrapper[4897]: I0228 13:36:03.922520 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 28 13:36:04 crc kubenswrapper[4897]: I0228 13:36:04.017774 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:36:04 crc kubenswrapper[4897]: I0228 13:36:04.017834 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:36:04 crc kubenswrapper[4897]: I0228 13:36:04.017862 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:36:04 crc kubenswrapper[4897]: I0228 13:36:04.017883 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-config\") pod \"prometheus-metric-storage-0\" (UID: \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:36:04 crc kubenswrapper[4897]: I0228 13:36:04.017913 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:36:04 crc kubenswrapper[4897]: I0228 13:36:04.018118 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:36:04 crc kubenswrapper[4897]: I0228 13:36:04.018261 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:36:04 crc kubenswrapper[4897]: I0228 13:36:04.018417 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9\") pod \"prometheus-metric-storage-0\" (UID: \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:36:04 crc kubenswrapper[4897]: I0228 13:36:04.018445 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:36:04 crc kubenswrapper[4897]: I0228 13:36:04.018475 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr856\" (UniqueName: \"kubernetes.io/projected/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-kube-api-access-fr856\") pod \"prometheus-metric-storage-0\" (UID: \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\") 
" pod="openstack/prometheus-metric-storage-0" Feb 28 13:36:04 crc kubenswrapper[4897]: I0228 13:36:04.119709 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:36:04 crc kubenswrapper[4897]: I0228 13:36:04.119758 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9\") pod \"prometheus-metric-storage-0\" (UID: \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:36:04 crc kubenswrapper[4897]: I0228 13:36:04.119777 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr856\" (UniqueName: \"kubernetes.io/projected/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-kube-api-access-fr856\") pod \"prometheus-metric-storage-0\" (UID: \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:36:04 crc kubenswrapper[4897]: I0228 13:36:04.119803 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:36:04 crc kubenswrapper[4897]: I0228 13:36:04.119830 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-config-out\") pod 
\"prometheus-metric-storage-0\" (UID: \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:36:04 crc kubenswrapper[4897]: I0228 13:36:04.119850 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:36:04 crc kubenswrapper[4897]: I0228 13:36:04.119872 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-config\") pod \"prometheus-metric-storage-0\" (UID: \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:36:04 crc kubenswrapper[4897]: I0228 13:36:04.119903 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:36:04 crc kubenswrapper[4897]: I0228 13:36:04.119944 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:36:04 crc kubenswrapper[4897]: I0228 13:36:04.120056 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-web-config\") pod 
\"prometheus-metric-storage-0\" (UID: \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:36:04 crc kubenswrapper[4897]: I0228 13:36:04.121181 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:36:04 crc kubenswrapper[4897]: I0228 13:36:04.121201 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:36:04 crc kubenswrapper[4897]: I0228 13:36:04.121201 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:36:04 crc kubenswrapper[4897]: I0228 13:36:04.124995 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:36:04 crc kubenswrapper[4897]: I0228 13:36:04.125220 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:36:04 crc kubenswrapper[4897]: I0228 13:36:04.125742 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:36:04 crc kubenswrapper[4897]: I0228 13:36:04.126024 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-config\") pod \"prometheus-metric-storage-0\" (UID: \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:36:04 crc kubenswrapper[4897]: I0228 13:36:04.126249 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:36:04 crc kubenswrapper[4897]: I0228 13:36:04.135879 4897 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 28 13:36:04 crc kubenswrapper[4897]: I0228 13:36:04.135928 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9\") pod \"prometheus-metric-storage-0\" (UID: \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/de9fbdfeb629ec9e72fb17ffcc3a651e10bfb0662587d0069f50b747406f5447/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 28 13:36:04 crc kubenswrapper[4897]: I0228 13:36:04.137852 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr856\" (UniqueName: \"kubernetes.io/projected/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-kube-api-access-fr856\") pod \"prometheus-metric-storage-0\" (UID: \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:36:04 crc kubenswrapper[4897]: I0228 13:36:04.165249 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9\") pod \"prometheus-metric-storage-0\" (UID: \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:36:04 crc kubenswrapper[4897]: I0228 13:36:04.227657 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.140853 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-jsdwb"] Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.141781 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-jsdwb" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.143855 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.144610 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-qwtln" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.144654 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.158421 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jsdwb"] Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.196467 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.198205 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.200587 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.200812 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.200840 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-mhb64" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.200872 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.201030 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.204427 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-ch9bl"] Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.206013 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-ch9bl" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.214095 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.221043 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-ch9bl"] Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.260281 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmltt\" (UniqueName: \"kubernetes.io/projected/cd2fa5a5-caab-4d3d-8324-f6107d50f59f-kube-api-access-rmltt\") pod \"ovn-controller-jsdwb\" (UID: \"cd2fa5a5-caab-4d3d-8324-f6107d50f59f\") " pod="openstack/ovn-controller-jsdwb" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.260375 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd2fa5a5-caab-4d3d-8324-f6107d50f59f-var-run-ovn\") pod \"ovn-controller-jsdwb\" (UID: \"cd2fa5a5-caab-4d3d-8324-f6107d50f59f\") " pod="openstack/ovn-controller-jsdwb" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.260402 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cd2fa5a5-caab-4d3d-8324-f6107d50f59f-var-log-ovn\") pod \"ovn-controller-jsdwb\" (UID: \"cd2fa5a5-caab-4d3d-8324-f6107d50f59f\") " pod="openstack/ovn-controller-jsdwb" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.260433 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd2fa5a5-caab-4d3d-8324-f6107d50f59f-ovn-controller-tls-certs\") pod \"ovn-controller-jsdwb\" (UID: \"cd2fa5a5-caab-4d3d-8324-f6107d50f59f\") " pod="openstack/ovn-controller-jsdwb" Feb 28 13:36:06 crc 
kubenswrapper[4897]: I0228 13:36:06.260454 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd2fa5a5-caab-4d3d-8324-f6107d50f59f-combined-ca-bundle\") pod \"ovn-controller-jsdwb\" (UID: \"cd2fa5a5-caab-4d3d-8324-f6107d50f59f\") " pod="openstack/ovn-controller-jsdwb" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.260475 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cd2fa5a5-caab-4d3d-8324-f6107d50f59f-scripts\") pod \"ovn-controller-jsdwb\" (UID: \"cd2fa5a5-caab-4d3d-8324-f6107d50f59f\") " pod="openstack/ovn-controller-jsdwb" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.260592 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cd2fa5a5-caab-4d3d-8324-f6107d50f59f-var-run\") pod \"ovn-controller-jsdwb\" (UID: \"cd2fa5a5-caab-4d3d-8324-f6107d50f59f\") " pod="openstack/ovn-controller-jsdwb" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.361994 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/03ffdd06-e63d-4a43-96f0-92e2d0e3a89d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"03ffdd06-e63d-4a43-96f0-92e2d0e3a89d\") " pod="openstack/ovsdbserver-nb-0" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.362038 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nmkt\" (UniqueName: \"kubernetes.io/projected/03ffdd06-e63d-4a43-96f0-92e2d0e3a89d-kube-api-access-4nmkt\") pod \"ovsdbserver-nb-0\" (UID: \"03ffdd06-e63d-4a43-96f0-92e2d0e3a89d\") " pod="openstack/ovsdbserver-nb-0" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.362097 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/995bc563-52dc-4755-b43f-96a2746d8bce-var-run\") pod \"ovn-controller-ovs-ch9bl\" (UID: \"995bc563-52dc-4755-b43f-96a2746d8bce\") " pod="openstack/ovn-controller-ovs-ch9bl" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.362125 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cd2fa5a5-caab-4d3d-8324-f6107d50f59f-var-run\") pod \"ovn-controller-jsdwb\" (UID: \"cd2fa5a5-caab-4d3d-8324-f6107d50f59f\") " pod="openstack/ovn-controller-jsdwb" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.362148 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89q4b\" (UniqueName: \"kubernetes.io/projected/995bc563-52dc-4755-b43f-96a2746d8bce-kube-api-access-89q4b\") pod \"ovn-controller-ovs-ch9bl\" (UID: \"995bc563-52dc-4755-b43f-96a2746d8bce\") " pod="openstack/ovn-controller-ovs-ch9bl" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.362196 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/03ffdd06-e63d-4a43-96f0-92e2d0e3a89d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"03ffdd06-e63d-4a43-96f0-92e2d0e3a89d\") " pod="openstack/ovsdbserver-nb-0" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.362215 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/995bc563-52dc-4755-b43f-96a2746d8bce-var-log\") pod \"ovn-controller-ovs-ch9bl\" (UID: \"995bc563-52dc-4755-b43f-96a2746d8bce\") " pod="openstack/ovn-controller-ovs-ch9bl" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.362232 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03ffdd06-e63d-4a43-96f0-92e2d0e3a89d-config\") pod \"ovsdbserver-nb-0\" (UID: \"03ffdd06-e63d-4a43-96f0-92e2d0e3a89d\") " pod="openstack/ovsdbserver-nb-0" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.362252 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/03ffdd06-e63d-4a43-96f0-92e2d0e3a89d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"03ffdd06-e63d-4a43-96f0-92e2d0e3a89d\") " pod="openstack/ovsdbserver-nb-0" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.362292 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmltt\" (UniqueName: \"kubernetes.io/projected/cd2fa5a5-caab-4d3d-8324-f6107d50f59f-kube-api-access-rmltt\") pod \"ovn-controller-jsdwb\" (UID: \"cd2fa5a5-caab-4d3d-8324-f6107d50f59f\") " pod="openstack/ovn-controller-jsdwb" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.362464 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/995bc563-52dc-4755-b43f-96a2746d8bce-etc-ovs\") pod \"ovn-controller-ovs-ch9bl\" (UID: \"995bc563-52dc-4755-b43f-96a2746d8bce\") " pod="openstack/ovn-controller-ovs-ch9bl" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.362508 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/995bc563-52dc-4755-b43f-96a2746d8bce-scripts\") pod \"ovn-controller-ovs-ch9bl\" (UID: \"995bc563-52dc-4755-b43f-96a2746d8bce\") " pod="openstack/ovn-controller-ovs-ch9bl" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.362567 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/cd2fa5a5-caab-4d3d-8324-f6107d50f59f-var-run-ovn\") pod \"ovn-controller-jsdwb\" (UID: \"cd2fa5a5-caab-4d3d-8324-f6107d50f59f\") " pod="openstack/ovn-controller-jsdwb" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.362597 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cd2fa5a5-caab-4d3d-8324-f6107d50f59f-var-log-ovn\") pod \"ovn-controller-jsdwb\" (UID: \"cd2fa5a5-caab-4d3d-8324-f6107d50f59f\") " pod="openstack/ovn-controller-jsdwb" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.362644 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd2fa5a5-caab-4d3d-8324-f6107d50f59f-ovn-controller-tls-certs\") pod \"ovn-controller-jsdwb\" (UID: \"cd2fa5a5-caab-4d3d-8324-f6107d50f59f\") " pod="openstack/ovn-controller-jsdwb" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.362676 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd2fa5a5-caab-4d3d-8324-f6107d50f59f-combined-ca-bundle\") pod \"ovn-controller-jsdwb\" (UID: \"cd2fa5a5-caab-4d3d-8324-f6107d50f59f\") " pod="openstack/ovn-controller-jsdwb" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.362704 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ffdd06-e63d-4a43-96f0-92e2d0e3a89d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"03ffdd06-e63d-4a43-96f0-92e2d0e3a89d\") " pod="openstack/ovsdbserver-nb-0" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.362730 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/03ffdd06-e63d-4a43-96f0-92e2d0e3a89d-scripts\") pod 
\"ovsdbserver-nb-0\" (UID: \"03ffdd06-e63d-4a43-96f0-92e2d0e3a89d\") " pod="openstack/ovsdbserver-nb-0" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.362751 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cd2fa5a5-caab-4d3d-8324-f6107d50f59f-scripts\") pod \"ovn-controller-jsdwb\" (UID: \"cd2fa5a5-caab-4d3d-8324-f6107d50f59f\") " pod="openstack/ovn-controller-jsdwb" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.362793 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cd2fa5a5-caab-4d3d-8324-f6107d50f59f-var-run\") pod \"ovn-controller-jsdwb\" (UID: \"cd2fa5a5-caab-4d3d-8324-f6107d50f59f\") " pod="openstack/ovn-controller-jsdwb" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.362872 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/995bc563-52dc-4755-b43f-96a2746d8bce-var-lib\") pod \"ovn-controller-ovs-ch9bl\" (UID: \"995bc563-52dc-4755-b43f-96a2746d8bce\") " pod="openstack/ovn-controller-ovs-ch9bl" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.362869 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cd2fa5a5-caab-4d3d-8324-f6107d50f59f-var-log-ovn\") pod \"ovn-controller-jsdwb\" (UID: \"cd2fa5a5-caab-4d3d-8324-f6107d50f59f\") " pod="openstack/ovn-controller-jsdwb" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.362895 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"03ffdd06-e63d-4a43-96f0-92e2d0e3a89d\") " pod="openstack/ovsdbserver-nb-0" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.362899 4897 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd2fa5a5-caab-4d3d-8324-f6107d50f59f-var-run-ovn\") pod \"ovn-controller-jsdwb\" (UID: \"cd2fa5a5-caab-4d3d-8324-f6107d50f59f\") " pod="openstack/ovn-controller-jsdwb" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.365068 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cd2fa5a5-caab-4d3d-8324-f6107d50f59f-scripts\") pod \"ovn-controller-jsdwb\" (UID: \"cd2fa5a5-caab-4d3d-8324-f6107d50f59f\") " pod="openstack/ovn-controller-jsdwb" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.367238 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd2fa5a5-caab-4d3d-8324-f6107d50f59f-ovn-controller-tls-certs\") pod \"ovn-controller-jsdwb\" (UID: \"cd2fa5a5-caab-4d3d-8324-f6107d50f59f\") " pod="openstack/ovn-controller-jsdwb" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.374930 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd2fa5a5-caab-4d3d-8324-f6107d50f59f-combined-ca-bundle\") pod \"ovn-controller-jsdwb\" (UID: \"cd2fa5a5-caab-4d3d-8324-f6107d50f59f\") " pod="openstack/ovn-controller-jsdwb" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.375459 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmltt\" (UniqueName: \"kubernetes.io/projected/cd2fa5a5-caab-4d3d-8324-f6107d50f59f-kube-api-access-rmltt\") pod \"ovn-controller-jsdwb\" (UID: \"cd2fa5a5-caab-4d3d-8324-f6107d50f59f\") " pod="openstack/ovn-controller-jsdwb" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.461732 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-jsdwb" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.463969 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nmkt\" (UniqueName: \"kubernetes.io/projected/03ffdd06-e63d-4a43-96f0-92e2d0e3a89d-kube-api-access-4nmkt\") pod \"ovsdbserver-nb-0\" (UID: \"03ffdd06-e63d-4a43-96f0-92e2d0e3a89d\") " pod="openstack/ovsdbserver-nb-0" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.464016 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/995bc563-52dc-4755-b43f-96a2746d8bce-var-run\") pod \"ovn-controller-ovs-ch9bl\" (UID: \"995bc563-52dc-4755-b43f-96a2746d8bce\") " pod="openstack/ovn-controller-ovs-ch9bl" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.464056 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89q4b\" (UniqueName: \"kubernetes.io/projected/995bc563-52dc-4755-b43f-96a2746d8bce-kube-api-access-89q4b\") pod \"ovn-controller-ovs-ch9bl\" (UID: \"995bc563-52dc-4755-b43f-96a2746d8bce\") " pod="openstack/ovn-controller-ovs-ch9bl" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.464097 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/03ffdd06-e63d-4a43-96f0-92e2d0e3a89d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"03ffdd06-e63d-4a43-96f0-92e2d0e3a89d\") " pod="openstack/ovsdbserver-nb-0" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.464123 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/995bc563-52dc-4755-b43f-96a2746d8bce-var-log\") pod \"ovn-controller-ovs-ch9bl\" (UID: \"995bc563-52dc-4755-b43f-96a2746d8bce\") " pod="openstack/ovn-controller-ovs-ch9bl" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 
13:36:06.464147 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03ffdd06-e63d-4a43-96f0-92e2d0e3a89d-config\") pod \"ovsdbserver-nb-0\" (UID: \"03ffdd06-e63d-4a43-96f0-92e2d0e3a89d\") " pod="openstack/ovsdbserver-nb-0" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.464179 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/03ffdd06-e63d-4a43-96f0-92e2d0e3a89d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"03ffdd06-e63d-4a43-96f0-92e2d0e3a89d\") " pod="openstack/ovsdbserver-nb-0" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.464225 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/995bc563-52dc-4755-b43f-96a2746d8bce-etc-ovs\") pod \"ovn-controller-ovs-ch9bl\" (UID: \"995bc563-52dc-4755-b43f-96a2746d8bce\") " pod="openstack/ovn-controller-ovs-ch9bl" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.464247 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/995bc563-52dc-4755-b43f-96a2746d8bce-scripts\") pod \"ovn-controller-ovs-ch9bl\" (UID: \"995bc563-52dc-4755-b43f-96a2746d8bce\") " pod="openstack/ovn-controller-ovs-ch9bl" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.464293 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/03ffdd06-e63d-4a43-96f0-92e2d0e3a89d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"03ffdd06-e63d-4a43-96f0-92e2d0e3a89d\") " pod="openstack/ovsdbserver-nb-0" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.464333 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/03ffdd06-e63d-4a43-96f0-92e2d0e3a89d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"03ffdd06-e63d-4a43-96f0-92e2d0e3a89d\") " pod="openstack/ovsdbserver-nb-0" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.464381 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"03ffdd06-e63d-4a43-96f0-92e2d0e3a89d\") " pod="openstack/ovsdbserver-nb-0" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.464405 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/995bc563-52dc-4755-b43f-96a2746d8bce-var-lib\") pod \"ovn-controller-ovs-ch9bl\" (UID: \"995bc563-52dc-4755-b43f-96a2746d8bce\") " pod="openstack/ovn-controller-ovs-ch9bl" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.464445 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/03ffdd06-e63d-4a43-96f0-92e2d0e3a89d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"03ffdd06-e63d-4a43-96f0-92e2d0e3a89d\") " pod="openstack/ovsdbserver-nb-0" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.464706 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/03ffdd06-e63d-4a43-96f0-92e2d0e3a89d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"03ffdd06-e63d-4a43-96f0-92e2d0e3a89d\") " pod="openstack/ovsdbserver-nb-0" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.464288 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/995bc563-52dc-4755-b43f-96a2746d8bce-var-log\") pod \"ovn-controller-ovs-ch9bl\" (UID: \"995bc563-52dc-4755-b43f-96a2746d8bce\") " pod="openstack/ovn-controller-ovs-ch9bl" Feb 28 
13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.464244 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/995bc563-52dc-4755-b43f-96a2746d8bce-var-run\") pod \"ovn-controller-ovs-ch9bl\" (UID: \"995bc563-52dc-4755-b43f-96a2746d8bce\") " pod="openstack/ovn-controller-ovs-ch9bl" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.465083 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03ffdd06-e63d-4a43-96f0-92e2d0e3a89d-config\") pod \"ovsdbserver-nb-0\" (UID: \"03ffdd06-e63d-4a43-96f0-92e2d0e3a89d\") " pod="openstack/ovsdbserver-nb-0" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.465228 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/995bc563-52dc-4755-b43f-96a2746d8bce-etc-ovs\") pod \"ovn-controller-ovs-ch9bl\" (UID: \"995bc563-52dc-4755-b43f-96a2746d8bce\") " pod="openstack/ovn-controller-ovs-ch9bl" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.465570 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"03ffdd06-e63d-4a43-96f0-92e2d0e3a89d\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/ovsdbserver-nb-0" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.465679 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/03ffdd06-e63d-4a43-96f0-92e2d0e3a89d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"03ffdd06-e63d-4a43-96f0-92e2d0e3a89d\") " pod="openstack/ovsdbserver-nb-0" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.466496 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: 
\"kubernetes.io/host-path/995bc563-52dc-4755-b43f-96a2746d8bce-var-lib\") pod \"ovn-controller-ovs-ch9bl\" (UID: \"995bc563-52dc-4755-b43f-96a2746d8bce\") " pod="openstack/ovn-controller-ovs-ch9bl" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.468090 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/995bc563-52dc-4755-b43f-96a2746d8bce-scripts\") pod \"ovn-controller-ovs-ch9bl\" (UID: \"995bc563-52dc-4755-b43f-96a2746d8bce\") " pod="openstack/ovn-controller-ovs-ch9bl" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.468560 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/03ffdd06-e63d-4a43-96f0-92e2d0e3a89d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"03ffdd06-e63d-4a43-96f0-92e2d0e3a89d\") " pod="openstack/ovsdbserver-nb-0" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.470060 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/03ffdd06-e63d-4a43-96f0-92e2d0e3a89d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"03ffdd06-e63d-4a43-96f0-92e2d0e3a89d\") " pod="openstack/ovsdbserver-nb-0" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.480037 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ffdd06-e63d-4a43-96f0-92e2d0e3a89d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"03ffdd06-e63d-4a43-96f0-92e2d0e3a89d\") " pod="openstack/ovsdbserver-nb-0" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.486922 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89q4b\" (UniqueName: \"kubernetes.io/projected/995bc563-52dc-4755-b43f-96a2746d8bce-kube-api-access-89q4b\") pod \"ovn-controller-ovs-ch9bl\" (UID: 
\"995bc563-52dc-4755-b43f-96a2746d8bce\") " pod="openstack/ovn-controller-ovs-ch9bl" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.496460 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nmkt\" (UniqueName: \"kubernetes.io/projected/03ffdd06-e63d-4a43-96f0-92e2d0e3a89d-kube-api-access-4nmkt\") pod \"ovsdbserver-nb-0\" (UID: \"03ffdd06-e63d-4a43-96f0-92e2d0e3a89d\") " pod="openstack/ovsdbserver-nb-0" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.497330 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"03ffdd06-e63d-4a43-96f0-92e2d0e3a89d\") " pod="openstack/ovsdbserver-nb-0" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.517517 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 28 13:36:06 crc kubenswrapper[4897]: I0228 13:36:06.530635 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-ch9bl" Feb 28 13:36:09 crc kubenswrapper[4897]: I0228 13:36:09.893573 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 28 13:36:09 crc kubenswrapper[4897]: I0228 13:36:09.895494 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 28 13:36:09 crc kubenswrapper[4897]: I0228 13:36:09.897527 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-54rbn" Feb 28 13:36:09 crc kubenswrapper[4897]: I0228 13:36:09.897585 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 28 13:36:09 crc kubenswrapper[4897]: I0228 13:36:09.897526 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 28 13:36:09 crc kubenswrapper[4897]: I0228 13:36:09.897860 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 28 13:36:09 crc kubenswrapper[4897]: I0228 13:36:09.918956 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 28 13:36:10 crc kubenswrapper[4897]: I0228 13:36:10.047365 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48d78132-b30d-4c29-8137-7af1597f8cc6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"48d78132-b30d-4c29-8137-7af1597f8cc6\") " pod="openstack/ovsdbserver-sb-0" Feb 28 13:36:10 crc kubenswrapper[4897]: I0228 13:36:10.047413 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"48d78132-b30d-4c29-8137-7af1597f8cc6\") " pod="openstack/ovsdbserver-sb-0" Feb 28 13:36:10 crc kubenswrapper[4897]: I0228 13:36:10.047463 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fwzx\" (UniqueName: \"kubernetes.io/projected/48d78132-b30d-4c29-8137-7af1597f8cc6-kube-api-access-9fwzx\") pod \"ovsdbserver-sb-0\" (UID: 
\"48d78132-b30d-4c29-8137-7af1597f8cc6\") " pod="openstack/ovsdbserver-sb-0" Feb 28 13:36:10 crc kubenswrapper[4897]: I0228 13:36:10.047519 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48d78132-b30d-4c29-8137-7af1597f8cc6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"48d78132-b30d-4c29-8137-7af1597f8cc6\") " pod="openstack/ovsdbserver-sb-0" Feb 28 13:36:10 crc kubenswrapper[4897]: I0228 13:36:10.047537 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/48d78132-b30d-4c29-8137-7af1597f8cc6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"48d78132-b30d-4c29-8137-7af1597f8cc6\") " pod="openstack/ovsdbserver-sb-0" Feb 28 13:36:10 crc kubenswrapper[4897]: I0228 13:36:10.047707 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/48d78132-b30d-4c29-8137-7af1597f8cc6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"48d78132-b30d-4c29-8137-7af1597f8cc6\") " pod="openstack/ovsdbserver-sb-0" Feb 28 13:36:10 crc kubenswrapper[4897]: I0228 13:36:10.047780 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48d78132-b30d-4c29-8137-7af1597f8cc6-config\") pod \"ovsdbserver-sb-0\" (UID: \"48d78132-b30d-4c29-8137-7af1597f8cc6\") " pod="openstack/ovsdbserver-sb-0" Feb 28 13:36:10 crc kubenswrapper[4897]: I0228 13:36:10.047846 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/48d78132-b30d-4c29-8137-7af1597f8cc6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"48d78132-b30d-4c29-8137-7af1597f8cc6\") " pod="openstack/ovsdbserver-sb-0" Feb 
28 13:36:10 crc kubenswrapper[4897]: I0228 13:36:10.153002 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/48d78132-b30d-4c29-8137-7af1597f8cc6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"48d78132-b30d-4c29-8137-7af1597f8cc6\") " pod="openstack/ovsdbserver-sb-0" Feb 28 13:36:10 crc kubenswrapper[4897]: I0228 13:36:10.153075 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48d78132-b30d-4c29-8137-7af1597f8cc6-config\") pod \"ovsdbserver-sb-0\" (UID: \"48d78132-b30d-4c29-8137-7af1597f8cc6\") " pod="openstack/ovsdbserver-sb-0" Feb 28 13:36:10 crc kubenswrapper[4897]: I0228 13:36:10.153132 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/48d78132-b30d-4c29-8137-7af1597f8cc6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"48d78132-b30d-4c29-8137-7af1597f8cc6\") " pod="openstack/ovsdbserver-sb-0" Feb 28 13:36:10 crc kubenswrapper[4897]: I0228 13:36:10.153177 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48d78132-b30d-4c29-8137-7af1597f8cc6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"48d78132-b30d-4c29-8137-7af1597f8cc6\") " pod="openstack/ovsdbserver-sb-0" Feb 28 13:36:10 crc kubenswrapper[4897]: I0228 13:36:10.153213 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"48d78132-b30d-4c29-8137-7af1597f8cc6\") " pod="openstack/ovsdbserver-sb-0" Feb 28 13:36:10 crc kubenswrapper[4897]: I0228 13:36:10.153268 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fwzx\" (UniqueName: 
\"kubernetes.io/projected/48d78132-b30d-4c29-8137-7af1597f8cc6-kube-api-access-9fwzx\") pod \"ovsdbserver-sb-0\" (UID: \"48d78132-b30d-4c29-8137-7af1597f8cc6\") " pod="openstack/ovsdbserver-sb-0" Feb 28 13:36:10 crc kubenswrapper[4897]: I0228 13:36:10.153410 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48d78132-b30d-4c29-8137-7af1597f8cc6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"48d78132-b30d-4c29-8137-7af1597f8cc6\") " pod="openstack/ovsdbserver-sb-0" Feb 28 13:36:10 crc kubenswrapper[4897]: I0228 13:36:10.153447 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/48d78132-b30d-4c29-8137-7af1597f8cc6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"48d78132-b30d-4c29-8137-7af1597f8cc6\") " pod="openstack/ovsdbserver-sb-0" Feb 28 13:36:10 crc kubenswrapper[4897]: I0228 13:36:10.154096 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/48d78132-b30d-4c29-8137-7af1597f8cc6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"48d78132-b30d-4c29-8137-7af1597f8cc6\") " pod="openstack/ovsdbserver-sb-0" Feb 28 13:36:10 crc kubenswrapper[4897]: I0228 13:36:10.154415 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"48d78132-b30d-4c29-8137-7af1597f8cc6\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-sb-0" Feb 28 13:36:10 crc kubenswrapper[4897]: I0228 13:36:10.155432 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48d78132-b30d-4c29-8137-7af1597f8cc6-config\") pod \"ovsdbserver-sb-0\" (UID: \"48d78132-b30d-4c29-8137-7af1597f8cc6\") " 
pod="openstack/ovsdbserver-sb-0" Feb 28 13:36:10 crc kubenswrapper[4897]: I0228 13:36:10.156197 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48d78132-b30d-4c29-8137-7af1597f8cc6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"48d78132-b30d-4c29-8137-7af1597f8cc6\") " pod="openstack/ovsdbserver-sb-0" Feb 28 13:36:10 crc kubenswrapper[4897]: I0228 13:36:10.191475 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48d78132-b30d-4c29-8137-7af1597f8cc6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"48d78132-b30d-4c29-8137-7af1597f8cc6\") " pod="openstack/ovsdbserver-sb-0" Feb 28 13:36:10 crc kubenswrapper[4897]: I0228 13:36:10.206412 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fwzx\" (UniqueName: \"kubernetes.io/projected/48d78132-b30d-4c29-8137-7af1597f8cc6-kube-api-access-9fwzx\") pod \"ovsdbserver-sb-0\" (UID: \"48d78132-b30d-4c29-8137-7af1597f8cc6\") " pod="openstack/ovsdbserver-sb-0" Feb 28 13:36:10 crc kubenswrapper[4897]: I0228 13:36:10.216254 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/48d78132-b30d-4c29-8137-7af1597f8cc6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"48d78132-b30d-4c29-8137-7af1597f8cc6\") " pod="openstack/ovsdbserver-sb-0" Feb 28 13:36:10 crc kubenswrapper[4897]: I0228 13:36:10.219709 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/48d78132-b30d-4c29-8137-7af1597f8cc6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"48d78132-b30d-4c29-8137-7af1597f8cc6\") " pod="openstack/ovsdbserver-sb-0" Feb 28 13:36:10 crc kubenswrapper[4897]: I0228 13:36:10.304671 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"48d78132-b30d-4c29-8137-7af1597f8cc6\") " pod="openstack/ovsdbserver-sb-0" Feb 28 13:36:10 crc kubenswrapper[4897]: I0228 13:36:10.527016 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 28 13:36:13 crc kubenswrapper[4897]: E0228 13:36:13.224119 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 28 13:36:13 crc kubenswrapper[4897]: E0228 13:36:13.224703 4897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 28 13:36:13 crc kubenswrapper[4897]: E0228 13:36:13.224821 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.80:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-njd6q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-574fdb7f99-vqtfc_openstack(fdd35241-336f-4314-b048-5a046957eadf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 13:36:13 crc kubenswrapper[4897]: E0228 13:36:13.225937 4897 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-574fdb7f99-vqtfc" podUID="fdd35241-336f-4314-b048-5a046957eadf" Feb 28 13:36:13 crc kubenswrapper[4897]: E0228 13:36:13.267553 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 28 13:36:13 crc kubenswrapper[4897]: E0228 13:36:13.267612 4897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 28 13:36:13 crc kubenswrapper[4897]: E0228 13:36:13.267719 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.80:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9sxnp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78ff9dfd65-m7448_openstack(b338c6b2-22f6-456c-8697-ca754996cd0d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 13:36:13 crc kubenswrapper[4897]: E0228 13:36:13.268854 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-78ff9dfd65-m7448" podUID="b338c6b2-22f6-456c-8697-ca754996cd0d" Feb 28 13:36:14 crc kubenswrapper[4897]: I0228 13:36:14.145891 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/notifications-rabbitmq-server-0"] Feb 28 13:36:14 crc kubenswrapper[4897]: W0228 13:36:14.148877 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48885530_3df1_42cf_9c7f_2f86a21026a9.slice/crio-f8f1b34fc827fca19c2fb7046114beb39caf101a005540eb642bb2e78aad3627 WatchSource:0}: Error finding container f8f1b34fc827fca19c2fb7046114beb39caf101a005540eb642bb2e78aad3627: Status 404 returned error can't find the container with id f8f1b34fc827fca19c2fb7046114beb39caf101a005540eb642bb2e78aad3627 Feb 28 13:36:14 crc kubenswrapper[4897]: W0228 13:36:14.156083 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bf46d42_2d7e_410d_8a74_1ce12bb280b2.slice/crio-46ab728f52359146111b9eafd7e39ce4b85351f723695b66b4128f8d614e9490 WatchSource:0}: Error finding container 46ab728f52359146111b9eafd7e39ce4b85351f723695b66b4128f8d614e9490: Status 404 returned error can't find the container with id 46ab728f52359146111b9eafd7e39ce4b85351f723695b66b4128f8d614e9490 Feb 28 13:36:14 crc kubenswrapper[4897]: I0228 13:36:14.157294 4897 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 13:36:14 crc kubenswrapper[4897]: I0228 13:36:14.158095 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 28 13:36:14 crc kubenswrapper[4897]: I0228 13:36:14.264179 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f49bcf4c9-fvv8t"] Feb 28 13:36:14 crc kubenswrapper[4897]: I0228 13:36:14.321294 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 28 13:36:14 crc 
kubenswrapper[4897]: I0228 13:36:14.325139 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 28 13:36:14 crc kubenswrapper[4897]: W0228 13:36:14.338041 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a792d6c_3a28_4775_87bf_b099ea550a00.slice/crio-b67c44951723863ac485ae3266e9f28aa96512781926061443d37a2024246083 WatchSource:0}: Error finding container b67c44951723863ac485ae3266e9f28aa96512781926061443d37a2024246083: Status 404 returned error can't find the container with id b67c44951723863ac485ae3266e9f28aa96512781926061443d37a2024246083 Feb 28 13:36:14 crc kubenswrapper[4897]: W0228 13:36:14.347715 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb99e06f_c263_4aef_b5c2_330eaed29fd4.slice/crio-591a73309f3dfb473a128a4b4ad1ae5a3f705e4cf1b3f36463f3de86d2117550 WatchSource:0}: Error finding container 591a73309f3dfb473a128a4b4ad1ae5a3f705e4cf1b3f36463f3de86d2117550: Status 404 returned error can't find the container with id 591a73309f3dfb473a128a4b4ad1ae5a3f705e4cf1b3f36463f3de86d2117550 Feb 28 13:36:14 crc kubenswrapper[4897]: I0228 13:36:14.536460 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78ff9dfd65-m7448" Feb 28 13:36:14 crc kubenswrapper[4897]: I0228 13:36:14.550390 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538096-ws9qt"] Feb 28 13:36:14 crc kubenswrapper[4897]: I0228 13:36:14.554556 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-574fdb7f99-vqtfc" Feb 28 13:36:14 crc kubenswrapper[4897]: I0228 13:36:14.556701 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 28 13:36:14 crc kubenswrapper[4897]: I0228 13:36:14.606627 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jsdwb"] Feb 28 13:36:14 crc kubenswrapper[4897]: I0228 13:36:14.647415 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b338c6b2-22f6-456c-8697-ca754996cd0d-config\") pod \"b338c6b2-22f6-456c-8697-ca754996cd0d\" (UID: \"b338c6b2-22f6-456c-8697-ca754996cd0d\") " Feb 28 13:36:14 crc kubenswrapper[4897]: I0228 13:36:14.647500 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sxnp\" (UniqueName: \"kubernetes.io/projected/b338c6b2-22f6-456c-8697-ca754996cd0d-kube-api-access-9sxnp\") pod \"b338c6b2-22f6-456c-8697-ca754996cd0d\" (UID: \"b338c6b2-22f6-456c-8697-ca754996cd0d\") " Feb 28 13:36:14 crc kubenswrapper[4897]: I0228 13:36:14.647564 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njd6q\" (UniqueName: \"kubernetes.io/projected/fdd35241-336f-4314-b048-5a046957eadf-kube-api-access-njd6q\") pod \"fdd35241-336f-4314-b048-5a046957eadf\" (UID: \"fdd35241-336f-4314-b048-5a046957eadf\") " Feb 28 13:36:14 crc kubenswrapper[4897]: I0228 13:36:14.647642 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdd35241-336f-4314-b048-5a046957eadf-config\") pod \"fdd35241-336f-4314-b048-5a046957eadf\" (UID: \"fdd35241-336f-4314-b048-5a046957eadf\") " Feb 28 13:36:14 crc kubenswrapper[4897]: I0228 13:36:14.648038 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/fdd35241-336f-4314-b048-5a046957eadf-dns-svc\") pod \"fdd35241-336f-4314-b048-5a046957eadf\" (UID: \"fdd35241-336f-4314-b048-5a046957eadf\") " Feb 28 13:36:14 crc kubenswrapper[4897]: I0228 13:36:14.648358 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdd35241-336f-4314-b048-5a046957eadf-config" (OuterVolumeSpecName: "config") pod "fdd35241-336f-4314-b048-5a046957eadf" (UID: "fdd35241-336f-4314-b048-5a046957eadf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:36:14 crc kubenswrapper[4897]: I0228 13:36:14.648734 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdd35241-336f-4314-b048-5a046957eadf-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:36:14 crc kubenswrapper[4897]: I0228 13:36:14.648839 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b338c6b2-22f6-456c-8697-ca754996cd0d-config" (OuterVolumeSpecName: "config") pod "b338c6b2-22f6-456c-8697-ca754996cd0d" (UID: "b338c6b2-22f6-456c-8697-ca754996cd0d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:36:14 crc kubenswrapper[4897]: I0228 13:36:14.649750 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdd35241-336f-4314-b048-5a046957eadf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fdd35241-336f-4314-b048-5a046957eadf" (UID: "fdd35241-336f-4314-b048-5a046957eadf"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:36:14 crc kubenswrapper[4897]: I0228 13:36:14.656574 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b338c6b2-22f6-456c-8697-ca754996cd0d-kube-api-access-9sxnp" (OuterVolumeSpecName: "kube-api-access-9sxnp") pod "b338c6b2-22f6-456c-8697-ca754996cd0d" (UID: "b338c6b2-22f6-456c-8697-ca754996cd0d"). InnerVolumeSpecName "kube-api-access-9sxnp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:36:14 crc kubenswrapper[4897]: I0228 13:36:14.656629 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdd35241-336f-4314-b048-5a046957eadf-kube-api-access-njd6q" (OuterVolumeSpecName: "kube-api-access-njd6q") pod "fdd35241-336f-4314-b048-5a046957eadf" (UID: "fdd35241-336f-4314-b048-5a046957eadf"). InnerVolumeSpecName "kube-api-access-njd6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:36:14 crc kubenswrapper[4897]: I0228 13:36:14.668953 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 28 13:36:14 crc kubenswrapper[4897]: I0228 13:36:14.750035 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b338c6b2-22f6-456c-8697-ca754996cd0d-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:36:14 crc kubenswrapper[4897]: I0228 13:36:14.750078 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sxnp\" (UniqueName: \"kubernetes.io/projected/b338c6b2-22f6-456c-8697-ca754996cd0d-kube-api-access-9sxnp\") on node \"crc\" DevicePath \"\"" Feb 28 13:36:14 crc kubenswrapper[4897]: I0228 13:36:14.750096 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njd6q\" (UniqueName: \"kubernetes.io/projected/fdd35241-336f-4314-b048-5a046957eadf-kube-api-access-njd6q\") on node \"crc\" DevicePath \"\"" Feb 28 13:36:14 crc kubenswrapper[4897]: I0228 
13:36:14.750110 4897 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdd35241-336f-4314-b048-5a046957eadf-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 13:36:14 crc kubenswrapper[4897]: I0228 13:36:14.957183 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 28 13:36:14 crc kubenswrapper[4897]: I0228 13:36:14.975180 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7587d7df99-vz298"] Feb 28 13:36:14 crc kubenswrapper[4897]: I0228 13:36:14.985412 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 28 13:36:14 crc kubenswrapper[4897]: I0228 13:36:14.997093 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 28 13:36:15 crc kubenswrapper[4897]: W0228 13:36:15.007446 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdcbdd69_5241_4875_aceb_401d47d6fad5.slice/crio-7af1c888e07e8a343df7bba4ee844368b971db787858587a9a0101040bf76234 WatchSource:0}: Error finding container 7af1c888e07e8a343df7bba4ee844368b971db787858587a9a0101040bf76234: Status 404 returned error can't find the container with id 7af1c888e07e8a343df7bba4ee844368b971db787858587a9a0101040bf76234 Feb 28 13:36:15 crc kubenswrapper[4897]: W0228 13:36:15.019387 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod631372ff_9a4e_4110_9ff4_aad528049a06.slice/crio-5ef0e38f67ae009632a0f5ed5d477fad0500f0431e93d665abf36af40e9a8ca3 WatchSource:0}: Error finding container 5ef0e38f67ae009632a0f5ed5d477fad0500f0431e93d665abf36af40e9a8ca3: Status 404 returned error can't find the container with id 5ef0e38f67ae009632a0f5ed5d477fad0500f0431e93d665abf36af40e9a8ca3 Feb 28 13:36:15 crc kubenswrapper[4897]: W0228 13:36:15.021284 4897 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ed1e6c8_c823_4fd1_ab0d_5460b6024cd6.slice/crio-17a241890ba78ab776ed7e52d9645bfb5c1ca9256d62946e6a83b55633398a72 WatchSource:0}: Error finding container 17a241890ba78ab776ed7e52d9645bfb5c1ca9256d62946e6a83b55633398a72: Status 404 returned error can't find the container with id 17a241890ba78ab776ed7e52d9645bfb5c1ca9256d62946e6a83b55633398a72 Feb 28 13:36:15 crc kubenswrapper[4897]: I0228 13:36:15.055532 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d7f297ea-652d-47ae-9831-fad10c6127ad","Type":"ContainerStarted","Data":"c725ec9bf6f606d61d317d887ebb9060af8679d5ccc66eb018308d17ebabc0e1"} Feb 28 13:36:15 crc kubenswrapper[4897]: I0228 13:36:15.059964 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 28 13:36:15 crc kubenswrapper[4897]: I0228 13:36:15.060809 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-574fdb7f99-vqtfc" event={"ID":"fdd35241-336f-4314-b048-5a046957eadf","Type":"ContainerDied","Data":"689c1b5a0002098fcbae6587f4e0a405d5fe7b87f0840637507aa0cd146c33a4"} Feb 28 13:36:15 crc kubenswrapper[4897]: I0228 13:36:15.060898 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-574fdb7f99-vqtfc" Feb 28 13:36:15 crc kubenswrapper[4897]: I0228 13:36:15.063865 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5a792d6c-3a28-4775-87bf-b099ea550a00","Type":"ContainerStarted","Data":"b67c44951723863ac485ae3266e9f28aa96512781926061443d37a2024246083"} Feb 28 13:36:15 crc kubenswrapper[4897]: I0228 13:36:15.066163 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"48885530-3df1-42cf-9c7f-2f86a21026a9","Type":"ContainerStarted","Data":"f8f1b34fc827fca19c2fb7046114beb39caf101a005540eb642bb2e78aad3627"} Feb 28 13:36:15 crc kubenswrapper[4897]: I0228 13:36:15.068987 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"db99e06f-c263-4aef-b5c2-330eaed29fd4","Type":"ContainerStarted","Data":"591a73309f3dfb473a128a4b4ad1ae5a3f705e4cf1b3f36463f3de86d2117550"} Feb 28 13:36:15 crc kubenswrapper[4897]: I0228 13:36:15.071407 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jsdwb" event={"ID":"cd2fa5a5-caab-4d3d-8324-f6107d50f59f","Type":"ContainerStarted","Data":"995ac1459e7bc37a78d351ebdd1a3d1417b86bad9d653bb1083083549c9cbaec"} Feb 28 13:36:15 crc kubenswrapper[4897]: I0228 13:36:15.081581 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78ff9dfd65-m7448" event={"ID":"b338c6b2-22f6-456c-8697-ca754996cd0d","Type":"ContainerDied","Data":"07147d418d3671234529eb5800e77d43b4718d027a67a68ae6100b45c392b902"} Feb 28 13:36:15 crc kubenswrapper[4897]: I0228 13:36:15.081737 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78ff9dfd65-m7448" Feb 28 13:36:15 crc kubenswrapper[4897]: I0228 13:36:15.083483 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7587d7df99-vz298" event={"ID":"bdcbdd69-5241-4875-aceb-401d47d6fad5","Type":"ContainerStarted","Data":"7af1c888e07e8a343df7bba4ee844368b971db787858587a9a0101040bf76234"} Feb 28 13:36:15 crc kubenswrapper[4897]: I0228 13:36:15.088132 4897 generic.go:334] "Generic (PLEG): container finished" podID="5c2f7b4a-52e1-4c07-8783-1ab96747f5bb" containerID="aecd186cf544efd6e74145e8644e0899cd90e64c5f7a1814c3397f40e6bf3157" exitCode=0 Feb 28 13:36:15 crc kubenswrapper[4897]: I0228 13:36:15.088291 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f49bcf4c9-fvv8t" event={"ID":"5c2f7b4a-52e1-4c07-8783-1ab96747f5bb","Type":"ContainerDied","Data":"aecd186cf544efd6e74145e8644e0899cd90e64c5f7a1814c3397f40e6bf3157"} Feb 28 13:36:15 crc kubenswrapper[4897]: I0228 13:36:15.088337 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f49bcf4c9-fvv8t" event={"ID":"5c2f7b4a-52e1-4c07-8783-1ab96747f5bb","Type":"ContainerStarted","Data":"19d8c6d41d3f2254a291e4cd205eef1ed559428f54afc69d01cdd69f7d4a2abd"} Feb 28 13:36:15 crc kubenswrapper[4897]: I0228 13:36:15.104587 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"631372ff-9a4e-4110-9ff4-aad528049a06","Type":"ContainerStarted","Data":"5ef0e38f67ae009632a0f5ed5d477fad0500f0431e93d665abf36af40e9a8ca3"} Feb 28 13:36:15 crc kubenswrapper[4897]: I0228 13:36:15.117883 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-574fdb7f99-vqtfc"] Feb 28 13:36:15 crc kubenswrapper[4897]: I0228 13:36:15.123500 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"03ffdd06-e63d-4a43-96f0-92e2d0e3a89d","Type":"ContainerStarted","Data":"5d0cef52f94c4244d714a499e901170edc0b592fee597a27ccf9097d1078263d"} Feb 28 13:36:15 crc kubenswrapper[4897]: I0228 13:36:15.136678 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-574fdb7f99-vqtfc"] Feb 28 13:36:15 crc kubenswrapper[4897]: I0228 13:36:15.165634 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6bf46d42-2d7e-410d-8a74-1ce12bb280b2","Type":"ContainerStarted","Data":"46ab728f52359146111b9eafd7e39ce4b85351f723695b66b4128f8d614e9490"} Feb 28 13:36:15 crc kubenswrapper[4897]: I0228 13:36:15.182639 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538096-ws9qt" event={"ID":"6e94c0b2-21a6-496c-8188-dfcaf0d66b2b","Type":"ContainerStarted","Data":"488a60779576cd01ac6884baae1de674651f0f5bf2089ac1b496442c30cb875d"} Feb 28 13:36:15 crc kubenswrapper[4897]: I0228 13:36:15.187957 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-ch9bl"] Feb 28 13:36:15 crc kubenswrapper[4897]: I0228 13:36:15.194644 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78ff9dfd65-m7448"] Feb 28 13:36:15 crc kubenswrapper[4897]: I0228 13:36:15.199296 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78ff9dfd65-m7448"] Feb 28 13:36:15 crc kubenswrapper[4897]: E0228 13:36:15.532101 4897 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 28 13:36:15 crc kubenswrapper[4897]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/5c2f7b4a-52e1-4c07-8783-1ab96747f5bb/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 28 13:36:15 crc kubenswrapper[4897]: > podSandboxID="19d8c6d41d3f2254a291e4cd205eef1ed559428f54afc69d01cdd69f7d4a2abd" Feb 28 13:36:15 crc 
kubenswrapper[4897]: E0228 13:36:15.532539 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 13:36:15 crc kubenswrapper[4897]: container &Container{Name:dnsmasq-dns,Image:38.102.83.80:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n78lj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6f49bcf4c9-fvv8t_openstack(5c2f7b4a-52e1-4c07-8783-1ab96747f5bb): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/5c2f7b4a-52e1-4c07-8783-1ab96747f5bb/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 28 13:36:15 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 13:36:15 crc kubenswrapper[4897]: E0228 13:36:15.533877 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/5c2f7b4a-52e1-4c07-8783-1ab96747f5bb/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-6f49bcf4c9-fvv8t" podUID="5c2f7b4a-52e1-4c07-8783-1ab96747f5bb" Feb 28 13:36:15 crc kubenswrapper[4897]: E0228 13:36:15.684331 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 28 13:36:15 crc kubenswrapper[4897]: E0228 13:36:15.684518 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 13:36:15 crc kubenswrapper[4897]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 28 13:36:15 crc kubenswrapper[4897]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rqr7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29538096-ws9qt_openshift-infra(6e94c0b2-21a6-496c-8188-dfcaf0d66b2b): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 28 13:36:15 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 13:36:15 crc kubenswrapper[4897]: E0228 13:36:15.685701 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29538096-ws9qt" podUID="6e94c0b2-21a6-496c-8188-dfcaf0d66b2b" Feb 28 13:36:15 crc kubenswrapper[4897]: E0228 13:36:15.695466 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256=a5a5f6e01185078c4bef79d1b4d9fd021ffbe235c231cc9395fc14b8367eca67/signature-4: status 500 (Internal Server Error)" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a" Feb 28 13:36:15 crc kubenswrapper[4897]: E0228 13:36:15.695640 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init-config-reloader,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a,Command:[/bin/prometheus-config-reloader],Args:[--watch-interval=0 --listen-address=:8081 --config-file=/etc/prometheus/config/prometheus.yaml.gz --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0 --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1 
--watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:reloader-init,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:SHARD,Value:0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/prometheus/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-out,ReadOnly:false,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-0,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-1,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-2,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fr856,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],
},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256=a5a5f6e01185078c4bef79d1b4d9fd021ffbe235c231cc9395fc14b8367eca67/signature-4: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 13:36:15 crc kubenswrapper[4897]: E0228 13:36:15.697690 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256=a5a5f6e01185078c4bef79d1b4d9fd021ffbe235c231cc9395fc14b8367eca67/signature-4: status 500 (Internal Server Error)\"" pod="openstack/prometheus-metric-storage-0" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" Feb 28 13:36:16 crc kubenswrapper[4897]: I0228 13:36:16.190565 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"48d78132-b30d-4c29-8137-7af1597f8cc6","Type":"ContainerStarted","Data":"d93c3ef57b3cb4b4bde0d15938b65df94b7caba104ed0480efcdf842645d7601"} Feb 28 13:36:16 crc kubenswrapper[4897]: I0228 13:36:16.193264 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/prometheus-metric-storage-0" event={"ID":"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6","Type":"ContainerStarted","Data":"17a241890ba78ab776ed7e52d9645bfb5c1ca9256d62946e6a83b55633398a72"} Feb 28 13:36:16 crc kubenswrapper[4897]: I0228 13:36:16.194649 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ch9bl" event={"ID":"995bc563-52dc-4755-b43f-96a2746d8bce","Type":"ContainerStarted","Data":"138f4f9b2926397069a980ed4490eddd7267daeb60d28540cb6ef1c4cfb116f2"} Feb 28 13:36:16 crc kubenswrapper[4897]: E0228 13:36:16.194675 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" Feb 28 13:36:16 crc kubenswrapper[4897]: I0228 13:36:16.197626 4897 generic.go:334] "Generic (PLEG): container finished" podID="bdcbdd69-5241-4875-aceb-401d47d6fad5" containerID="4ef5f28aaccb2427f711815739bf78fd0d7c9a4d57c0c6b30bfa0a772b33b8c4" exitCode=0 Feb 28 13:36:16 crc kubenswrapper[4897]: I0228 13:36:16.197688 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7587d7df99-vz298" event={"ID":"bdcbdd69-5241-4875-aceb-401d47d6fad5","Type":"ContainerDied","Data":"4ef5f28aaccb2427f711815739bf78fd0d7c9a4d57c0c6b30bfa0a772b33b8c4"} Feb 28 13:36:16 crc kubenswrapper[4897]: I0228 13:36:16.199615 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"f032f5e9-4992-4586-bd47-0c3da76ecf40","Type":"ContainerStarted","Data":"f2bd27268227c41a503e7e05d624add799a1e700ddeabc66102f4fe4e578a71e"} Feb 28 13:36:16 crc kubenswrapper[4897]: E0228 13:36:16.430089 4897 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538096-ws9qt" podUID="6e94c0b2-21a6-496c-8188-dfcaf0d66b2b" Feb 28 13:36:16 crc kubenswrapper[4897]: I0228 13:36:16.471199 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b338c6b2-22f6-456c-8697-ca754996cd0d" path="/var/lib/kubelet/pods/b338c6b2-22f6-456c-8697-ca754996cd0d/volumes" Feb 28 13:36:16 crc kubenswrapper[4897]: I0228 13:36:16.471594 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdd35241-336f-4314-b048-5a046957eadf" path="/var/lib/kubelet/pods/fdd35241-336f-4314-b048-5a046957eadf/volumes" Feb 28 13:36:17 crc kubenswrapper[4897]: E0228 13:36:17.219526 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" Feb 28 13:36:27 crc kubenswrapper[4897]: E0228 13:36:27.479012 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-mariadb:watcher_latest" Feb 28 13:36:27 crc kubenswrapper[4897]: E0228 13:36:27.479602 4897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-mariadb:watcher_latest" Feb 28 13:36:27 crc kubenswrapper[4897]: E0228 13:36:27.479727 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:mysql-bootstrap,Image:38.102.83.80:5001/podified-master-centos10/openstack-mariadb:watcher_latest,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vvkjf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Conta
inerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(db99e06f-c263-4aef-b5c2-330eaed29fd4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 13:36:27 crc kubenswrapper[4897]: E0228 13:36:27.481526 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="db99e06f-c263-4aef-b5c2-330eaed29fd4" Feb 28 13:36:27 crc kubenswrapper[4897]: E0228 13:36:27.513464 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-ovn-base:watcher_latest" Feb 28 13:36:27 crc kubenswrapper[4897]: E0228 13:36:27.513542 4897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-ovn-base:watcher_latest" Feb 28 13:36:27 crc kubenswrapper[4897]: E0228 13:36:27.513716 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:ovsdb-server-init,Image:38.102.83.80:5001/podified-master-centos10/openstack-ovn-base:watcher_latest,Command:[/usr/local/bin/container-scripts/init-ovsdb-server.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5d6h5d4h547h5h654h54ch5f8h8bh7h5cch5ddhf5h7dh577h9dh8bh67fh596hf8hb6h5c5hffh8bhf9h5bfh4h677hf4h79h67fh5fdh66fq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-ovs,ReadOnly:false,MountPath:/etc/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log,ReadOnly:false,MountPath:/var/log/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib,ReadOnly:false,MountPath:/var/lib/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-89q4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN 
SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-ovs-ch9bl_openstack(995bc563-52dc-4755-b43f-96a2746d8bce): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 13:36:27 crc kubenswrapper[4897]: E0228 13:36:27.514981 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-ovs-ch9bl" podUID="995bc563-52dc-4755-b43f-96a2746d8bce" Feb 28 13:36:27 crc kubenswrapper[4897]: E0228 13:36:27.543205 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-mariadb:watcher_latest" Feb 28 13:36:27 crc kubenswrapper[4897]: E0228 13:36:27.543268 4897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-mariadb:watcher_latest" Feb 28 13:36:27 crc kubenswrapper[4897]: E0228 13:36:27.543480 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:38.102.83.80:5001/podified-master-centos10/openstack-mariadb:watcher_latest,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xsmtm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
openstack-cell1-galera-0_openstack(d7f297ea-652d-47ae-9831-fad10c6127ad): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 13:36:27 crc kubenswrapper[4897]: E0228 13:36:27.544747 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="d7f297ea-652d-47ae-9831-fad10c6127ad" Feb 28 13:36:27 crc kubenswrapper[4897]: E0228 13:36:27.976716 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest" Feb 28 13:36:27 crc kubenswrapper[4897]: E0228 13:36:27.977195 4897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest" Feb 28 13:36:27 crc kubenswrapper[4897]: E0228 13:36:27.977425 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:38.102.83.80:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key 
--ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5d6h5d4h547h5h654h54ch5f8h8bh7h5cch5ddhf5h7dh577h9dh8bh67fh596hf8hb6h5c5hffh8bhf9h5bfh4h677hf4h79h67fh5fdh66fq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rmltt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},
InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-jsdwb_openstack(cd2fa5a5-caab-4d3d-8324-f6107d50f59f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 13:36:27 crc kubenswrapper[4897]: E0228 13:36:27.978716 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-jsdwb" podUID="cd2fa5a5-caab-4d3d-8324-f6107d50f59f" Feb 28 13:36:28 crc kubenswrapper[4897]: E0228 13:36:28.198704 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="38.102.83.80:5001/podified-master-centos10/openstack-ovn-nb-db-server:watcher_latest" Feb 28 13:36:28 crc kubenswrapper[4897]: E0228 13:36:28.198774 4897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-ovn-nb-db-server:watcher_latest" Feb 28 13:36:28 crc kubenswrapper[4897]: E0228 13:36:28.198899 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovsdbserver-nb,Image:38.102.83.80:5001/podified-master-centos10/openstack-ovn-nb-db-server:watcher_latest,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n657h5d7hdh576h68h8h586h5ch596h7ch568h599hdh6h586h5dfh66ch67ch96h67fh65dh58h698h565h546h684hc9h579hch664h7fh579q,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-nb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pk
i/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4nmkt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof 
ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(03ffdd06-e63d-4a43-96f0-92e2d0e3a89d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 13:36:28 crc kubenswrapper[4897]: E0228 13:36:28.313067 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.80:5001/podified-master-centos10/openstack-mariadb:watcher_latest\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="d7f297ea-652d-47ae-9831-fad10c6127ad" Feb 28 13:36:28 crc kubenswrapper[4897]: E0228 13:36:28.313299 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.80:5001/podified-master-centos10/openstack-mariadb:watcher_latest\\\"\"" pod="openstack/openstack-galera-0" podUID="db99e06f-c263-4aef-b5c2-330eaed29fd4" Feb 28 13:36:28 crc kubenswrapper[4897]: E0228 13:36:28.313638 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.80:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest\\\"\"" pod="openstack/ovn-controller-jsdwb" podUID="cd2fa5a5-caab-4d3d-8324-f6107d50f59f" Feb 28 13:36:28 crc kubenswrapper[4897]: E0228 13:36:28.316966 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.80:5001/podified-master-centos10/openstack-ovn-base:watcher_latest\\\"\"" pod="openstack/ovn-controller-ovs-ch9bl" 
podUID="995bc563-52dc-4755-b43f-96a2746d8bce" Feb 28 13:36:28 crc kubenswrapper[4897]: E0228 13:36:28.851985 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest" Feb 28 13:36:28 crc kubenswrapper[4897]: E0228 13:36:28.852065 4897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest" Feb 28 13:36:28 crc kubenswrapper[4897]: E0228 13:36:28.852384 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovsdbserver-sb,Image:38.102.83.80:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nc9h94h57dh5f8h87h644hfbh7hc8h75hd6h554h647hdh9bh8fh584h58fh575hf9h657h5b5hbbh8dhc7hbdh7dh599h54bh575hc6h677q,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-sb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMou
nt{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9fwzx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof 
ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-sb-0_openstack(48d78132-b30d-4c29-8137-7af1597f8cc6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 13:36:29 crc kubenswrapper[4897]: I0228 13:36:29.676633 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-598bf"] Feb 28 13:36:29 crc kubenswrapper[4897]: I0228 13:36:29.687082 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-598bf"] Feb 28 13:36:29 crc kubenswrapper[4897]: I0228 13:36:29.690519 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-598bf" Feb 28 13:36:29 crc kubenswrapper[4897]: I0228 13:36:29.692461 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 28 13:36:29 crc kubenswrapper[4897]: I0228 13:36:29.816048 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ab588f4-9fad-44d6-a7e2-2e99b19ef285-combined-ca-bundle\") pod \"ovn-controller-metrics-598bf\" (UID: \"5ab588f4-9fad-44d6-a7e2-2e99b19ef285\") " pod="openstack/ovn-controller-metrics-598bf" Feb 28 13:36:29 crc kubenswrapper[4897]: I0228 13:36:29.816109 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ab588f4-9fad-44d6-a7e2-2e99b19ef285-config\") pod \"ovn-controller-metrics-598bf\" (UID: \"5ab588f4-9fad-44d6-a7e2-2e99b19ef285\") " pod="openstack/ovn-controller-metrics-598bf" Feb 28 13:36:29 crc kubenswrapper[4897]: I0228 13:36:29.816133 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/5ab588f4-9fad-44d6-a7e2-2e99b19ef285-ovs-rundir\") pod \"ovn-controller-metrics-598bf\" (UID: \"5ab588f4-9fad-44d6-a7e2-2e99b19ef285\") " pod="openstack/ovn-controller-metrics-598bf" Feb 28 13:36:29 crc kubenswrapper[4897]: I0228 13:36:29.816149 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/5ab588f4-9fad-44d6-a7e2-2e99b19ef285-ovn-rundir\") pod \"ovn-controller-metrics-598bf\" (UID: \"5ab588f4-9fad-44d6-a7e2-2e99b19ef285\") " pod="openstack/ovn-controller-metrics-598bf" Feb 28 13:36:29 crc kubenswrapper[4897]: I0228 13:36:29.816175 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ab588f4-9fad-44d6-a7e2-2e99b19ef285-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-598bf\" (UID: \"5ab588f4-9fad-44d6-a7e2-2e99b19ef285\") " pod="openstack/ovn-controller-metrics-598bf" Feb 28 13:36:29 crc kubenswrapper[4897]: I0228 13:36:29.816246 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47r5v\" (UniqueName: \"kubernetes.io/projected/5ab588f4-9fad-44d6-a7e2-2e99b19ef285-kube-api-access-47r5v\") pod \"ovn-controller-metrics-598bf\" (UID: \"5ab588f4-9fad-44d6-a7e2-2e99b19ef285\") " pod="openstack/ovn-controller-metrics-598bf" Feb 28 13:36:29 crc kubenswrapper[4897]: I0228 13:36:29.822230 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f49bcf4c9-fvv8t"] Feb 28 13:36:29 crc kubenswrapper[4897]: I0228 13:36:29.846657 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-574484c5f-2mwfp"] Feb 28 13:36:29 crc kubenswrapper[4897]: I0228 13:36:29.876593 4897 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-574484c5f-2mwfp" Feb 28 13:36:29 crc kubenswrapper[4897]: I0228 13:36:29.879398 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 28 13:36:29 crc kubenswrapper[4897]: I0228 13:36:29.884796 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-574484c5f-2mwfp"] Feb 28 13:36:29 crc kubenswrapper[4897]: I0228 13:36:29.917582 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47r5v\" (UniqueName: \"kubernetes.io/projected/5ab588f4-9fad-44d6-a7e2-2e99b19ef285-kube-api-access-47r5v\") pod \"ovn-controller-metrics-598bf\" (UID: \"5ab588f4-9fad-44d6-a7e2-2e99b19ef285\") " pod="openstack/ovn-controller-metrics-598bf" Feb 28 13:36:29 crc kubenswrapper[4897]: I0228 13:36:29.917662 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/175662b3-1ff4-45ef-b37b-3c0622eac202-dns-svc\") pod \"dnsmasq-dns-574484c5f-2mwfp\" (UID: \"175662b3-1ff4-45ef-b37b-3c0622eac202\") " pod="openstack/dnsmasq-dns-574484c5f-2mwfp" Feb 28 13:36:29 crc kubenswrapper[4897]: I0228 13:36:29.917697 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ab588f4-9fad-44d6-a7e2-2e99b19ef285-combined-ca-bundle\") pod \"ovn-controller-metrics-598bf\" (UID: \"5ab588f4-9fad-44d6-a7e2-2e99b19ef285\") " pod="openstack/ovn-controller-metrics-598bf" Feb 28 13:36:29 crc kubenswrapper[4897]: I0228 13:36:29.917728 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ab588f4-9fad-44d6-a7e2-2e99b19ef285-config\") pod \"ovn-controller-metrics-598bf\" (UID: \"5ab588f4-9fad-44d6-a7e2-2e99b19ef285\") " pod="openstack/ovn-controller-metrics-598bf" Feb 28 13:36:29 crc 
kubenswrapper[4897]: I0228 13:36:29.917759 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/5ab588f4-9fad-44d6-a7e2-2e99b19ef285-ovs-rundir\") pod \"ovn-controller-metrics-598bf\" (UID: \"5ab588f4-9fad-44d6-a7e2-2e99b19ef285\") " pod="openstack/ovn-controller-metrics-598bf" Feb 28 13:36:29 crc kubenswrapper[4897]: I0228 13:36:29.917780 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/5ab588f4-9fad-44d6-a7e2-2e99b19ef285-ovn-rundir\") pod \"ovn-controller-metrics-598bf\" (UID: \"5ab588f4-9fad-44d6-a7e2-2e99b19ef285\") " pod="openstack/ovn-controller-metrics-598bf" Feb 28 13:36:29 crc kubenswrapper[4897]: I0228 13:36:29.917810 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ab588f4-9fad-44d6-a7e2-2e99b19ef285-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-598bf\" (UID: \"5ab588f4-9fad-44d6-a7e2-2e99b19ef285\") " pod="openstack/ovn-controller-metrics-598bf" Feb 28 13:36:29 crc kubenswrapper[4897]: I0228 13:36:29.917833 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/175662b3-1ff4-45ef-b37b-3c0622eac202-ovsdbserver-sb\") pod \"dnsmasq-dns-574484c5f-2mwfp\" (UID: \"175662b3-1ff4-45ef-b37b-3c0622eac202\") " pod="openstack/dnsmasq-dns-574484c5f-2mwfp" Feb 28 13:36:29 crc kubenswrapper[4897]: I0228 13:36:29.917869 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/175662b3-1ff4-45ef-b37b-3c0622eac202-config\") pod \"dnsmasq-dns-574484c5f-2mwfp\" (UID: \"175662b3-1ff4-45ef-b37b-3c0622eac202\") " pod="openstack/dnsmasq-dns-574484c5f-2mwfp" Feb 28 13:36:29 crc kubenswrapper[4897]: I0228 
13:36:29.917896 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnt5b\" (UniqueName: \"kubernetes.io/projected/175662b3-1ff4-45ef-b37b-3c0622eac202-kube-api-access-xnt5b\") pod \"dnsmasq-dns-574484c5f-2mwfp\" (UID: \"175662b3-1ff4-45ef-b37b-3c0622eac202\") " pod="openstack/dnsmasq-dns-574484c5f-2mwfp" Feb 28 13:36:29 crc kubenswrapper[4897]: I0228 13:36:29.924256 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ab588f4-9fad-44d6-a7e2-2e99b19ef285-combined-ca-bundle\") pod \"ovn-controller-metrics-598bf\" (UID: \"5ab588f4-9fad-44d6-a7e2-2e99b19ef285\") " pod="openstack/ovn-controller-metrics-598bf" Feb 28 13:36:29 crc kubenswrapper[4897]: I0228 13:36:29.924794 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ab588f4-9fad-44d6-a7e2-2e99b19ef285-config\") pod \"ovn-controller-metrics-598bf\" (UID: \"5ab588f4-9fad-44d6-a7e2-2e99b19ef285\") " pod="openstack/ovn-controller-metrics-598bf" Feb 28 13:36:29 crc kubenswrapper[4897]: I0228 13:36:29.925074 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/5ab588f4-9fad-44d6-a7e2-2e99b19ef285-ovs-rundir\") pod \"ovn-controller-metrics-598bf\" (UID: \"5ab588f4-9fad-44d6-a7e2-2e99b19ef285\") " pod="openstack/ovn-controller-metrics-598bf" Feb 28 13:36:29 crc kubenswrapper[4897]: I0228 13:36:29.925124 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/5ab588f4-9fad-44d6-a7e2-2e99b19ef285-ovn-rundir\") pod \"ovn-controller-metrics-598bf\" (UID: \"5ab588f4-9fad-44d6-a7e2-2e99b19ef285\") " pod="openstack/ovn-controller-metrics-598bf" Feb 28 13:36:29 crc kubenswrapper[4897]: I0228 13:36:29.928178 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ab588f4-9fad-44d6-a7e2-2e99b19ef285-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-598bf\" (UID: \"5ab588f4-9fad-44d6-a7e2-2e99b19ef285\") " pod="openstack/ovn-controller-metrics-598bf" Feb 28 13:36:29 crc kubenswrapper[4897]: I0228 13:36:29.935762 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47r5v\" (UniqueName: \"kubernetes.io/projected/5ab588f4-9fad-44d6-a7e2-2e99b19ef285-kube-api-access-47r5v\") pod \"ovn-controller-metrics-598bf\" (UID: \"5ab588f4-9fad-44d6-a7e2-2e99b19ef285\") " pod="openstack/ovn-controller-metrics-598bf" Feb 28 13:36:29 crc kubenswrapper[4897]: I0228 13:36:29.953349 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7587d7df99-vz298"] Feb 28 13:36:29 crc kubenswrapper[4897]: I0228 13:36:29.983047 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-599f5467c5-2bj5z"] Feb 28 13:36:29 crc kubenswrapper[4897]: I0228 13:36:29.985917 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-599f5467c5-2bj5z" Feb 28 13:36:29 crc kubenswrapper[4897]: I0228 13:36:29.990571 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 28 13:36:29 crc kubenswrapper[4897]: I0228 13:36:29.991120 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-599f5467c5-2bj5z"] Feb 28 13:36:30 crc kubenswrapper[4897]: I0228 13:36:30.019595 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8fc42472-3941-42bf-bab4-ca05277cb6cf-ovsdbserver-nb\") pod \"dnsmasq-dns-599f5467c5-2bj5z\" (UID: \"8fc42472-3941-42bf-bab4-ca05277cb6cf\") " pod="openstack/dnsmasq-dns-599f5467c5-2bj5z" Feb 28 13:36:30 crc kubenswrapper[4897]: I0228 13:36:30.019655 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8fc42472-3941-42bf-bab4-ca05277cb6cf-ovsdbserver-sb\") pod \"dnsmasq-dns-599f5467c5-2bj5z\" (UID: \"8fc42472-3941-42bf-bab4-ca05277cb6cf\") " pod="openstack/dnsmasq-dns-599f5467c5-2bj5z" Feb 28 13:36:30 crc kubenswrapper[4897]: I0228 13:36:30.019683 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/175662b3-1ff4-45ef-b37b-3c0622eac202-dns-svc\") pod \"dnsmasq-dns-574484c5f-2mwfp\" (UID: \"175662b3-1ff4-45ef-b37b-3c0622eac202\") " pod="openstack/dnsmasq-dns-574484c5f-2mwfp" Feb 28 13:36:30 crc kubenswrapper[4897]: I0228 13:36:30.019739 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/175662b3-1ff4-45ef-b37b-3c0622eac202-ovsdbserver-sb\") pod \"dnsmasq-dns-574484c5f-2mwfp\" (UID: \"175662b3-1ff4-45ef-b37b-3c0622eac202\") " pod="openstack/dnsmasq-dns-574484c5f-2mwfp" Feb 28 13:36:30 crc 
kubenswrapper[4897]: I0228 13:36:30.019766 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h9tv\" (UniqueName: \"kubernetes.io/projected/8fc42472-3941-42bf-bab4-ca05277cb6cf-kube-api-access-7h9tv\") pod \"dnsmasq-dns-599f5467c5-2bj5z\" (UID: \"8fc42472-3941-42bf-bab4-ca05277cb6cf\") " pod="openstack/dnsmasq-dns-599f5467c5-2bj5z" Feb 28 13:36:30 crc kubenswrapper[4897]: I0228 13:36:30.019785 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/175662b3-1ff4-45ef-b37b-3c0622eac202-config\") pod \"dnsmasq-dns-574484c5f-2mwfp\" (UID: \"175662b3-1ff4-45ef-b37b-3c0622eac202\") " pod="openstack/dnsmasq-dns-574484c5f-2mwfp" Feb 28 13:36:30 crc kubenswrapper[4897]: I0228 13:36:30.019808 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnt5b\" (UniqueName: \"kubernetes.io/projected/175662b3-1ff4-45ef-b37b-3c0622eac202-kube-api-access-xnt5b\") pod \"dnsmasq-dns-574484c5f-2mwfp\" (UID: \"175662b3-1ff4-45ef-b37b-3c0622eac202\") " pod="openstack/dnsmasq-dns-574484c5f-2mwfp" Feb 28 13:36:30 crc kubenswrapper[4897]: I0228 13:36:30.019823 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fc42472-3941-42bf-bab4-ca05277cb6cf-dns-svc\") pod \"dnsmasq-dns-599f5467c5-2bj5z\" (UID: \"8fc42472-3941-42bf-bab4-ca05277cb6cf\") " pod="openstack/dnsmasq-dns-599f5467c5-2bj5z" Feb 28 13:36:30 crc kubenswrapper[4897]: I0228 13:36:30.019853 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fc42472-3941-42bf-bab4-ca05277cb6cf-config\") pod \"dnsmasq-dns-599f5467c5-2bj5z\" (UID: \"8fc42472-3941-42bf-bab4-ca05277cb6cf\") " pod="openstack/dnsmasq-dns-599f5467c5-2bj5z" Feb 28 13:36:30 crc 
kubenswrapper[4897]: I0228 13:36:30.020465 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-598bf" Feb 28 13:36:30 crc kubenswrapper[4897]: I0228 13:36:30.020771 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/175662b3-1ff4-45ef-b37b-3c0622eac202-config\") pod \"dnsmasq-dns-574484c5f-2mwfp\" (UID: \"175662b3-1ff4-45ef-b37b-3c0622eac202\") " pod="openstack/dnsmasq-dns-574484c5f-2mwfp" Feb 28 13:36:30 crc kubenswrapper[4897]: I0228 13:36:30.020862 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/175662b3-1ff4-45ef-b37b-3c0622eac202-ovsdbserver-sb\") pod \"dnsmasq-dns-574484c5f-2mwfp\" (UID: \"175662b3-1ff4-45ef-b37b-3c0622eac202\") " pod="openstack/dnsmasq-dns-574484c5f-2mwfp" Feb 28 13:36:30 crc kubenswrapper[4897]: I0228 13:36:30.022062 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/175662b3-1ff4-45ef-b37b-3c0622eac202-dns-svc\") pod \"dnsmasq-dns-574484c5f-2mwfp\" (UID: \"175662b3-1ff4-45ef-b37b-3c0622eac202\") " pod="openstack/dnsmasq-dns-574484c5f-2mwfp" Feb 28 13:36:30 crc kubenswrapper[4897]: I0228 13:36:30.045638 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnt5b\" (UniqueName: \"kubernetes.io/projected/175662b3-1ff4-45ef-b37b-3c0622eac202-kube-api-access-xnt5b\") pod \"dnsmasq-dns-574484c5f-2mwfp\" (UID: \"175662b3-1ff4-45ef-b37b-3c0622eac202\") " pod="openstack/dnsmasq-dns-574484c5f-2mwfp" Feb 28 13:36:30 crc kubenswrapper[4897]: I0228 13:36:30.121439 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8fc42472-3941-42bf-bab4-ca05277cb6cf-ovsdbserver-nb\") pod \"dnsmasq-dns-599f5467c5-2bj5z\" (UID: 
\"8fc42472-3941-42bf-bab4-ca05277cb6cf\") " pod="openstack/dnsmasq-dns-599f5467c5-2bj5z" Feb 28 13:36:30 crc kubenswrapper[4897]: I0228 13:36:30.121496 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8fc42472-3941-42bf-bab4-ca05277cb6cf-ovsdbserver-sb\") pod \"dnsmasq-dns-599f5467c5-2bj5z\" (UID: \"8fc42472-3941-42bf-bab4-ca05277cb6cf\") " pod="openstack/dnsmasq-dns-599f5467c5-2bj5z" Feb 28 13:36:30 crc kubenswrapper[4897]: I0228 13:36:30.121567 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h9tv\" (UniqueName: \"kubernetes.io/projected/8fc42472-3941-42bf-bab4-ca05277cb6cf-kube-api-access-7h9tv\") pod \"dnsmasq-dns-599f5467c5-2bj5z\" (UID: \"8fc42472-3941-42bf-bab4-ca05277cb6cf\") " pod="openstack/dnsmasq-dns-599f5467c5-2bj5z" Feb 28 13:36:30 crc kubenswrapper[4897]: I0228 13:36:30.121596 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fc42472-3941-42bf-bab4-ca05277cb6cf-dns-svc\") pod \"dnsmasq-dns-599f5467c5-2bj5z\" (UID: \"8fc42472-3941-42bf-bab4-ca05277cb6cf\") " pod="openstack/dnsmasq-dns-599f5467c5-2bj5z" Feb 28 13:36:30 crc kubenswrapper[4897]: I0228 13:36:30.121629 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fc42472-3941-42bf-bab4-ca05277cb6cf-config\") pod \"dnsmasq-dns-599f5467c5-2bj5z\" (UID: \"8fc42472-3941-42bf-bab4-ca05277cb6cf\") " pod="openstack/dnsmasq-dns-599f5467c5-2bj5z" Feb 28 13:36:30 crc kubenswrapper[4897]: I0228 13:36:30.122481 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fc42472-3941-42bf-bab4-ca05277cb6cf-config\") pod \"dnsmasq-dns-599f5467c5-2bj5z\" (UID: \"8fc42472-3941-42bf-bab4-ca05277cb6cf\") " pod="openstack/dnsmasq-dns-599f5467c5-2bj5z" Feb 
28 13:36:30 crc kubenswrapper[4897]: I0228 13:36:30.122733 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8fc42472-3941-42bf-bab4-ca05277cb6cf-ovsdbserver-sb\") pod \"dnsmasq-dns-599f5467c5-2bj5z\" (UID: \"8fc42472-3941-42bf-bab4-ca05277cb6cf\") " pod="openstack/dnsmasq-dns-599f5467c5-2bj5z" Feb 28 13:36:30 crc kubenswrapper[4897]: I0228 13:36:30.122954 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fc42472-3941-42bf-bab4-ca05277cb6cf-dns-svc\") pod \"dnsmasq-dns-599f5467c5-2bj5z\" (UID: \"8fc42472-3941-42bf-bab4-ca05277cb6cf\") " pod="openstack/dnsmasq-dns-599f5467c5-2bj5z" Feb 28 13:36:30 crc kubenswrapper[4897]: I0228 13:36:30.123600 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8fc42472-3941-42bf-bab4-ca05277cb6cf-ovsdbserver-nb\") pod \"dnsmasq-dns-599f5467c5-2bj5z\" (UID: \"8fc42472-3941-42bf-bab4-ca05277cb6cf\") " pod="openstack/dnsmasq-dns-599f5467c5-2bj5z" Feb 28 13:36:30 crc kubenswrapper[4897]: I0228 13:36:30.152611 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h9tv\" (UniqueName: \"kubernetes.io/projected/8fc42472-3941-42bf-bab4-ca05277cb6cf-kube-api-access-7h9tv\") pod \"dnsmasq-dns-599f5467c5-2bj5z\" (UID: \"8fc42472-3941-42bf-bab4-ca05277cb6cf\") " pod="openstack/dnsmasq-dns-599f5467c5-2bj5z" Feb 28 13:36:30 crc kubenswrapper[4897]: I0228 13:36:30.203027 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-574484c5f-2mwfp" Feb 28 13:36:30 crc kubenswrapper[4897]: E0228 13:36:30.269183 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 28 13:36:30 crc kubenswrapper[4897]: E0228 13:36:30.269269 4897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 28 13:36:30 crc kubenswrapper[4897]: E0228 13:36:30.269413 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8sf7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(631372ff-9a4e-4110-9ff4-aad528049a06): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 28 13:36:30 crc kubenswrapper[4897]: E0228 13:36:30.270596 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="631372ff-9a4e-4110-9ff4-aad528049a06" Feb 28 13:36:30 crc kubenswrapper[4897]: I0228 13:36:30.341154 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-599f5467c5-2bj5z" Feb 28 13:36:30 crc kubenswrapper[4897]: E0228 13:36:30.342104 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="631372ff-9a4e-4110-9ff4-aad528049a06" Feb 28 13:36:30 crc kubenswrapper[4897]: I0228 13:36:30.905781 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-574484c5f-2mwfp"] Feb 28 13:36:31 crc kubenswrapper[4897]: I0228 13:36:31.028555 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-598bf"] Feb 28 13:36:31 crc kubenswrapper[4897]: I0228 13:36:31.106708 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-599f5467c5-2bj5z"] Feb 28 13:36:31 crc kubenswrapper[4897]: I0228 13:36:31.350360 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f49bcf4c9-fvv8t" event={"ID":"5c2f7b4a-52e1-4c07-8783-1ab96747f5bb","Type":"ContainerStarted","Data":"e8f390af0b4a649883c03acfd827615f71e5e6fe53b576890a5659e2d9d5e194"} Feb 28 13:36:31 crc kubenswrapper[4897]: I0228 13:36:31.350486 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f49bcf4c9-fvv8t" Feb 28 13:36:31 crc kubenswrapper[4897]: I0228 13:36:31.350477 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6f49bcf4c9-fvv8t" podUID="5c2f7b4a-52e1-4c07-8783-1ab96747f5bb" containerName="dnsmasq-dns" containerID="cri-o://e8f390af0b4a649883c03acfd827615f71e5e6fe53b576890a5659e2d9d5e194" gracePeriod=10 Feb 28 13:36:31 crc kubenswrapper[4897]: I0228 13:36:31.355457 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7587d7df99-vz298" 
event={"ID":"bdcbdd69-5241-4875-aceb-401d47d6fad5","Type":"ContainerStarted","Data":"ff36feb6fdf76756041e04f06063922aebfe7c3297269c5897127846c5274cd1"} Feb 28 13:36:31 crc kubenswrapper[4897]: I0228 13:36:31.355600 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7587d7df99-vz298" podUID="bdcbdd69-5241-4875-aceb-401d47d6fad5" containerName="dnsmasq-dns" containerID="cri-o://ff36feb6fdf76756041e04f06063922aebfe7c3297269c5897127846c5274cd1" gracePeriod=10 Feb 28 13:36:31 crc kubenswrapper[4897]: I0228 13:36:31.355678 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7587d7df99-vz298" Feb 28 13:36:31 crc kubenswrapper[4897]: I0228 13:36:31.358982 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"f032f5e9-4992-4586-bd47-0c3da76ecf40","Type":"ContainerStarted","Data":"d162b55ac2af9f56fb207ee117aeb66ba28b4522d892d79334d3347d336cbc7a"} Feb 28 13:36:31 crc kubenswrapper[4897]: I0228 13:36:31.367755 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 28 13:36:31 crc kubenswrapper[4897]: I0228 13:36:31.375512 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f49bcf4c9-fvv8t" podStartSLOduration=36.142524042 podStartE2EDuration="36.37548298s" podCreationTimestamp="2026-02-28 13:35:55 +0000 UTC" firstStartedPulling="2026-02-28 13:36:14.267132747 +0000 UTC m=+1188.509453414" lastFinishedPulling="2026-02-28 13:36:14.500091695 +0000 UTC m=+1188.742412352" observedRunningTime="2026-02-28 13:36:31.367800242 +0000 UTC m=+1205.610120949" watchObservedRunningTime="2026-02-28 13:36:31.37548298 +0000 UTC m=+1205.617803647" Feb 28 13:36:31 crc kubenswrapper[4897]: I0228 13:36:31.403105 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7587d7df99-vz298" podStartSLOduration=36.403088854 
podStartE2EDuration="36.403088854s" podCreationTimestamp="2026-02-28 13:35:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:36:31.401910651 +0000 UTC m=+1205.644231308" watchObservedRunningTime="2026-02-28 13:36:31.403088854 +0000 UTC m=+1205.645409511" Feb 28 13:36:31 crc kubenswrapper[4897]: I0228 13:36:31.408423 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=17.477309956 podStartE2EDuration="31.408413466s" podCreationTimestamp="2026-02-28 13:36:00 +0000 UTC" firstStartedPulling="2026-02-28 13:36:15.045344913 +0000 UTC m=+1189.287665570" lastFinishedPulling="2026-02-28 13:36:28.976448413 +0000 UTC m=+1203.218769080" observedRunningTime="2026-02-28 13:36:31.38568282 +0000 UTC m=+1205.628003477" watchObservedRunningTime="2026-02-28 13:36:31.408413466 +0000 UTC m=+1205.650734123" Feb 28 13:36:32 crc kubenswrapper[4897]: W0228 13:36:32.074942 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod175662b3_1ff4_45ef_b37b_3c0622eac202.slice/crio-1aa3913f60ceea6d6c07e240e21a322e6d27c4f4a946286726219fe7f660eb66 WatchSource:0}: Error finding container 1aa3913f60ceea6d6c07e240e21a322e6d27c4f4a946286726219fe7f660eb66: Status 404 returned error can't find the container with id 1aa3913f60ceea6d6c07e240e21a322e6d27c4f4a946286726219fe7f660eb66 Feb 28 13:36:32 crc kubenswrapper[4897]: I0228 13:36:32.395845 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-599f5467c5-2bj5z" event={"ID":"8fc42472-3941-42bf-bab4-ca05277cb6cf","Type":"ContainerStarted","Data":"1ec4005d1d6dbca98542431c26bf2cf259e6f60ec74b650833a013d04066f2c7"} Feb 28 13:36:32 crc kubenswrapper[4897]: I0228 13:36:32.397932 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"6bf46d42-2d7e-410d-8a74-1ce12bb280b2","Type":"ContainerStarted","Data":"4e25f72a41edbd1b43773b05d08492b582421f5b717fe5a90ecfa8d2cb7b0d38"} Feb 28 13:36:32 crc kubenswrapper[4897]: I0228 13:36:32.401217 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-574484c5f-2mwfp" event={"ID":"175662b3-1ff4-45ef-b37b-3c0622eac202","Type":"ContainerStarted","Data":"1aa3913f60ceea6d6c07e240e21a322e6d27c4f4a946286726219fe7f660eb66"} Feb 28 13:36:32 crc kubenswrapper[4897]: I0228 13:36:32.403904 4897 generic.go:334] "Generic (PLEG): container finished" podID="bdcbdd69-5241-4875-aceb-401d47d6fad5" containerID="ff36feb6fdf76756041e04f06063922aebfe7c3297269c5897127846c5274cd1" exitCode=0 Feb 28 13:36:32 crc kubenswrapper[4897]: I0228 13:36:32.403964 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7587d7df99-vz298" event={"ID":"bdcbdd69-5241-4875-aceb-401d47d6fad5","Type":"ContainerDied","Data":"ff36feb6fdf76756041e04f06063922aebfe7c3297269c5897127846c5274cd1"} Feb 28 13:36:32 crc kubenswrapper[4897]: I0228 13:36:32.405636 4897 generic.go:334] "Generic (PLEG): container finished" podID="5c2f7b4a-52e1-4c07-8783-1ab96747f5bb" containerID="e8f390af0b4a649883c03acfd827615f71e5e6fe53b576890a5659e2d9d5e194" exitCode=0 Feb 28 13:36:32 crc kubenswrapper[4897]: I0228 13:36:32.405687 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f49bcf4c9-fvv8t" event={"ID":"5c2f7b4a-52e1-4c07-8783-1ab96747f5bb","Type":"ContainerDied","Data":"e8f390af0b4a649883c03acfd827615f71e5e6fe53b576890a5659e2d9d5e194"} Feb 28 13:36:32 crc kubenswrapper[4897]: I0228 13:36:32.407471 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5a792d6c-3a28-4775-87bf-b099ea550a00","Type":"ContainerStarted","Data":"fe8050bd404884f66eddbd6adbe7f7bd94e5332f6f5879701dcd60a3e7709119"} Feb 28 13:36:32 crc kubenswrapper[4897]: I0228 13:36:32.411978 4897 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"48885530-3df1-42cf-9c7f-2f86a21026a9","Type":"ContainerStarted","Data":"bbc6fa8ee8d3ac80c3c92093f967d5892e5318137535da47510c94b737137581"} Feb 28 13:36:32 crc kubenswrapper[4897]: W0228 13:36:32.752738 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ab588f4_9fad_44d6_a7e2_2e99b19ef285.slice/crio-f53a0794229910f48d990aed685e4c2e5e2ae9e22ae32ef61cb1260786f47fe0 WatchSource:0}: Error finding container f53a0794229910f48d990aed685e4c2e5e2ae9e22ae32ef61cb1260786f47fe0: Status 404 returned error can't find the container with id f53a0794229910f48d990aed685e4c2e5e2ae9e22ae32ef61cb1260786f47fe0 Feb 28 13:36:33 crc kubenswrapper[4897]: E0228 13:36:33.035096 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 28 13:36:33 crc kubenswrapper[4897]: E0228 13:36:33.035515 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 13:36:33 crc kubenswrapper[4897]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 28 13:36:33 crc kubenswrapper[4897]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rqr7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29538096-ws9qt_openshift-infra(6e94c0b2-21a6-496c-8188-dfcaf0d66b2b): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 28 13:36:33 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 13:36:33 crc kubenswrapper[4897]: E0228 13:36:33.037032 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29538096-ws9qt" podUID="6e94c0b2-21a6-496c-8188-dfcaf0d66b2b" Feb 28 13:36:33 crc kubenswrapper[4897]: I0228 13:36:33.160245 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f49bcf4c9-fvv8t"
Feb 28 13:36:33 crc kubenswrapper[4897]: I0228 13:36:33.166612 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7587d7df99-vz298"
Feb 28 13:36:33 crc kubenswrapper[4897]: I0228 13:36:33.325301 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdcbdd69-5241-4875-aceb-401d47d6fad5-config\") pod \"bdcbdd69-5241-4875-aceb-401d47d6fad5\" (UID: \"bdcbdd69-5241-4875-aceb-401d47d6fad5\") "
Feb 28 13:36:33 crc kubenswrapper[4897]: I0228 13:36:33.325658 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rw5wn\" (UniqueName: \"kubernetes.io/projected/bdcbdd69-5241-4875-aceb-401d47d6fad5-kube-api-access-rw5wn\") pod \"bdcbdd69-5241-4875-aceb-401d47d6fad5\" (UID: \"bdcbdd69-5241-4875-aceb-401d47d6fad5\") "
Feb 28 13:36:33 crc kubenswrapper[4897]: I0228 13:36:33.325725 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c2f7b4a-52e1-4c07-8783-1ab96747f5bb-config\") pod \"5c2f7b4a-52e1-4c07-8783-1ab96747f5bb\" (UID: \"5c2f7b4a-52e1-4c07-8783-1ab96747f5bb\") "
Feb 28 13:36:33 crc kubenswrapper[4897]: I0228 13:36:33.326000 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n78lj\" (UniqueName: \"kubernetes.io/projected/5c2f7b4a-52e1-4c07-8783-1ab96747f5bb-kube-api-access-n78lj\") pod \"5c2f7b4a-52e1-4c07-8783-1ab96747f5bb\" (UID: \"5c2f7b4a-52e1-4c07-8783-1ab96747f5bb\") "
Feb 28 13:36:33 crc kubenswrapper[4897]: I0228 13:36:33.326099 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdcbdd69-5241-4875-aceb-401d47d6fad5-dns-svc\") pod \"bdcbdd69-5241-4875-aceb-401d47d6fad5\" (UID: \"bdcbdd69-5241-4875-aceb-401d47d6fad5\") "
Feb 28 13:36:33 crc kubenswrapper[4897]: I0228 13:36:33.326153 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c2f7b4a-52e1-4c07-8783-1ab96747f5bb-dns-svc\") pod \"5c2f7b4a-52e1-4c07-8783-1ab96747f5bb\" (UID: \"5c2f7b4a-52e1-4c07-8783-1ab96747f5bb\") "
Feb 28 13:36:33 crc kubenswrapper[4897]: I0228 13:36:33.347694 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdcbdd69-5241-4875-aceb-401d47d6fad5-kube-api-access-rw5wn" (OuterVolumeSpecName: "kube-api-access-rw5wn") pod "bdcbdd69-5241-4875-aceb-401d47d6fad5" (UID: "bdcbdd69-5241-4875-aceb-401d47d6fad5"). InnerVolumeSpecName "kube-api-access-rw5wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 13:36:33 crc kubenswrapper[4897]: I0228 13:36:33.348444 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c2f7b4a-52e1-4c07-8783-1ab96747f5bb-kube-api-access-n78lj" (OuterVolumeSpecName: "kube-api-access-n78lj") pod "5c2f7b4a-52e1-4c07-8783-1ab96747f5bb" (UID: "5c2f7b4a-52e1-4c07-8783-1ab96747f5bb"). InnerVolumeSpecName "kube-api-access-n78lj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 13:36:33 crc kubenswrapper[4897]: I0228 13:36:33.373138 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 28 13:36:33 crc kubenswrapper[4897]: I0228 13:36:33.373221 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 28 13:36:33 crc kubenswrapper[4897]: I0228 13:36:33.373288 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brq22"
Feb 28 13:36:33 crc kubenswrapper[4897]: I0228 13:36:33.374253 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9c1430618bfc0c64d7fc6435ca448e45cbed910b3af28fa0f1da0886835a239f"} pod="openshift-machine-config-operator/machine-config-daemon-brq22" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 28 13:36:33 crc kubenswrapper[4897]: I0228 13:36:33.374362 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" containerID="cri-o://9c1430618bfc0c64d7fc6435ca448e45cbed910b3af28fa0f1da0886835a239f" gracePeriod=600
Feb 28 13:36:33 crc kubenswrapper[4897]: I0228 13:36:33.421200 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f49bcf4c9-fvv8t" event={"ID":"5c2f7b4a-52e1-4c07-8783-1ab96747f5bb","Type":"ContainerDied","Data":"19d8c6d41d3f2254a291e4cd205eef1ed559428f54afc69d01cdd69f7d4a2abd"}
Feb 28 13:36:33 crc kubenswrapper[4897]: I0228 13:36:33.421220 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f49bcf4c9-fvv8t"
Feb 28 13:36:33 crc kubenswrapper[4897]: I0228 13:36:33.421258 4897 scope.go:117] "RemoveContainer" containerID="e8f390af0b4a649883c03acfd827615f71e5e6fe53b576890a5659e2d9d5e194"
Feb 28 13:36:33 crc kubenswrapper[4897]: I0228 13:36:33.427612 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rw5wn\" (UniqueName: \"kubernetes.io/projected/bdcbdd69-5241-4875-aceb-401d47d6fad5-kube-api-access-rw5wn\") on node \"crc\" DevicePath \"\""
Feb 28 13:36:33 crc kubenswrapper[4897]: I0228 13:36:33.427637 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n78lj\" (UniqueName: \"kubernetes.io/projected/5c2f7b4a-52e1-4c07-8783-1ab96747f5bb-kube-api-access-n78lj\") on node \"crc\" DevicePath \"\""
Feb 28 13:36:33 crc kubenswrapper[4897]: I0228 13:36:33.430126 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-598bf" event={"ID":"5ab588f4-9fad-44d6-a7e2-2e99b19ef285","Type":"ContainerStarted","Data":"f53a0794229910f48d990aed685e4c2e5e2ae9e22ae32ef61cb1260786f47fe0"}
Feb 28 13:36:33 crc kubenswrapper[4897]: I0228 13:36:33.432409 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7587d7df99-vz298" event={"ID":"bdcbdd69-5241-4875-aceb-401d47d6fad5","Type":"ContainerDied","Data":"7af1c888e07e8a343df7bba4ee844368b971db787858587a9a0101040bf76234"}
Feb 28 13:36:33 crc kubenswrapper[4897]: I0228 13:36:33.432630 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7587d7df99-vz298"
Feb 28 13:36:33 crc kubenswrapper[4897]: I0228 13:36:33.441688 4897 scope.go:117] "RemoveContainer" containerID="aecd186cf544efd6e74145e8644e0899cd90e64c5f7a1814c3397f40e6bf3157"
Feb 28 13:36:33 crc kubenswrapper[4897]: E0228 13:36:33.489665 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-sb-0" podUID="48d78132-b30d-4c29-8137-7af1597f8cc6"
Feb 28 13:36:33 crc kubenswrapper[4897]: I0228 13:36:33.513440 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c2f7b4a-52e1-4c07-8783-1ab96747f5bb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5c2f7b4a-52e1-4c07-8783-1ab96747f5bb" (UID: "5c2f7b4a-52e1-4c07-8783-1ab96747f5bb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 13:36:33 crc kubenswrapper[4897]: I0228 13:36:33.518618 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdcbdd69-5241-4875-aceb-401d47d6fad5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bdcbdd69-5241-4875-aceb-401d47d6fad5" (UID: "bdcbdd69-5241-4875-aceb-401d47d6fad5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 13:36:33 crc kubenswrapper[4897]: I0228 13:36:33.518759 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c2f7b4a-52e1-4c07-8783-1ab96747f5bb-config" (OuterVolumeSpecName: "config") pod "5c2f7b4a-52e1-4c07-8783-1ab96747f5bb" (UID: "5c2f7b4a-52e1-4c07-8783-1ab96747f5bb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 13:36:33 crc kubenswrapper[4897]: I0228 13:36:33.520652 4897 scope.go:117] "RemoveContainer" containerID="ff36feb6fdf76756041e04f06063922aebfe7c3297269c5897127846c5274cd1"
Feb 28 13:36:33 crc kubenswrapper[4897]: I0228 13:36:33.526719 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdcbdd69-5241-4875-aceb-401d47d6fad5-config" (OuterVolumeSpecName: "config") pod "bdcbdd69-5241-4875-aceb-401d47d6fad5" (UID: "bdcbdd69-5241-4875-aceb-401d47d6fad5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 13:36:33 crc kubenswrapper[4897]: I0228 13:36:33.532798 4897 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c2f7b4a-52e1-4c07-8783-1ab96747f5bb-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 28 13:36:33 crc kubenswrapper[4897]: I0228 13:36:33.532912 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdcbdd69-5241-4875-aceb-401d47d6fad5-config\") on node \"crc\" DevicePath \"\""
Feb 28 13:36:33 crc kubenswrapper[4897]: I0228 13:36:33.532975 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c2f7b4a-52e1-4c07-8783-1ab96747f5bb-config\") on node \"crc\" DevicePath \"\""
Feb 28 13:36:33 crc kubenswrapper[4897]: I0228 13:36:33.533515 4897 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdcbdd69-5241-4875-aceb-401d47d6fad5-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 28 13:36:33 crc kubenswrapper[4897]: E0228 13:36:33.537569 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-nb-0" podUID="03ffdd06-e63d-4a43-96f0-92e2d0e3a89d"
Feb 28 13:36:33 crc kubenswrapper[4897]: I0228 13:36:33.575523 4897 scope.go:117] "RemoveContainer" containerID="4ef5f28aaccb2427f711815739bf78fd0d7c9a4d57c0c6b30bfa0a772b33b8c4"
Feb 28 13:36:33 crc kubenswrapper[4897]: I0228 13:36:33.753271 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f49bcf4c9-fvv8t"]
Feb 28 13:36:33 crc kubenswrapper[4897]: I0228 13:36:33.759467 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f49bcf4c9-fvv8t"]
Feb 28 13:36:33 crc kubenswrapper[4897]: I0228 13:36:33.774748 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7587d7df99-vz298"]
Feb 28 13:36:33 crc kubenswrapper[4897]: I0228 13:36:33.780668 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7587d7df99-vz298"]
Feb 28 13:36:34 crc kubenswrapper[4897]: I0228 13:36:34.440795 4897 generic.go:334] "Generic (PLEG): container finished" podID="8fc42472-3941-42bf-bab4-ca05277cb6cf" containerID="e9876136638b45115c2b65012255b207b4c851ff5fc56e4393637fa75ed81366" exitCode=0
Feb 28 13:36:34 crc kubenswrapper[4897]: I0228 13:36:34.440858 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-599f5467c5-2bj5z" event={"ID":"8fc42472-3941-42bf-bab4-ca05277cb6cf","Type":"ContainerDied","Data":"e9876136638b45115c2b65012255b207b4c851ff5fc56e4393637fa75ed81366"}
Feb 28 13:36:34 crc kubenswrapper[4897]: I0228 13:36:34.443455 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-598bf" event={"ID":"5ab588f4-9fad-44d6-a7e2-2e99b19ef285","Type":"ContainerStarted","Data":"2c2096cfdb9d42245b193ef25c0bb11c3dfa2b41103e79451173ec8af024bc61"}
Feb 28 13:36:34 crc kubenswrapper[4897]: I0228 13:36:34.444947 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"03ffdd06-e63d-4a43-96f0-92e2d0e3a89d","Type":"ContainerStarted","Data":"e11bc0e81dfe5996695e68b968bd8db71fb8fdeb56e2fb33686d522e524b90ee"}
Feb 28 13:36:34 crc kubenswrapper[4897]: E0228 13:36:34.446049 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.80:5001/podified-master-centos10/openstack-ovn-nb-db-server:watcher_latest\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="03ffdd06-e63d-4a43-96f0-92e2d0e3a89d"
Feb 28 13:36:34 crc kubenswrapper[4897]: I0228 13:36:34.447738 4897 generic.go:334] "Generic (PLEG): container finished" podID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerID="9c1430618bfc0c64d7fc6435ca448e45cbed910b3af28fa0f1da0886835a239f" exitCode=0
Feb 28 13:36:34 crc kubenswrapper[4897]: I0228 13:36:34.447798 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerDied","Data":"9c1430618bfc0c64d7fc6435ca448e45cbed910b3af28fa0f1da0886835a239f"}
Feb 28 13:36:34 crc kubenswrapper[4897]: I0228 13:36:34.447834 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerStarted","Data":"2251fe7bbe6b22484b56b41016e482aae198972b32b2a8de419f213131379efa"}
Feb 28 13:36:34 crc kubenswrapper[4897]: I0228 13:36:34.447852 4897 scope.go:117] "RemoveContainer" containerID="ba683f1199708260a29f4bdafd88105c75a046d1fe9faa93c033d9e42ddff022"
Feb 28 13:36:34 crc kubenswrapper[4897]: I0228 13:36:34.450026 4897 generic.go:334] "Generic (PLEG): container finished" podID="175662b3-1ff4-45ef-b37b-3c0622eac202" containerID="14ff16a26076f1d3c3c67745e18e169fa8525544c41ef283c58a076225486a2f" exitCode=0
Feb 28 13:36:34 crc kubenswrapper[4897]: I0228 13:36:34.450078 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-574484c5f-2mwfp" event={"ID":"175662b3-1ff4-45ef-b37b-3c0622eac202","Type":"ContainerDied","Data":"14ff16a26076f1d3c3c67745e18e169fa8525544c41ef283c58a076225486a2f"}
Feb 28 13:36:34 crc kubenswrapper[4897]: E0228 13:36:34.461536 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.80:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="48d78132-b30d-4c29-8137-7af1597f8cc6"
Feb 28 13:36:34 crc kubenswrapper[4897]: I0228 13:36:34.529576 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c2f7b4a-52e1-4c07-8783-1ab96747f5bb" path="/var/lib/kubelet/pods/5c2f7b4a-52e1-4c07-8783-1ab96747f5bb/volumes"
Feb 28 13:36:34 crc kubenswrapper[4897]: I0228 13:36:34.530739 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdcbdd69-5241-4875-aceb-401d47d6fad5" path="/var/lib/kubelet/pods/bdcbdd69-5241-4875-aceb-401d47d6fad5/volumes"
Feb 28 13:36:34 crc kubenswrapper[4897]: I0228 13:36:34.531666 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"48d78132-b30d-4c29-8137-7af1597f8cc6","Type":"ContainerStarted","Data":"f4ea7ab5edc9018968cbc2ef9e03d937d0763544a0f7a4057ab6246d99d46100"}
Feb 28 13:36:34 crc kubenswrapper[4897]: I0228 13:36:34.543917 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-598bf" podStartSLOduration=5.081524568 podStartE2EDuration="5.543862792s" podCreationTimestamp="2026-02-28 13:36:29 +0000 UTC" firstStartedPulling="2026-02-28 13:36:32.762231873 +0000 UTC m=+1207.004552560" lastFinishedPulling="2026-02-28 13:36:33.224570117 +0000 UTC m=+1207.466890784" observedRunningTime="2026-02-28 13:36:34.493385438 +0000 UTC m=+1208.735706125" watchObservedRunningTime="2026-02-28 13:36:34.543862792 +0000 UTC m=+1208.786183449"
Feb 28 13:36:35 crc kubenswrapper[4897]: I0228 13:36:35.469763 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-574484c5f-2mwfp" event={"ID":"175662b3-1ff4-45ef-b37b-3c0622eac202","Type":"ContainerStarted","Data":"10c5196cf13a45eedcf07361514b1f99ed2835e057d648512c7b22b49d5c0690"}
Feb 28 13:36:35 crc kubenswrapper[4897]: I0228 13:36:35.470400 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-574484c5f-2mwfp"
Feb 28 13:36:35 crc kubenswrapper[4897]: I0228 13:36:35.472116 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-599f5467c5-2bj5z" event={"ID":"8fc42472-3941-42bf-bab4-ca05277cb6cf","Type":"ContainerStarted","Data":"9dee223eefb6b4d79a339b608d6a8789533eeff1e8627be46d2373a28f22b7b8"}
Feb 28 13:36:35 crc kubenswrapper[4897]: I0228 13:36:35.472569 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-599f5467c5-2bj5z"
Feb 28 13:36:35 crc kubenswrapper[4897]: E0228 13:36:35.474031 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.80:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="48d78132-b30d-4c29-8137-7af1597f8cc6"
Feb 28 13:36:35 crc kubenswrapper[4897]: E0228 13:36:35.474413 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.80:5001/podified-master-centos10/openstack-ovn-nb-db-server:watcher_latest\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="03ffdd06-e63d-4a43-96f0-92e2d0e3a89d"
Feb 28 13:36:35 crc kubenswrapper[4897]: I0228 13:36:35.500689 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-574484c5f-2mwfp" podStartSLOduration=6.500671812 podStartE2EDuration="6.500671812s" podCreationTimestamp="2026-02-28 13:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:36:35.492240072 +0000 UTC m=+1209.734560729" watchObservedRunningTime="2026-02-28 13:36:35.500671812 +0000 UTC m=+1209.742992469"
Feb 28 13:36:35 crc kubenswrapper[4897]: I0228 13:36:35.516162 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-599f5467c5-2bj5z" podStartSLOduration=6.516141481 podStartE2EDuration="6.516141481s" podCreationTimestamp="2026-02-28 13:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:36:35.514626428 +0000 UTC m=+1209.756947125" watchObservedRunningTime="2026-02-28 13:36:35.516141481 +0000 UTC m=+1209.758462138"
Feb 28 13:36:35 crc kubenswrapper[4897]: I0228 13:36:35.658568 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Feb 28 13:36:40 crc kubenswrapper[4897]: I0228 13:36:40.205601 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-574484c5f-2mwfp"
Feb 28 13:36:40 crc kubenswrapper[4897]: I0228 13:36:40.342464 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-599f5467c5-2bj5z"
Feb 28 13:36:40 crc kubenswrapper[4897]: I0228 13:36:40.405425 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-574484c5f-2mwfp"]
Feb 28 13:36:40 crc kubenswrapper[4897]: I0228 13:36:40.512773 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-574484c5f-2mwfp" podUID="175662b3-1ff4-45ef-b37b-3c0622eac202" containerName="dnsmasq-dns" containerID="cri-o://10c5196cf13a45eedcf07361514b1f99ed2835e057d648512c7b22b49d5c0690" gracePeriod=10
Feb 28 13:36:40 crc kubenswrapper[4897]: I0228 13:36:40.926836 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-574484c5f-2mwfp"
Feb 28 13:36:41 crc kubenswrapper[4897]: I0228 13:36:41.064945 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/175662b3-1ff4-45ef-b37b-3c0622eac202-dns-svc\") pod \"175662b3-1ff4-45ef-b37b-3c0622eac202\" (UID: \"175662b3-1ff4-45ef-b37b-3c0622eac202\") "
Feb 28 13:36:41 crc kubenswrapper[4897]: I0228 13:36:41.065026 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnt5b\" (UniqueName: \"kubernetes.io/projected/175662b3-1ff4-45ef-b37b-3c0622eac202-kube-api-access-xnt5b\") pod \"175662b3-1ff4-45ef-b37b-3c0622eac202\" (UID: \"175662b3-1ff4-45ef-b37b-3c0622eac202\") "
Feb 28 13:36:41 crc kubenswrapper[4897]: I0228 13:36:41.065111 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/175662b3-1ff4-45ef-b37b-3c0622eac202-ovsdbserver-sb\") pod \"175662b3-1ff4-45ef-b37b-3c0622eac202\" (UID: \"175662b3-1ff4-45ef-b37b-3c0622eac202\") "
Feb 28 13:36:41 crc kubenswrapper[4897]: I0228 13:36:41.065141 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/175662b3-1ff4-45ef-b37b-3c0622eac202-config\") pod \"175662b3-1ff4-45ef-b37b-3c0622eac202\" (UID: \"175662b3-1ff4-45ef-b37b-3c0622eac202\") "
Feb 28 13:36:41 crc kubenswrapper[4897]: I0228 13:36:41.070500 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/175662b3-1ff4-45ef-b37b-3c0622eac202-kube-api-access-xnt5b" (OuterVolumeSpecName: "kube-api-access-xnt5b") pod "175662b3-1ff4-45ef-b37b-3c0622eac202" (UID: "175662b3-1ff4-45ef-b37b-3c0622eac202"). InnerVolumeSpecName "kube-api-access-xnt5b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 13:36:41 crc kubenswrapper[4897]: I0228 13:36:41.100497 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/175662b3-1ff4-45ef-b37b-3c0622eac202-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "175662b3-1ff4-45ef-b37b-3c0622eac202" (UID: "175662b3-1ff4-45ef-b37b-3c0622eac202"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 13:36:41 crc kubenswrapper[4897]: I0228 13:36:41.103842 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/175662b3-1ff4-45ef-b37b-3c0622eac202-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "175662b3-1ff4-45ef-b37b-3c0622eac202" (UID: "175662b3-1ff4-45ef-b37b-3c0622eac202"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 13:36:41 crc kubenswrapper[4897]: I0228 13:36:41.110061 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/175662b3-1ff4-45ef-b37b-3c0622eac202-config" (OuterVolumeSpecName: "config") pod "175662b3-1ff4-45ef-b37b-3c0622eac202" (UID: "175662b3-1ff4-45ef-b37b-3c0622eac202"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 13:36:41 crc kubenswrapper[4897]: I0228 13:36:41.167097 4897 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/175662b3-1ff4-45ef-b37b-3c0622eac202-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 28 13:36:41 crc kubenswrapper[4897]: I0228 13:36:41.167140 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnt5b\" (UniqueName: \"kubernetes.io/projected/175662b3-1ff4-45ef-b37b-3c0622eac202-kube-api-access-xnt5b\") on node \"crc\" DevicePath \"\""
Feb 28 13:36:41 crc kubenswrapper[4897]: I0228 13:36:41.167154 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/175662b3-1ff4-45ef-b37b-3c0622eac202-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 28 13:36:41 crc kubenswrapper[4897]: I0228 13:36:41.167165 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/175662b3-1ff4-45ef-b37b-3c0622eac202-config\") on node \"crc\" DevicePath \"\""
Feb 28 13:36:41 crc kubenswrapper[4897]: I0228 13:36:41.543505 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jsdwb" event={"ID":"cd2fa5a5-caab-4d3d-8324-f6107d50f59f","Type":"ContainerStarted","Data":"9d9d16748a818586f8eec985b7dad1b2966e2d4fda29efc09d7b83b54c9740bd"}
Feb 28 13:36:41 crc kubenswrapper[4897]: I0228 13:36:41.543689 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-jsdwb"
Feb 28 13:36:41 crc kubenswrapper[4897]: I0228 13:36:41.548523 4897 generic.go:334] "Generic (PLEG): container finished" podID="175662b3-1ff4-45ef-b37b-3c0622eac202" containerID="10c5196cf13a45eedcf07361514b1f99ed2835e057d648512c7b22b49d5c0690" exitCode=0
Feb 28 13:36:41 crc kubenswrapper[4897]: I0228 13:36:41.548566 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-574484c5f-2mwfp" event={"ID":"175662b3-1ff4-45ef-b37b-3c0622eac202","Type":"ContainerDied","Data":"10c5196cf13a45eedcf07361514b1f99ed2835e057d648512c7b22b49d5c0690"}
Feb 28 13:36:41 crc kubenswrapper[4897]: I0228 13:36:41.548598 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-574484c5f-2mwfp" event={"ID":"175662b3-1ff4-45ef-b37b-3c0622eac202","Type":"ContainerDied","Data":"1aa3913f60ceea6d6c07e240e21a322e6d27c4f4a946286726219fe7f660eb66"}
Feb 28 13:36:41 crc kubenswrapper[4897]: I0228 13:36:41.548618 4897 scope.go:117] "RemoveContainer" containerID="10c5196cf13a45eedcf07361514b1f99ed2835e057d648512c7b22b49d5c0690"
Feb 28 13:36:41 crc kubenswrapper[4897]: I0228 13:36:41.548656 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-574484c5f-2mwfp"
Feb 28 13:36:41 crc kubenswrapper[4897]: I0228 13:36:41.567262 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-jsdwb" podStartSLOduration=9.663501022 podStartE2EDuration="35.56724456s" podCreationTimestamp="2026-02-28 13:36:06 +0000 UTC" firstStartedPulling="2026-02-28 13:36:14.619700632 +0000 UTC m=+1188.862021289" lastFinishedPulling="2026-02-28 13:36:40.52344416 +0000 UTC m=+1214.765764827" observedRunningTime="2026-02-28 13:36:41.562762842 +0000 UTC m=+1215.805083519" watchObservedRunningTime="2026-02-28 13:36:41.56724456 +0000 UTC m=+1215.809565227"
Feb 28 13:36:41 crc kubenswrapper[4897]: I0228 13:36:41.603881 4897 scope.go:117] "RemoveContainer" containerID="14ff16a26076f1d3c3c67745e18e169fa8525544c41ef283c58a076225486a2f"
Feb 28 13:36:41 crc kubenswrapper[4897]: I0228 13:36:41.611455 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-574484c5f-2mwfp"]
Feb 28 13:36:41 crc kubenswrapper[4897]: I0228 13:36:41.620181 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-574484c5f-2mwfp"]
Feb 28 13:36:41 crc kubenswrapper[4897]: I0228 13:36:41.641711 4897 scope.go:117] "RemoveContainer" containerID="10c5196cf13a45eedcf07361514b1f99ed2835e057d648512c7b22b49d5c0690"
Feb 28 13:36:41 crc kubenswrapper[4897]: E0228 13:36:41.642534 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10c5196cf13a45eedcf07361514b1f99ed2835e057d648512c7b22b49d5c0690\": container with ID starting with 10c5196cf13a45eedcf07361514b1f99ed2835e057d648512c7b22b49d5c0690 not found: ID does not exist" containerID="10c5196cf13a45eedcf07361514b1f99ed2835e057d648512c7b22b49d5c0690"
Feb 28 13:36:41 crc kubenswrapper[4897]: I0228 13:36:41.642612 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10c5196cf13a45eedcf07361514b1f99ed2835e057d648512c7b22b49d5c0690"} err="failed to get container status \"10c5196cf13a45eedcf07361514b1f99ed2835e057d648512c7b22b49d5c0690\": rpc error: code = NotFound desc = could not find container \"10c5196cf13a45eedcf07361514b1f99ed2835e057d648512c7b22b49d5c0690\": container with ID starting with 10c5196cf13a45eedcf07361514b1f99ed2835e057d648512c7b22b49d5c0690 not found: ID does not exist"
Feb 28 13:36:41 crc kubenswrapper[4897]: I0228 13:36:41.642673 4897 scope.go:117] "RemoveContainer" containerID="14ff16a26076f1d3c3c67745e18e169fa8525544c41ef283c58a076225486a2f"
Feb 28 13:36:41 crc kubenswrapper[4897]: E0228 13:36:41.643055 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14ff16a26076f1d3c3c67745e18e169fa8525544c41ef283c58a076225486a2f\": container with ID starting with 14ff16a26076f1d3c3c67745e18e169fa8525544c41ef283c58a076225486a2f not found: ID does not exist" containerID="14ff16a26076f1d3c3c67745e18e169fa8525544c41ef283c58a076225486a2f"
Feb 28 13:36:41 crc kubenswrapper[4897]: I0228 13:36:41.643091 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14ff16a26076f1d3c3c67745e18e169fa8525544c41ef283c58a076225486a2f"} err="failed to get container status \"14ff16a26076f1d3c3c67745e18e169fa8525544c41ef283c58a076225486a2f\": rpc error: code = NotFound desc = could not find container \"14ff16a26076f1d3c3c67745e18e169fa8525544c41ef283c58a076225486a2f\": container with ID starting with 14ff16a26076f1d3c3c67745e18e169fa8525544c41ef283c58a076225486a2f not found: ID does not exist"
Feb 28 13:36:42 crc kubenswrapper[4897]: I0228 13:36:42.476357 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="175662b3-1ff4-45ef-b37b-3c0622eac202" path="/var/lib/kubelet/pods/175662b3-1ff4-45ef-b37b-3c0622eac202/volumes"
Feb 28 13:36:42 crc kubenswrapper[4897]: I0228 13:36:42.563773 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"631372ff-9a4e-4110-9ff4-aad528049a06","Type":"ContainerStarted","Data":"a482e8816071c9e7694e38f21e32d4856578ffd8df77fed9c36199f5adde8d1b"}
Feb 28 13:36:42 crc kubenswrapper[4897]: I0228 13:36:42.564468 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Feb 28 13:36:42 crc kubenswrapper[4897]: I0228 13:36:42.586792 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=13.741282932 podStartE2EDuration="40.586775051s" podCreationTimestamp="2026-02-28 13:36:02 +0000 UTC" firstStartedPulling="2026-02-28 13:36:15.023442291 +0000 UTC m=+1189.265762948" lastFinishedPulling="2026-02-28 13:36:41.86893438 +0000 UTC m=+1216.111255067" observedRunningTime="2026-02-28 13:36:42.582450358 +0000 UTC m=+1216.824771015" watchObservedRunningTime="2026-02-28 13:36:42.586775051 +0000 UTC m=+1216.829095708"
Feb 28 13:36:42 crc kubenswrapper[4897]: I0228 13:36:42.798997 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75c566df67-95sj7"]
Feb 28 13:36:42 crc kubenswrapper[4897]: E0228 13:36:42.800163 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c2f7b4a-52e1-4c07-8783-1ab96747f5bb" containerName="dnsmasq-dns"
Feb 28 13:36:42 crc kubenswrapper[4897]: I0228 13:36:42.800179 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c2f7b4a-52e1-4c07-8783-1ab96747f5bb" containerName="dnsmasq-dns"
Feb 28 13:36:42 crc kubenswrapper[4897]: E0228 13:36:42.800216 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="175662b3-1ff4-45ef-b37b-3c0622eac202" containerName="dnsmasq-dns"
Feb 28 13:36:42 crc kubenswrapper[4897]: I0228 13:36:42.800226 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="175662b3-1ff4-45ef-b37b-3c0622eac202" containerName="dnsmasq-dns"
Feb 28 13:36:42 crc kubenswrapper[4897]: E0228 13:36:42.800256 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdcbdd69-5241-4875-aceb-401d47d6fad5" containerName="dnsmasq-dns"
Feb 28 13:36:42 crc kubenswrapper[4897]: I0228 13:36:42.800264 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdcbdd69-5241-4875-aceb-401d47d6fad5" containerName="dnsmasq-dns"
Feb 28 13:36:42 crc kubenswrapper[4897]: E0228 13:36:42.800320 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="175662b3-1ff4-45ef-b37b-3c0622eac202" containerName="init"
Feb 28 13:36:42 crc kubenswrapper[4897]: I0228 13:36:42.800328 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="175662b3-1ff4-45ef-b37b-3c0622eac202" containerName="init"
Feb 28 13:36:42 crc kubenswrapper[4897]: E0228 13:36:42.800357 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c2f7b4a-52e1-4c07-8783-1ab96747f5bb" containerName="init"
Feb 28 13:36:42 crc kubenswrapper[4897]: I0228 13:36:42.800364 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c2f7b4a-52e1-4c07-8783-1ab96747f5bb" containerName="init"
Feb 28 13:36:42 crc kubenswrapper[4897]: E0228 13:36:42.800379 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdcbdd69-5241-4875-aceb-401d47d6fad5" containerName="init"
Feb 28 13:36:42 crc kubenswrapper[4897]: I0228 13:36:42.800386 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdcbdd69-5241-4875-aceb-401d47d6fad5" containerName="init"
Feb 28 13:36:42 crc kubenswrapper[4897]: I0228 13:36:42.800592 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c2f7b4a-52e1-4c07-8783-1ab96747f5bb" containerName="dnsmasq-dns"
Feb 28 13:36:42 crc kubenswrapper[4897]: I0228 13:36:42.800619 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="175662b3-1ff4-45ef-b37b-3c0622eac202" containerName="dnsmasq-dns"
Feb 28 13:36:42 crc kubenswrapper[4897]: I0228 13:36:42.800645 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdcbdd69-5241-4875-aceb-401d47d6fad5" containerName="dnsmasq-dns"
Feb 28 13:36:42 crc kubenswrapper[4897]: I0228 13:36:42.801745 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c566df67-95sj7"
Feb 28 13:36:42 crc kubenswrapper[4897]: I0228 13:36:42.840789 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c566df67-95sj7"]
Feb 28 13:36:42 crc kubenswrapper[4897]: I0228 13:36:42.908596 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq74v\" (UniqueName: \"kubernetes.io/projected/8b08a755-562e-41ee-9591-eb9cb3fcb3c2-kube-api-access-bq74v\") pod \"dnsmasq-dns-75c566df67-95sj7\" (UID: \"8b08a755-562e-41ee-9591-eb9cb3fcb3c2\") " pod="openstack/dnsmasq-dns-75c566df67-95sj7"
Feb 28 13:36:42 crc kubenswrapper[4897]: I0228 13:36:42.908653 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b08a755-562e-41ee-9591-eb9cb3fcb3c2-ovsdbserver-nb\") pod \"dnsmasq-dns-75c566df67-95sj7\" (UID: \"8b08a755-562e-41ee-9591-eb9cb3fcb3c2\") " pod="openstack/dnsmasq-dns-75c566df67-95sj7"
Feb 28 13:36:42 crc kubenswrapper[4897]: I0228 13:36:42.908705 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b08a755-562e-41ee-9591-eb9cb3fcb3c2-ovsdbserver-sb\") pod \"dnsmasq-dns-75c566df67-95sj7\" (UID: \"8b08a755-562e-41ee-9591-eb9cb3fcb3c2\") " pod="openstack/dnsmasq-dns-75c566df67-95sj7"
Feb 28 13:36:42 crc kubenswrapper[4897]: I0228 13:36:42.908729 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b08a755-562e-41ee-9591-eb9cb3fcb3c2-dns-svc\") pod \"dnsmasq-dns-75c566df67-95sj7\" (UID: \"8b08a755-562e-41ee-9591-eb9cb3fcb3c2\") " pod="openstack/dnsmasq-dns-75c566df67-95sj7"
Feb 28 13:36:42 crc kubenswrapper[4897]: I0228 13:36:42.908773 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b08a755-562e-41ee-9591-eb9cb3fcb3c2-config\") pod \"dnsmasq-dns-75c566df67-95sj7\" (UID: \"8b08a755-562e-41ee-9591-eb9cb3fcb3c2\") " pod="openstack/dnsmasq-dns-75c566df67-95sj7"
Feb 28 13:36:43 crc kubenswrapper[4897]: I0228 13:36:43.010553 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b08a755-562e-41ee-9591-eb9cb3fcb3c2-dns-svc\") pod \"dnsmasq-dns-75c566df67-95sj7\" (UID: \"8b08a755-562e-41ee-9591-eb9cb3fcb3c2\") " pod="openstack/dnsmasq-dns-75c566df67-95sj7"
Feb 28 13:36:43 crc kubenswrapper[4897]: I0228 13:36:43.010636 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b08a755-562e-41ee-9591-eb9cb3fcb3c2-config\") pod \"dnsmasq-dns-75c566df67-95sj7\" (UID: \"8b08a755-562e-41ee-9591-eb9cb3fcb3c2\") " pod="openstack/dnsmasq-dns-75c566df67-95sj7"
Feb 28 13:36:43 crc kubenswrapper[4897]: I0228 13:36:43.010690 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq74v\" (UniqueName: \"kubernetes.io/projected/8b08a755-562e-41ee-9591-eb9cb3fcb3c2-kube-api-access-bq74v\") pod \"dnsmasq-dns-75c566df67-95sj7\" (UID: \"8b08a755-562e-41ee-9591-eb9cb3fcb3c2\") " pod="openstack/dnsmasq-dns-75c566df67-95sj7"
Feb 28 13:36:43 crc kubenswrapper[4897]: I0228 13:36:43.010736 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b08a755-562e-41ee-9591-eb9cb3fcb3c2-ovsdbserver-nb\") pod \"dnsmasq-dns-75c566df67-95sj7\" (UID: \"8b08a755-562e-41ee-9591-eb9cb3fcb3c2\") " pod="openstack/dnsmasq-dns-75c566df67-95sj7"
Feb 28 13:36:43 crc kubenswrapper[4897]: I0228 13:36:43.010797 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b08a755-562e-41ee-9591-eb9cb3fcb3c2-ovsdbserver-sb\") pod \"dnsmasq-dns-75c566df67-95sj7\" (UID: \"8b08a755-562e-41ee-9591-eb9cb3fcb3c2\") " pod="openstack/dnsmasq-dns-75c566df67-95sj7"
Feb 28 13:36:43 crc kubenswrapper[4897]: I0228 13:36:43.012198 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b08a755-562e-41ee-9591-eb9cb3fcb3c2-ovsdbserver-sb\") pod \"dnsmasq-dns-75c566df67-95sj7\" (UID: \"8b08a755-562e-41ee-9591-eb9cb3fcb3c2\") " pod="openstack/dnsmasq-dns-75c566df67-95sj7"
Feb 28 13:36:43 crc kubenswrapper[4897]: I0228 13:36:43.012210 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b08a755-562e-41ee-9591-eb9cb3fcb3c2-config\") pod \"dnsmasq-dns-75c566df67-95sj7\" (UID: \"8b08a755-562e-41ee-9591-eb9cb3fcb3c2\") " pod="openstack/dnsmasq-dns-75c566df67-95sj7"
Feb 28 13:36:43 crc 
kubenswrapper[4897]: I0228 13:36:43.012653 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b08a755-562e-41ee-9591-eb9cb3fcb3c2-ovsdbserver-nb\") pod \"dnsmasq-dns-75c566df67-95sj7\" (UID: \"8b08a755-562e-41ee-9591-eb9cb3fcb3c2\") " pod="openstack/dnsmasq-dns-75c566df67-95sj7" Feb 28 13:36:43 crc kubenswrapper[4897]: I0228 13:36:43.013011 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b08a755-562e-41ee-9591-eb9cb3fcb3c2-dns-svc\") pod \"dnsmasq-dns-75c566df67-95sj7\" (UID: \"8b08a755-562e-41ee-9591-eb9cb3fcb3c2\") " pod="openstack/dnsmasq-dns-75c566df67-95sj7" Feb 28 13:36:43 crc kubenswrapper[4897]: I0228 13:36:43.050757 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq74v\" (UniqueName: \"kubernetes.io/projected/8b08a755-562e-41ee-9591-eb9cb3fcb3c2-kube-api-access-bq74v\") pod \"dnsmasq-dns-75c566df67-95sj7\" (UID: \"8b08a755-562e-41ee-9591-eb9cb3fcb3c2\") " pod="openstack/dnsmasq-dns-75c566df67-95sj7" Feb 28 13:36:43 crc kubenswrapper[4897]: I0228 13:36:43.160985 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c566df67-95sj7" Feb 28 13:36:43 crc kubenswrapper[4897]: I0228 13:36:43.573189 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d7f297ea-652d-47ae-9831-fad10c6127ad","Type":"ContainerStarted","Data":"5282a74ea6c5f9d4361fd6e9883b6c4446f1f5c618ee23ada2ec4b213b322d48"} Feb 28 13:36:43 crc kubenswrapper[4897]: I0228 13:36:43.574653 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ch9bl" event={"ID":"995bc563-52dc-4755-b43f-96a2746d8bce","Type":"ContainerStarted","Data":"935b3c61f31ffdae58c116ebe75d4929e50f923fbd4e7ff29f14eb9da24b38c2"} Feb 28 13:36:43 crc kubenswrapper[4897]: I0228 13:36:43.678738 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c566df67-95sj7"] Feb 28 13:36:43 crc kubenswrapper[4897]: W0228 13:36:43.710029 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b08a755_562e_41ee_9591_eb9cb3fcb3c2.slice/crio-68e32f883d3b6ce97d9aced65239f92e55762729c8eeff6d1463575648b0bf9e WatchSource:0}: Error finding container 68e32f883d3b6ce97d9aced65239f92e55762729c8eeff6d1463575648b0bf9e: Status 404 returned error can't find the container with id 68e32f883d3b6ce97d9aced65239f92e55762729c8eeff6d1463575648b0bf9e Feb 28 13:36:43 crc kubenswrapper[4897]: I0228 13:36:43.956375 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 28 13:36:43 crc kubenswrapper[4897]: I0228 13:36:43.961432 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 28 13:36:43 crc kubenswrapper[4897]: I0228 13:36:43.963805 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 28 13:36:43 crc kubenswrapper[4897]: I0228 13:36:43.964117 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 28 13:36:43 crc kubenswrapper[4897]: I0228 13:36:43.964389 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 28 13:36:43 crc kubenswrapper[4897]: I0228 13:36:43.964428 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-6wvb8" Feb 28 13:36:43 crc kubenswrapper[4897]: I0228 13:36:43.979901 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.027624 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e07793a7-3e98-4a8d-bfb6-3c630f07d391-cache\") pod \"swift-storage-0\" (UID: \"e07793a7-3e98-4a8d-bfb6-3c630f07d391\") " pod="openstack/swift-storage-0" Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.027679 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e07793a7-3e98-4a8d-bfb6-3c630f07d391-lock\") pod \"swift-storage-0\" (UID: \"e07793a7-3e98-4a8d-bfb6-3c630f07d391\") " pod="openstack/swift-storage-0" Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.027875 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnfnr\" (UniqueName: \"kubernetes.io/projected/e07793a7-3e98-4a8d-bfb6-3c630f07d391-kube-api-access-xnfnr\") pod \"swift-storage-0\" (UID: \"e07793a7-3e98-4a8d-bfb6-3c630f07d391\") " pod="openstack/swift-storage-0" Feb 
28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.027975 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e07793a7-3e98-4a8d-bfb6-3c630f07d391-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"e07793a7-3e98-4a8d-bfb6-3c630f07d391\") " pod="openstack/swift-storage-0" Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.028187 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"e07793a7-3e98-4a8d-bfb6-3c630f07d391\") " pod="openstack/swift-storage-0" Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.028349 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e07793a7-3e98-4a8d-bfb6-3c630f07d391-etc-swift\") pod \"swift-storage-0\" (UID: \"e07793a7-3e98-4a8d-bfb6-3c630f07d391\") " pod="openstack/swift-storage-0" Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.129659 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e07793a7-3e98-4a8d-bfb6-3c630f07d391-cache\") pod \"swift-storage-0\" (UID: \"e07793a7-3e98-4a8d-bfb6-3c630f07d391\") " pod="openstack/swift-storage-0" Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.129709 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e07793a7-3e98-4a8d-bfb6-3c630f07d391-lock\") pod \"swift-storage-0\" (UID: \"e07793a7-3e98-4a8d-bfb6-3c630f07d391\") " pod="openstack/swift-storage-0" Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.129735 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnfnr\" (UniqueName: 
\"kubernetes.io/projected/e07793a7-3e98-4a8d-bfb6-3c630f07d391-kube-api-access-xnfnr\") pod \"swift-storage-0\" (UID: \"e07793a7-3e98-4a8d-bfb6-3c630f07d391\") " pod="openstack/swift-storage-0" Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.129754 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e07793a7-3e98-4a8d-bfb6-3c630f07d391-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"e07793a7-3e98-4a8d-bfb6-3c630f07d391\") " pod="openstack/swift-storage-0" Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.129797 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"e07793a7-3e98-4a8d-bfb6-3c630f07d391\") " pod="openstack/swift-storage-0" Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.129830 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e07793a7-3e98-4a8d-bfb6-3c630f07d391-etc-swift\") pod \"swift-storage-0\" (UID: \"e07793a7-3e98-4a8d-bfb6-3c630f07d391\") " pod="openstack/swift-storage-0" Feb 28 13:36:44 crc kubenswrapper[4897]: E0228 13:36:44.129948 4897 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 28 13:36:44 crc kubenswrapper[4897]: E0228 13:36:44.129961 4897 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 28 13:36:44 crc kubenswrapper[4897]: E0228 13:36:44.130005 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e07793a7-3e98-4a8d-bfb6-3c630f07d391-etc-swift podName:e07793a7-3e98-4a8d-bfb6-3c630f07d391 nodeName:}" failed. 
No retries permitted until 2026-02-28 13:36:44.629990178 +0000 UTC m=+1218.872310835 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e07793a7-3e98-4a8d-bfb6-3c630f07d391-etc-swift") pod "swift-storage-0" (UID: "e07793a7-3e98-4a8d-bfb6-3c630f07d391") : configmap "swift-ring-files" not found Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.130214 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"e07793a7-3e98-4a8d-bfb6-3c630f07d391\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/swift-storage-0" Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.130348 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e07793a7-3e98-4a8d-bfb6-3c630f07d391-cache\") pod \"swift-storage-0\" (UID: \"e07793a7-3e98-4a8d-bfb6-3c630f07d391\") " pod="openstack/swift-storage-0" Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.130586 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e07793a7-3e98-4a8d-bfb6-3c630f07d391-lock\") pod \"swift-storage-0\" (UID: \"e07793a7-3e98-4a8d-bfb6-3c630f07d391\") " pod="openstack/swift-storage-0" Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.134954 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e07793a7-3e98-4a8d-bfb6-3c630f07d391-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"e07793a7-3e98-4a8d-bfb6-3c630f07d391\") " pod="openstack/swift-storage-0" Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.146473 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnfnr\" (UniqueName: 
\"kubernetes.io/projected/e07793a7-3e98-4a8d-bfb6-3c630f07d391-kube-api-access-xnfnr\") pod \"swift-storage-0\" (UID: \"e07793a7-3e98-4a8d-bfb6-3c630f07d391\") " pod="openstack/swift-storage-0" Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.149676 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"e07793a7-3e98-4a8d-bfb6-3c630f07d391\") " pod="openstack/swift-storage-0" Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.582636 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"db99e06f-c263-4aef-b5c2-330eaed29fd4","Type":"ContainerStarted","Data":"dc3d90ec4b64edb7fcfddb52e0dc45a9291c264b7577b98db0144f3178572620"} Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.585596 4897 generic.go:334] "Generic (PLEG): container finished" podID="8b08a755-562e-41ee-9591-eb9cb3fcb3c2" containerID="869f2f809289b61bb3c9f4d46b09243f2f07274542f71fcfbdcbfef9e1d0a516" exitCode=0 Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.585687 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c566df67-95sj7" event={"ID":"8b08a755-562e-41ee-9591-eb9cb3fcb3c2","Type":"ContainerDied","Data":"869f2f809289b61bb3c9f4d46b09243f2f07274542f71fcfbdcbfef9e1d0a516"} Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.585752 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c566df67-95sj7" event={"ID":"8b08a755-562e-41ee-9591-eb9cb3fcb3c2","Type":"ContainerStarted","Data":"68e32f883d3b6ce97d9aced65239f92e55762729c8eeff6d1463575648b0bf9e"} Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.587241 4897 generic.go:334] "Generic (PLEG): container finished" podID="995bc563-52dc-4755-b43f-96a2746d8bce" containerID="935b3c61f31ffdae58c116ebe75d4929e50f923fbd4e7ff29f14eb9da24b38c2" exitCode=0 Feb 28 13:36:44 crc 
kubenswrapper[4897]: I0228 13:36:44.587263 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ch9bl" event={"ID":"995bc563-52dc-4755-b43f-96a2746d8bce","Type":"ContainerDied","Data":"935b3c61f31ffdae58c116ebe75d4929e50f923fbd4e7ff29f14eb9da24b38c2"} Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.602531 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-gpcgs"] Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.603826 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-gpcgs" Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.605714 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.611519 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.611743 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.621496 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-gpcgs"] Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.642643 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41910cc3-f0b4-4e6d-9c2e-562794444c84-combined-ca-bundle\") pod \"swift-ring-rebalance-gpcgs\" (UID: \"41910cc3-f0b4-4e6d-9c2e-562794444c84\") " pod="openstack/swift-ring-rebalance-gpcgs" Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.642730 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e07793a7-3e98-4a8d-bfb6-3c630f07d391-etc-swift\") pod \"swift-storage-0\" (UID: 
\"e07793a7-3e98-4a8d-bfb6-3c630f07d391\") " pod="openstack/swift-storage-0" Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.642825 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/41910cc3-f0b4-4e6d-9c2e-562794444c84-dispersionconf\") pod \"swift-ring-rebalance-gpcgs\" (UID: \"41910cc3-f0b4-4e6d-9c2e-562794444c84\") " pod="openstack/swift-ring-rebalance-gpcgs" Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.642911 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/41910cc3-f0b4-4e6d-9c2e-562794444c84-etc-swift\") pod \"swift-ring-rebalance-gpcgs\" (UID: \"41910cc3-f0b4-4e6d-9c2e-562794444c84\") " pod="openstack/swift-ring-rebalance-gpcgs" Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.642950 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/41910cc3-f0b4-4e6d-9c2e-562794444c84-ring-data-devices\") pod \"swift-ring-rebalance-gpcgs\" (UID: \"41910cc3-f0b4-4e6d-9c2e-562794444c84\") " pod="openstack/swift-ring-rebalance-gpcgs" Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.642976 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41910cc3-f0b4-4e6d-9c2e-562794444c84-scripts\") pod \"swift-ring-rebalance-gpcgs\" (UID: \"41910cc3-f0b4-4e6d-9c2e-562794444c84\") " pod="openstack/swift-ring-rebalance-gpcgs" Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.643037 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hmwq\" (UniqueName: \"kubernetes.io/projected/41910cc3-f0b4-4e6d-9c2e-562794444c84-kube-api-access-9hmwq\") pod \"swift-ring-rebalance-gpcgs\" 
(UID: \"41910cc3-f0b4-4e6d-9c2e-562794444c84\") " pod="openstack/swift-ring-rebalance-gpcgs" Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.643073 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/41910cc3-f0b4-4e6d-9c2e-562794444c84-swiftconf\") pod \"swift-ring-rebalance-gpcgs\" (UID: \"41910cc3-f0b4-4e6d-9c2e-562794444c84\") " pod="openstack/swift-ring-rebalance-gpcgs" Feb 28 13:36:44 crc kubenswrapper[4897]: E0228 13:36:44.643477 4897 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 28 13:36:44 crc kubenswrapper[4897]: E0228 13:36:44.643518 4897 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 28 13:36:44 crc kubenswrapper[4897]: E0228 13:36:44.643576 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e07793a7-3e98-4a8d-bfb6-3c630f07d391-etc-swift podName:e07793a7-3e98-4a8d-bfb6-3c630f07d391 nodeName:}" failed. No retries permitted until 2026-02-28 13:36:45.643553067 +0000 UTC m=+1219.885873724 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e07793a7-3e98-4a8d-bfb6-3c630f07d391-etc-swift") pod "swift-storage-0" (UID: "e07793a7-3e98-4a8d-bfb6-3c630f07d391") : configmap "swift-ring-files" not found Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.744229 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41910cc3-f0b4-4e6d-9c2e-562794444c84-combined-ca-bundle\") pod \"swift-ring-rebalance-gpcgs\" (UID: \"41910cc3-f0b4-4e6d-9c2e-562794444c84\") " pod="openstack/swift-ring-rebalance-gpcgs" Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.744403 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/41910cc3-f0b4-4e6d-9c2e-562794444c84-dispersionconf\") pod \"swift-ring-rebalance-gpcgs\" (UID: \"41910cc3-f0b4-4e6d-9c2e-562794444c84\") " pod="openstack/swift-ring-rebalance-gpcgs" Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.744459 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/41910cc3-f0b4-4e6d-9c2e-562794444c84-etc-swift\") pod \"swift-ring-rebalance-gpcgs\" (UID: \"41910cc3-f0b4-4e6d-9c2e-562794444c84\") " pod="openstack/swift-ring-rebalance-gpcgs" Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.744488 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/41910cc3-f0b4-4e6d-9c2e-562794444c84-ring-data-devices\") pod \"swift-ring-rebalance-gpcgs\" (UID: \"41910cc3-f0b4-4e6d-9c2e-562794444c84\") " pod="openstack/swift-ring-rebalance-gpcgs" Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.744522 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/41910cc3-f0b4-4e6d-9c2e-562794444c84-scripts\") pod \"swift-ring-rebalance-gpcgs\" (UID: \"41910cc3-f0b4-4e6d-9c2e-562794444c84\") " pod="openstack/swift-ring-rebalance-gpcgs" Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.744558 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hmwq\" (UniqueName: \"kubernetes.io/projected/41910cc3-f0b4-4e6d-9c2e-562794444c84-kube-api-access-9hmwq\") pod \"swift-ring-rebalance-gpcgs\" (UID: \"41910cc3-f0b4-4e6d-9c2e-562794444c84\") " pod="openstack/swift-ring-rebalance-gpcgs" Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.744620 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/41910cc3-f0b4-4e6d-9c2e-562794444c84-swiftconf\") pod \"swift-ring-rebalance-gpcgs\" (UID: \"41910cc3-f0b4-4e6d-9c2e-562794444c84\") " pod="openstack/swift-ring-rebalance-gpcgs" Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.745772 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/41910cc3-f0b4-4e6d-9c2e-562794444c84-ring-data-devices\") pod \"swift-ring-rebalance-gpcgs\" (UID: \"41910cc3-f0b4-4e6d-9c2e-562794444c84\") " pod="openstack/swift-ring-rebalance-gpcgs" Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.746468 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/41910cc3-f0b4-4e6d-9c2e-562794444c84-etc-swift\") pod \"swift-ring-rebalance-gpcgs\" (UID: \"41910cc3-f0b4-4e6d-9c2e-562794444c84\") " pod="openstack/swift-ring-rebalance-gpcgs" Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.747954 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41910cc3-f0b4-4e6d-9c2e-562794444c84-scripts\") pod \"swift-ring-rebalance-gpcgs\" (UID: 
\"41910cc3-f0b4-4e6d-9c2e-562794444c84\") " pod="openstack/swift-ring-rebalance-gpcgs" Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.750873 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41910cc3-f0b4-4e6d-9c2e-562794444c84-combined-ca-bundle\") pod \"swift-ring-rebalance-gpcgs\" (UID: \"41910cc3-f0b4-4e6d-9c2e-562794444c84\") " pod="openstack/swift-ring-rebalance-gpcgs" Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.753791 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/41910cc3-f0b4-4e6d-9c2e-562794444c84-swiftconf\") pod \"swift-ring-rebalance-gpcgs\" (UID: \"41910cc3-f0b4-4e6d-9c2e-562794444c84\") " pod="openstack/swift-ring-rebalance-gpcgs" Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.754476 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/41910cc3-f0b4-4e6d-9c2e-562794444c84-dispersionconf\") pod \"swift-ring-rebalance-gpcgs\" (UID: \"41910cc3-f0b4-4e6d-9c2e-562794444c84\") " pod="openstack/swift-ring-rebalance-gpcgs" Feb 28 13:36:44 crc kubenswrapper[4897]: I0228 13:36:44.764520 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hmwq\" (UniqueName: \"kubernetes.io/projected/41910cc3-f0b4-4e6d-9c2e-562794444c84-kube-api-access-9hmwq\") pod \"swift-ring-rebalance-gpcgs\" (UID: \"41910cc3-f0b4-4e6d-9c2e-562794444c84\") " pod="openstack/swift-ring-rebalance-gpcgs" Feb 28 13:36:45 crc kubenswrapper[4897]: I0228 13:36:45.042275 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-gpcgs" Feb 28 13:36:45 crc kubenswrapper[4897]: E0228 13:36:45.459738 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538096-ws9qt" podUID="6e94c0b2-21a6-496c-8188-dfcaf0d66b2b" Feb 28 13:36:45 crc kubenswrapper[4897]: I0228 13:36:45.506283 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-gpcgs"] Feb 28 13:36:45 crc kubenswrapper[4897]: W0228 13:36:45.506579 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41910cc3_f0b4_4e6d_9c2e_562794444c84.slice/crio-4983e140da39312c042d981ad977dc818bf66a2a96077fcad5c97bdb99bd0c02 WatchSource:0}: Error finding container 4983e140da39312c042d981ad977dc818bf66a2a96077fcad5c97bdb99bd0c02: Status 404 returned error can't find the container with id 4983e140da39312c042d981ad977dc818bf66a2a96077fcad5c97bdb99bd0c02 Feb 28 13:36:45 crc kubenswrapper[4897]: I0228 13:36:45.598334 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c566df67-95sj7" event={"ID":"8b08a755-562e-41ee-9591-eb9cb3fcb3c2","Type":"ContainerStarted","Data":"15b08ece46d11ef4f9d673a24b8c7454d790c138022240d86638a7e9fc43e830"} Feb 28 13:36:45 crc kubenswrapper[4897]: I0228 13:36:45.599298 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75c566df67-95sj7" Feb 28 13:36:45 crc kubenswrapper[4897]: I0228 13:36:45.605576 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ch9bl" event={"ID":"995bc563-52dc-4755-b43f-96a2746d8bce","Type":"ContainerStarted","Data":"22f43917f45e2aeb93aabe6cd45af354a3e359eb1ac774868f22f04d2838baa0"} Feb 28 13:36:45 crc kubenswrapper[4897]: I0228 
13:36:45.605632 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ch9bl" event={"ID":"995bc563-52dc-4755-b43f-96a2746d8bce","Type":"ContainerStarted","Data":"bfb698321b9d6790472d2748cafe1816101729e028a6fc91c0b7b2e9437ce15f"} Feb 28 13:36:45 crc kubenswrapper[4897]: I0228 13:36:45.606643 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-ch9bl" Feb 28 13:36:45 crc kubenswrapper[4897]: I0228 13:36:45.606711 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-ch9bl" Feb 28 13:36:45 crc kubenswrapper[4897]: I0228 13:36:45.607911 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gpcgs" event={"ID":"41910cc3-f0b4-4e6d-9c2e-562794444c84","Type":"ContainerStarted","Data":"4983e140da39312c042d981ad977dc818bf66a2a96077fcad5c97bdb99bd0c02"} Feb 28 13:36:45 crc kubenswrapper[4897]: I0228 13:36:45.622689 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75c566df67-95sj7" podStartSLOduration=3.622671759 podStartE2EDuration="3.622671759s" podCreationTimestamp="2026-02-28 13:36:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:36:45.615485205 +0000 UTC m=+1219.857805862" watchObservedRunningTime="2026-02-28 13:36:45.622671759 +0000 UTC m=+1219.864992416" Feb 28 13:36:45 crc kubenswrapper[4897]: I0228 13:36:45.646872 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-ch9bl" podStartSLOduration=12.278463724 podStartE2EDuration="39.646855316s" podCreationTimestamp="2026-02-28 13:36:06 +0000 UTC" firstStartedPulling="2026-02-28 13:36:15.222372242 +0000 UTC m=+1189.464692899" lastFinishedPulling="2026-02-28 13:36:42.590763824 +0000 UTC m=+1216.833084491" observedRunningTime="2026-02-28 
13:36:45.638626742 +0000 UTC m=+1219.880947429" watchObservedRunningTime="2026-02-28 13:36:45.646855316 +0000 UTC m=+1219.889175973" Feb 28 13:36:45 crc kubenswrapper[4897]: I0228 13:36:45.674965 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e07793a7-3e98-4a8d-bfb6-3c630f07d391-etc-swift\") pod \"swift-storage-0\" (UID: \"e07793a7-3e98-4a8d-bfb6-3c630f07d391\") " pod="openstack/swift-storage-0" Feb 28 13:36:45 crc kubenswrapper[4897]: E0228 13:36:45.675430 4897 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 28 13:36:45 crc kubenswrapper[4897]: E0228 13:36:45.675467 4897 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 28 13:36:45 crc kubenswrapper[4897]: E0228 13:36:45.675551 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e07793a7-3e98-4a8d-bfb6-3c630f07d391-etc-swift podName:e07793a7-3e98-4a8d-bfb6-3c630f07d391 nodeName:}" failed. No retries permitted until 2026-02-28 13:36:47.6755205 +0000 UTC m=+1221.917841228 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e07793a7-3e98-4a8d-bfb6-3c630f07d391-etc-swift") pod "swift-storage-0" (UID: "e07793a7-3e98-4a8d-bfb6-3c630f07d391") : configmap "swift-ring-files" not found Feb 28 13:36:47 crc kubenswrapper[4897]: I0228 13:36:47.710449 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e07793a7-3e98-4a8d-bfb6-3c630f07d391-etc-swift\") pod \"swift-storage-0\" (UID: \"e07793a7-3e98-4a8d-bfb6-3c630f07d391\") " pod="openstack/swift-storage-0" Feb 28 13:36:47 crc kubenswrapper[4897]: E0228 13:36:47.710628 4897 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 28 13:36:47 crc kubenswrapper[4897]: E0228 13:36:47.710920 4897 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 28 13:36:47 crc kubenswrapper[4897]: E0228 13:36:47.711378 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e07793a7-3e98-4a8d-bfb6-3c630f07d391-etc-swift podName:e07793a7-3e98-4a8d-bfb6-3c630f07d391 nodeName:}" failed. No retries permitted until 2026-02-28 13:36:51.711335341 +0000 UTC m=+1225.953656008 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e07793a7-3e98-4a8d-bfb6-3c630f07d391-etc-swift") pod "swift-storage-0" (UID: "e07793a7-3e98-4a8d-bfb6-3c630f07d391") : configmap "swift-ring-files" not found Feb 28 13:36:50 crc kubenswrapper[4897]: I0228 13:36:50.652648 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"03ffdd06-e63d-4a43-96f0-92e2d0e3a89d","Type":"ContainerStarted","Data":"cfce89baa10341fb37b1e1776f5d246886a63d6ccd4ce5e5268716a56dfb39e3"} Feb 28 13:36:50 crc kubenswrapper[4897]: I0228 13:36:50.656101 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"48d78132-b30d-4c29-8137-7af1597f8cc6","Type":"ContainerStarted","Data":"059bed25626c120e983c38ceab9aa176b39752e1f017a7d438f9483faf5c1c4a"} Feb 28 13:36:50 crc kubenswrapper[4897]: I0228 13:36:50.658231 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gpcgs" event={"ID":"41910cc3-f0b4-4e6d-9c2e-562794444c84","Type":"ContainerStarted","Data":"9e8a0c5f2b31b675a64512b41b8bf1bf773ea54576e02168c7e4c7ff24acae11"} Feb 28 13:36:50 crc kubenswrapper[4897]: I0228 13:36:50.691953 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=10.774091934 podStartE2EDuration="45.691935348s" podCreationTimestamp="2026-02-28 13:36:05 +0000 UTC" firstStartedPulling="2026-02-28 13:36:14.736402747 +0000 UTC m=+1188.978723404" lastFinishedPulling="2026-02-28 13:36:49.654246161 +0000 UTC m=+1223.896566818" observedRunningTime="2026-02-28 13:36:50.685930008 +0000 UTC m=+1224.928250675" watchObservedRunningTime="2026-02-28 13:36:50.691935348 +0000 UTC m=+1224.934256015" Feb 28 13:36:50 crc kubenswrapper[4897]: I0228 13:36:50.715367 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-gpcgs" podStartSLOduration=2.571550643 
podStartE2EDuration="6.715302252s" podCreationTimestamp="2026-02-28 13:36:44 +0000 UTC" firstStartedPulling="2026-02-28 13:36:45.51041996 +0000 UTC m=+1219.752740627" lastFinishedPulling="2026-02-28 13:36:49.654171579 +0000 UTC m=+1223.896492236" observedRunningTime="2026-02-28 13:36:50.709758885 +0000 UTC m=+1224.952079552" watchObservedRunningTime="2026-02-28 13:36:50.715302252 +0000 UTC m=+1224.957622909" Feb 28 13:36:50 crc kubenswrapper[4897]: I0228 13:36:50.739088 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=8.175867817 podStartE2EDuration="42.739060857s" podCreationTimestamp="2026-02-28 13:36:08 +0000 UTC" firstStartedPulling="2026-02-28 13:36:15.092449661 +0000 UTC m=+1189.334770328" lastFinishedPulling="2026-02-28 13:36:49.655642711 +0000 UTC m=+1223.897963368" observedRunningTime="2026-02-28 13:36:50.734747264 +0000 UTC m=+1224.977067961" watchObservedRunningTime="2026-02-28 13:36:50.739060857 +0000 UTC m=+1224.981381524" Feb 28 13:36:51 crc kubenswrapper[4897]: I0228 13:36:51.517962 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 28 13:36:51 crc kubenswrapper[4897]: I0228 13:36:51.518349 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 28 13:36:51 crc kubenswrapper[4897]: I0228 13:36:51.793228 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e07793a7-3e98-4a8d-bfb6-3c630f07d391-etc-swift\") pod \"swift-storage-0\" (UID: \"e07793a7-3e98-4a8d-bfb6-3c630f07d391\") " pod="openstack/swift-storage-0" Feb 28 13:36:51 crc kubenswrapper[4897]: E0228 13:36:51.793841 4897 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 28 13:36:51 crc kubenswrapper[4897]: E0228 13:36:51.793862 4897 projected.go:194] Error preparing data 
for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 28 13:36:51 crc kubenswrapper[4897]: E0228 13:36:51.793906 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e07793a7-3e98-4a8d-bfb6-3c630f07d391-etc-swift podName:e07793a7-3e98-4a8d-bfb6-3c630f07d391 nodeName:}" failed. No retries permitted until 2026-02-28 13:36:59.793889841 +0000 UTC m=+1234.036210508 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e07793a7-3e98-4a8d-bfb6-3c630f07d391-etc-swift") pod "swift-storage-0" (UID: "e07793a7-3e98-4a8d-bfb6-3c630f07d391") : configmap "swift-ring-files" not found Feb 28 13:36:52 crc kubenswrapper[4897]: I0228 13:36:52.527940 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 28 13:36:52 crc kubenswrapper[4897]: I0228 13:36:52.619431 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 28 13:36:52 crc kubenswrapper[4897]: I0228 13:36:52.677550 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 28 13:36:52 crc kubenswrapper[4897]: I0228 13:36:52.860758 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 28 13:36:53 crc kubenswrapper[4897]: I0228 13:36:53.163228 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75c566df67-95sj7" Feb 28 13:36:53 crc kubenswrapper[4897]: I0228 13:36:53.215583 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-599f5467c5-2bj5z"] Feb 28 13:36:53 crc kubenswrapper[4897]: I0228 13:36:53.215807 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-599f5467c5-2bj5z" podUID="8fc42472-3941-42bf-bab4-ca05277cb6cf" 
containerName="dnsmasq-dns" containerID="cri-o://9dee223eefb6b4d79a339b608d6a8789533eeff1e8627be46d2373a28f22b7b8" gracePeriod=10 Feb 28 13:36:53 crc kubenswrapper[4897]: I0228 13:36:53.685672 4897 generic.go:334] "Generic (PLEG): container finished" podID="db99e06f-c263-4aef-b5c2-330eaed29fd4" containerID="dc3d90ec4b64edb7fcfddb52e0dc45a9291c264b7577b98db0144f3178572620" exitCode=0 Feb 28 13:36:53 crc kubenswrapper[4897]: I0228 13:36:53.685836 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"db99e06f-c263-4aef-b5c2-330eaed29fd4","Type":"ContainerDied","Data":"dc3d90ec4b64edb7fcfddb52e0dc45a9291c264b7577b98db0144f3178572620"} Feb 28 13:36:53 crc kubenswrapper[4897]: I0228 13:36:53.687695 4897 generic.go:334] "Generic (PLEG): container finished" podID="d7f297ea-652d-47ae-9831-fad10c6127ad" containerID="5282a74ea6c5f9d4361fd6e9883b6c4446f1f5c618ee23ada2ec4b213b322d48" exitCode=0 Feb 28 13:36:53 crc kubenswrapper[4897]: I0228 13:36:53.687730 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d7f297ea-652d-47ae-9831-fad10c6127ad","Type":"ContainerDied","Data":"5282a74ea6c5f9d4361fd6e9883b6c4446f1f5c618ee23ada2ec4b213b322d48"} Feb 28 13:36:53 crc kubenswrapper[4897]: I0228 13:36:53.695015 4897 generic.go:334] "Generic (PLEG): container finished" podID="8fc42472-3941-42bf-bab4-ca05277cb6cf" containerID="9dee223eefb6b4d79a339b608d6a8789533eeff1e8627be46d2373a28f22b7b8" exitCode=0 Feb 28 13:36:53 crc kubenswrapper[4897]: I0228 13:36:53.695201 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-599f5467c5-2bj5z" event={"ID":"8fc42472-3941-42bf-bab4-ca05277cb6cf","Type":"ContainerDied","Data":"9dee223eefb6b4d79a339b608d6a8789533eeff1e8627be46d2373a28f22b7b8"} Feb 28 13:36:53 crc kubenswrapper[4897]: I0228 13:36:53.695373 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-599f5467c5-2bj5z" 
event={"ID":"8fc42472-3941-42bf-bab4-ca05277cb6cf","Type":"ContainerDied","Data":"1ec4005d1d6dbca98542431c26bf2cf259e6f60ec74b650833a013d04066f2c7"} Feb 28 13:36:53 crc kubenswrapper[4897]: I0228 13:36:53.695397 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ec4005d1d6dbca98542431c26bf2cf259e6f60ec74b650833a013d04066f2c7" Feb 28 13:36:53 crc kubenswrapper[4897]: I0228 13:36:53.795682 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-599f5467c5-2bj5z" Feb 28 13:36:53 crc kubenswrapper[4897]: I0228 13:36:53.835795 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8fc42472-3941-42bf-bab4-ca05277cb6cf-ovsdbserver-sb\") pod \"8fc42472-3941-42bf-bab4-ca05277cb6cf\" (UID: \"8fc42472-3941-42bf-bab4-ca05277cb6cf\") " Feb 28 13:36:53 crc kubenswrapper[4897]: I0228 13:36:53.835861 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7h9tv\" (UniqueName: \"kubernetes.io/projected/8fc42472-3941-42bf-bab4-ca05277cb6cf-kube-api-access-7h9tv\") pod \"8fc42472-3941-42bf-bab4-ca05277cb6cf\" (UID: \"8fc42472-3941-42bf-bab4-ca05277cb6cf\") " Feb 28 13:36:53 crc kubenswrapper[4897]: I0228 13:36:53.835890 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8fc42472-3941-42bf-bab4-ca05277cb6cf-ovsdbserver-nb\") pod \"8fc42472-3941-42bf-bab4-ca05277cb6cf\" (UID: \"8fc42472-3941-42bf-bab4-ca05277cb6cf\") " Feb 28 13:36:53 crc kubenswrapper[4897]: I0228 13:36:53.835917 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fc42472-3941-42bf-bab4-ca05277cb6cf-config\") pod \"8fc42472-3941-42bf-bab4-ca05277cb6cf\" (UID: \"8fc42472-3941-42bf-bab4-ca05277cb6cf\") " Feb 28 13:36:53 crc 
kubenswrapper[4897]: I0228 13:36:53.835983 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fc42472-3941-42bf-bab4-ca05277cb6cf-dns-svc\") pod \"8fc42472-3941-42bf-bab4-ca05277cb6cf\" (UID: \"8fc42472-3941-42bf-bab4-ca05277cb6cf\") " Feb 28 13:36:53 crc kubenswrapper[4897]: I0228 13:36:53.894802 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fc42472-3941-42bf-bab4-ca05277cb6cf-kube-api-access-7h9tv" (OuterVolumeSpecName: "kube-api-access-7h9tv") pod "8fc42472-3941-42bf-bab4-ca05277cb6cf" (UID: "8fc42472-3941-42bf-bab4-ca05277cb6cf"). InnerVolumeSpecName "kube-api-access-7h9tv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:36:53 crc kubenswrapper[4897]: I0228 13:36:53.942135 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7h9tv\" (UniqueName: \"kubernetes.io/projected/8fc42472-3941-42bf-bab4-ca05277cb6cf-kube-api-access-7h9tv\") on node \"crc\" DevicePath \"\"" Feb 28 13:36:53 crc kubenswrapper[4897]: I0228 13:36:53.970888 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fc42472-3941-42bf-bab4-ca05277cb6cf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8fc42472-3941-42bf-bab4-ca05277cb6cf" (UID: "8fc42472-3941-42bf-bab4-ca05277cb6cf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:36:53 crc kubenswrapper[4897]: I0228 13:36:53.971458 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fc42472-3941-42bf-bab4-ca05277cb6cf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8fc42472-3941-42bf-bab4-ca05277cb6cf" (UID: "8fc42472-3941-42bf-bab4-ca05277cb6cf"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:36:53 crc kubenswrapper[4897]: I0228 13:36:53.979355 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fc42472-3941-42bf-bab4-ca05277cb6cf-config" (OuterVolumeSpecName: "config") pod "8fc42472-3941-42bf-bab4-ca05277cb6cf" (UID: "8fc42472-3941-42bf-bab4-ca05277cb6cf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:36:53 crc kubenswrapper[4897]: I0228 13:36:53.983233 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fc42472-3941-42bf-bab4-ca05277cb6cf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8fc42472-3941-42bf-bab4-ca05277cb6cf" (UID: "8fc42472-3941-42bf-bab4-ca05277cb6cf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:36:54 crc kubenswrapper[4897]: I0228 13:36:54.042825 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8fc42472-3941-42bf-bab4-ca05277cb6cf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 13:36:54 crc kubenswrapper[4897]: I0228 13:36:54.042858 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8fc42472-3941-42bf-bab4-ca05277cb6cf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 13:36:54 crc kubenswrapper[4897]: I0228 13:36:54.042869 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fc42472-3941-42bf-bab4-ca05277cb6cf-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:36:54 crc kubenswrapper[4897]: I0228 13:36:54.042878 4897 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fc42472-3941-42bf-bab4-ca05277cb6cf-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 13:36:54 crc kubenswrapper[4897]: I0228 13:36:54.592469 
4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 28 13:36:54 crc kubenswrapper[4897]: I0228 13:36:54.683334 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 28 13:36:54 crc kubenswrapper[4897]: I0228 13:36:54.707045 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"db99e06f-c263-4aef-b5c2-330eaed29fd4","Type":"ContainerStarted","Data":"88b4ca98050c385fe009d5d3696d8ed9c37f8927cc8a12bc0f307cc0519476f0"} Feb 28 13:36:54 crc kubenswrapper[4897]: I0228 13:36:54.709253 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d7f297ea-652d-47ae-9831-fad10c6127ad","Type":"ContainerStarted","Data":"c6ede32136c568e1c4f1e5a3ba8a001e3480bc64dd5460151f1a6f48a4f2a69f"} Feb 28 13:36:54 crc kubenswrapper[4897]: I0228 13:36:54.709375 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-599f5467c5-2bj5z" Feb 28 13:36:54 crc kubenswrapper[4897]: I0228 13:36:54.788767 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371979.066025 podStartE2EDuration="57.788751304s" podCreationTimestamp="2026-02-28 13:35:57 +0000 UTC" firstStartedPulling="2026-02-28 13:36:14.352161162 +0000 UTC m=+1188.594481819" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:36:54.781762645 +0000 UTC m=+1229.024083302" watchObservedRunningTime="2026-02-28 13:36:54.788751304 +0000 UTC m=+1229.031071961" Feb 28 13:36:54 crc kubenswrapper[4897]: I0228 13:36:54.793578 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=27.784024095 podStartE2EDuration="55.79356119s" podCreationTimestamp="2026-02-28 13:35:59 +0000 UTC" firstStartedPulling="2026-02-28 13:36:14.58299877 +0000 UTC m=+1188.825319427" lastFinishedPulling="2026-02-28 13:36:42.592535855 +0000 UTC m=+1216.834856522" observedRunningTime="2026-02-28 13:36:54.758139514 +0000 UTC m=+1229.000460201" watchObservedRunningTime="2026-02-28 13:36:54.79356119 +0000 UTC m=+1229.035881847" Feb 28 13:36:54 crc kubenswrapper[4897]: I0228 13:36:54.811785 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-599f5467c5-2bj5z"] Feb 28 13:36:54 crc kubenswrapper[4897]: I0228 13:36:54.822528 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-599f5467c5-2bj5z"] Feb 28 13:36:55 crc kubenswrapper[4897]: I0228 13:36:55.586126 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 28 13:36:55 crc kubenswrapper[4897]: I0228 13:36:55.815695 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 28 13:36:55 crc kubenswrapper[4897]: E0228 13:36:55.816432 4897 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fc42472-3941-42bf-bab4-ca05277cb6cf" containerName="init" Feb 28 13:36:55 crc kubenswrapper[4897]: I0228 13:36:55.816455 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fc42472-3941-42bf-bab4-ca05277cb6cf" containerName="init" Feb 28 13:36:55 crc kubenswrapper[4897]: E0228 13:36:55.816483 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fc42472-3941-42bf-bab4-ca05277cb6cf" containerName="dnsmasq-dns" Feb 28 13:36:55 crc kubenswrapper[4897]: I0228 13:36:55.816492 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fc42472-3941-42bf-bab4-ca05277cb6cf" containerName="dnsmasq-dns" Feb 28 13:36:55 crc kubenswrapper[4897]: I0228 13:36:55.818384 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fc42472-3941-42bf-bab4-ca05277cb6cf" containerName="dnsmasq-dns" Feb 28 13:36:55 crc kubenswrapper[4897]: I0228 13:36:55.821067 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 28 13:36:55 crc kubenswrapper[4897]: I0228 13:36:55.831620 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-f4bzq" Feb 28 13:36:55 crc kubenswrapper[4897]: I0228 13:36:55.831802 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 28 13:36:55 crc kubenswrapper[4897]: I0228 13:36:55.831913 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 28 13:36:55 crc kubenswrapper[4897]: I0228 13:36:55.831934 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 28 13:36:55 crc kubenswrapper[4897]: I0228 13:36:55.832202 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 28 13:36:55 crc kubenswrapper[4897]: I0228 13:36:55.979670 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv6xg\" (UniqueName: \"kubernetes.io/projected/f3afe36e-988c-4fca-8ca8-c24353046ea7-kube-api-access-zv6xg\") pod \"ovn-northd-0\" (UID: \"f3afe36e-988c-4fca-8ca8-c24353046ea7\") " pod="openstack/ovn-northd-0" Feb 28 13:36:55 crc kubenswrapper[4897]: I0228 13:36:55.979711 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3afe36e-988c-4fca-8ca8-c24353046ea7-config\") pod \"ovn-northd-0\" (UID: \"f3afe36e-988c-4fca-8ca8-c24353046ea7\") " pod="openstack/ovn-northd-0" Feb 28 13:36:55 crc kubenswrapper[4897]: I0228 13:36:55.979763 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3afe36e-988c-4fca-8ca8-c24353046ea7-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f3afe36e-988c-4fca-8ca8-c24353046ea7\") " 
pod="openstack/ovn-northd-0" Feb 28 13:36:55 crc kubenswrapper[4897]: I0228 13:36:55.979786 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3afe36e-988c-4fca-8ca8-c24353046ea7-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f3afe36e-988c-4fca-8ca8-c24353046ea7\") " pod="openstack/ovn-northd-0" Feb 28 13:36:55 crc kubenswrapper[4897]: I0228 13:36:55.979817 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f3afe36e-988c-4fca-8ca8-c24353046ea7-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f3afe36e-988c-4fca-8ca8-c24353046ea7\") " pod="openstack/ovn-northd-0" Feb 28 13:36:55 crc kubenswrapper[4897]: I0228 13:36:55.980072 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3afe36e-988c-4fca-8ca8-c24353046ea7-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f3afe36e-988c-4fca-8ca8-c24353046ea7\") " pod="openstack/ovn-northd-0" Feb 28 13:36:55 crc kubenswrapper[4897]: I0228 13:36:55.980163 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3afe36e-988c-4fca-8ca8-c24353046ea7-scripts\") pod \"ovn-northd-0\" (UID: \"f3afe36e-988c-4fca-8ca8-c24353046ea7\") " pod="openstack/ovn-northd-0" Feb 28 13:36:56 crc kubenswrapper[4897]: I0228 13:36:56.081960 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3afe36e-988c-4fca-8ca8-c24353046ea7-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f3afe36e-988c-4fca-8ca8-c24353046ea7\") " pod="openstack/ovn-northd-0" Feb 28 13:36:56 crc kubenswrapper[4897]: I0228 13:36:56.082004 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3afe36e-988c-4fca-8ca8-c24353046ea7-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f3afe36e-988c-4fca-8ca8-c24353046ea7\") " pod="openstack/ovn-northd-0" Feb 28 13:36:56 crc kubenswrapper[4897]: I0228 13:36:56.082034 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f3afe36e-988c-4fca-8ca8-c24353046ea7-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f3afe36e-988c-4fca-8ca8-c24353046ea7\") " pod="openstack/ovn-northd-0" Feb 28 13:36:56 crc kubenswrapper[4897]: I0228 13:36:56.082132 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3afe36e-988c-4fca-8ca8-c24353046ea7-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f3afe36e-988c-4fca-8ca8-c24353046ea7\") " pod="openstack/ovn-northd-0" Feb 28 13:36:56 crc kubenswrapper[4897]: I0228 13:36:56.082182 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3afe36e-988c-4fca-8ca8-c24353046ea7-scripts\") pod \"ovn-northd-0\" (UID: \"f3afe36e-988c-4fca-8ca8-c24353046ea7\") " pod="openstack/ovn-northd-0" Feb 28 13:36:56 crc kubenswrapper[4897]: I0228 13:36:56.082208 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3afe36e-988c-4fca-8ca8-c24353046ea7-config\") pod \"ovn-northd-0\" (UID: \"f3afe36e-988c-4fca-8ca8-c24353046ea7\") " pod="openstack/ovn-northd-0" Feb 28 13:36:56 crc kubenswrapper[4897]: I0228 13:36:56.082223 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv6xg\" (UniqueName: \"kubernetes.io/projected/f3afe36e-988c-4fca-8ca8-c24353046ea7-kube-api-access-zv6xg\") pod \"ovn-northd-0\" (UID: 
\"f3afe36e-988c-4fca-8ca8-c24353046ea7\") " pod="openstack/ovn-northd-0" Feb 28 13:36:56 crc kubenswrapper[4897]: I0228 13:36:56.082870 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f3afe36e-988c-4fca-8ca8-c24353046ea7-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f3afe36e-988c-4fca-8ca8-c24353046ea7\") " pod="openstack/ovn-northd-0" Feb 28 13:36:56 crc kubenswrapper[4897]: I0228 13:36:56.083386 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3afe36e-988c-4fca-8ca8-c24353046ea7-scripts\") pod \"ovn-northd-0\" (UID: \"f3afe36e-988c-4fca-8ca8-c24353046ea7\") " pod="openstack/ovn-northd-0" Feb 28 13:36:56 crc kubenswrapper[4897]: I0228 13:36:56.083785 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3afe36e-988c-4fca-8ca8-c24353046ea7-config\") pod \"ovn-northd-0\" (UID: \"f3afe36e-988c-4fca-8ca8-c24353046ea7\") " pod="openstack/ovn-northd-0" Feb 28 13:36:56 crc kubenswrapper[4897]: I0228 13:36:56.094332 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3afe36e-988c-4fca-8ca8-c24353046ea7-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f3afe36e-988c-4fca-8ca8-c24353046ea7\") " pod="openstack/ovn-northd-0" Feb 28 13:36:56 crc kubenswrapper[4897]: I0228 13:36:56.094497 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3afe36e-988c-4fca-8ca8-c24353046ea7-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f3afe36e-988c-4fca-8ca8-c24353046ea7\") " pod="openstack/ovn-northd-0" Feb 28 13:36:56 crc kubenswrapper[4897]: I0228 13:36:56.097438 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f3afe36e-988c-4fca-8ca8-c24353046ea7-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f3afe36e-988c-4fca-8ca8-c24353046ea7\") " pod="openstack/ovn-northd-0" Feb 28 13:36:56 crc kubenswrapper[4897]: I0228 13:36:56.098574 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv6xg\" (UniqueName: \"kubernetes.io/projected/f3afe36e-988c-4fca-8ca8-c24353046ea7-kube-api-access-zv6xg\") pod \"ovn-northd-0\" (UID: \"f3afe36e-988c-4fca-8ca8-c24353046ea7\") " pod="openstack/ovn-northd-0" Feb 28 13:36:56 crc kubenswrapper[4897]: I0228 13:36:56.151744 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 28 13:36:56 crc kubenswrapper[4897]: I0228 13:36:56.465463 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fc42472-3941-42bf-bab4-ca05277cb6cf" path="/var/lib/kubelet/pods/8fc42472-3941-42bf-bab4-ca05277cb6cf/volumes" Feb 28 13:36:56 crc kubenswrapper[4897]: I0228 13:36:56.589938 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 28 13:36:56 crc kubenswrapper[4897]: I0228 13:36:56.720999 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f3afe36e-988c-4fca-8ca8-c24353046ea7","Type":"ContainerStarted","Data":"ec40ca9d9c1a165643e2c07112ed3297c6ed282faf460555f1e326a98e0e4b53"} Feb 28 13:36:56 crc kubenswrapper[4897]: I0228 13:36:56.722572 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6","Type":"ContainerStarted","Data":"c79d8a8e4035f7ad944a9279ea16c6a346d115234f441fbc2b9c154734a097d5"} Feb 28 13:36:57 crc kubenswrapper[4897]: I0228 13:36:57.739643 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"f3afe36e-988c-4fca-8ca8-c24353046ea7","Type":"ContainerStarted","Data":"9f27de8e88bd7c382bc7fea64fb8a61edbf397451f3419ba082e17caf78a5cbd"} Feb 28 13:36:57 crc kubenswrapper[4897]: I0228 13:36:57.740231 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f3afe36e-988c-4fca-8ca8-c24353046ea7","Type":"ContainerStarted","Data":"1917328d72b8a7ac510fee0601c0c2f44746b146880d6078c56f071d45a0f1b6"} Feb 28 13:36:57 crc kubenswrapper[4897]: I0228 13:36:57.741220 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 28 13:36:57 crc kubenswrapper[4897]: I0228 13:36:57.761509 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.167906266 podStartE2EDuration="2.761496688s" podCreationTimestamp="2026-02-28 13:36:55 +0000 UTC" firstStartedPulling="2026-02-28 13:36:56.594891589 +0000 UTC m=+1230.837212246" lastFinishedPulling="2026-02-28 13:36:57.188482021 +0000 UTC m=+1231.430802668" observedRunningTime="2026-02-28 13:36:57.758347589 +0000 UTC m=+1232.000668246" watchObservedRunningTime="2026-02-28 13:36:57.761496688 +0000 UTC m=+1232.003817345" Feb 28 13:36:58 crc kubenswrapper[4897]: I0228 13:36:58.755109 4897 generic.go:334] "Generic (PLEG): container finished" podID="41910cc3-f0b4-4e6d-9c2e-562794444c84" containerID="9e8a0c5f2b31b675a64512b41b8bf1bf773ea54576e02168c7e4c7ff24acae11" exitCode=0 Feb 28 13:36:58 crc kubenswrapper[4897]: I0228 13:36:58.755378 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gpcgs" event={"ID":"41910cc3-f0b4-4e6d-9c2e-562794444c84","Type":"ContainerDied","Data":"9e8a0c5f2b31b675a64512b41b8bf1bf773ea54576e02168c7e4c7ff24acae11"} Feb 28 13:36:58 crc kubenswrapper[4897]: I0228 13:36:58.947083 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 28 13:36:58 crc kubenswrapper[4897]: 
I0228 13:36:58.947410 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 28 13:36:59 crc kubenswrapper[4897]: I0228 13:36:59.077182 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 28 13:36:59 crc kubenswrapper[4897]: I0228 13:36:59.845154 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e07793a7-3e98-4a8d-bfb6-3c630f07d391-etc-swift\") pod \"swift-storage-0\" (UID: \"e07793a7-3e98-4a8d-bfb6-3c630f07d391\") " pod="openstack/swift-storage-0" Feb 28 13:36:59 crc kubenswrapper[4897]: I0228 13:36:59.858730 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e07793a7-3e98-4a8d-bfb6-3c630f07d391-etc-swift\") pod \"swift-storage-0\" (UID: \"e07793a7-3e98-4a8d-bfb6-3c630f07d391\") " pod="openstack/swift-storage-0" Feb 28 13:36:59 crc kubenswrapper[4897]: I0228 13:36:59.906561 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 28 13:36:59 crc kubenswrapper[4897]: E0228 13:36:59.964959 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 28 13:36:59 crc kubenswrapper[4897]: E0228 13:36:59.965610 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7wpnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorPro
file:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-29kqk_openshift-marketplace(dbe86f80-68e4-4170-8801-cea07c362d5c): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 13:36:59 crc kubenswrapper[4897]: E0228 13:36:59.966915 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:36:59 crc kubenswrapper[4897]: I0228 13:36:59.971521 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 28 13:37:00 crc kubenswrapper[4897]: I0228 13:37:00.283638 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-gpcgs" Feb 28 13:37:00 crc kubenswrapper[4897]: I0228 13:37:00.361795 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hmwq\" (UniqueName: \"kubernetes.io/projected/41910cc3-f0b4-4e6d-9c2e-562794444c84-kube-api-access-9hmwq\") pod \"41910cc3-f0b4-4e6d-9c2e-562794444c84\" (UID: \"41910cc3-f0b4-4e6d-9c2e-562794444c84\") " Feb 28 13:37:00 crc kubenswrapper[4897]: I0228 13:37:00.361899 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/41910cc3-f0b4-4e6d-9c2e-562794444c84-swiftconf\") pod \"41910cc3-f0b4-4e6d-9c2e-562794444c84\" (UID: \"41910cc3-f0b4-4e6d-9c2e-562794444c84\") " Feb 28 13:37:00 crc kubenswrapper[4897]: I0228 13:37:00.361940 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/41910cc3-f0b4-4e6d-9c2e-562794444c84-etc-swift\") pod \"41910cc3-f0b4-4e6d-9c2e-562794444c84\" (UID: \"41910cc3-f0b4-4e6d-9c2e-562794444c84\") " Feb 28 13:37:00 crc kubenswrapper[4897]: I0228 13:37:00.361971 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/41910cc3-f0b4-4e6d-9c2e-562794444c84-dispersionconf\") pod \"41910cc3-f0b4-4e6d-9c2e-562794444c84\" (UID: \"41910cc3-f0b4-4e6d-9c2e-562794444c84\") " Feb 28 13:37:00 crc kubenswrapper[4897]: I0228 13:37:00.362051 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41910cc3-f0b4-4e6d-9c2e-562794444c84-scripts\") pod \"41910cc3-f0b4-4e6d-9c2e-562794444c84\" (UID: \"41910cc3-f0b4-4e6d-9c2e-562794444c84\") " Feb 28 13:37:00 crc kubenswrapper[4897]: I0228 13:37:00.362081 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/41910cc3-f0b4-4e6d-9c2e-562794444c84-ring-data-devices\") pod \"41910cc3-f0b4-4e6d-9c2e-562794444c84\" (UID: \"41910cc3-f0b4-4e6d-9c2e-562794444c84\") " Feb 28 13:37:00 crc kubenswrapper[4897]: I0228 13:37:00.362113 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41910cc3-f0b4-4e6d-9c2e-562794444c84-combined-ca-bundle\") pod \"41910cc3-f0b4-4e6d-9c2e-562794444c84\" (UID: \"41910cc3-f0b4-4e6d-9c2e-562794444c84\") " Feb 28 13:37:00 crc kubenswrapper[4897]: I0228 13:37:00.366165 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41910cc3-f0b4-4e6d-9c2e-562794444c84-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "41910cc3-f0b4-4e6d-9c2e-562794444c84" (UID: "41910cc3-f0b4-4e6d-9c2e-562794444c84"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:37:00 crc kubenswrapper[4897]: I0228 13:37:00.366877 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41910cc3-f0b4-4e6d-9c2e-562794444c84-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "41910cc3-f0b4-4e6d-9c2e-562794444c84" (UID: "41910cc3-f0b4-4e6d-9c2e-562794444c84"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:37:00 crc kubenswrapper[4897]: I0228 13:37:00.376660 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41910cc3-f0b4-4e6d-9c2e-562794444c84-kube-api-access-9hmwq" (OuterVolumeSpecName: "kube-api-access-9hmwq") pod "41910cc3-f0b4-4e6d-9c2e-562794444c84" (UID: "41910cc3-f0b4-4e6d-9c2e-562794444c84"). InnerVolumeSpecName "kube-api-access-9hmwq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:37:00 crc kubenswrapper[4897]: I0228 13:37:00.400502 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41910cc3-f0b4-4e6d-9c2e-562794444c84-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "41910cc3-f0b4-4e6d-9c2e-562794444c84" (UID: "41910cc3-f0b4-4e6d-9c2e-562794444c84"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:37:00 crc kubenswrapper[4897]: I0228 13:37:00.439395 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 28 13:37:00 crc kubenswrapper[4897]: I0228 13:37:00.440418 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 28 13:37:00 crc kubenswrapper[4897]: I0228 13:37:00.444096 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41910cc3-f0b4-4e6d-9c2e-562794444c84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41910cc3-f0b4-4e6d-9c2e-562794444c84" (UID: "41910cc3-f0b4-4e6d-9c2e-562794444c84"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:37:00 crc kubenswrapper[4897]: I0228 13:37:00.455236 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41910cc3-f0b4-4e6d-9c2e-562794444c84-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "41910cc3-f0b4-4e6d-9c2e-562794444c84" (UID: "41910cc3-f0b4-4e6d-9c2e-562794444c84"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:37:00 crc kubenswrapper[4897]: I0228 13:37:00.461074 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41910cc3-f0b4-4e6d-9c2e-562794444c84-scripts" (OuterVolumeSpecName: "scripts") pod "41910cc3-f0b4-4e6d-9c2e-562794444c84" (UID: "41910cc3-f0b4-4e6d-9c2e-562794444c84"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:37:00 crc kubenswrapper[4897]: I0228 13:37:00.465440 4897 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/41910cc3-f0b4-4e6d-9c2e-562794444c84-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:00 crc kubenswrapper[4897]: I0228 13:37:00.465466 4897 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/41910cc3-f0b4-4e6d-9c2e-562794444c84-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:00 crc kubenswrapper[4897]: I0228 13:37:00.465476 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41910cc3-f0b4-4e6d-9c2e-562794444c84-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:00 crc kubenswrapper[4897]: I0228 13:37:00.465485 4897 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/41910cc3-f0b4-4e6d-9c2e-562794444c84-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:00 crc kubenswrapper[4897]: I0228 13:37:00.465493 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41910cc3-f0b4-4e6d-9c2e-562794444c84-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:00 crc kubenswrapper[4897]: I0228 13:37:00.465501 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hmwq\" (UniqueName: 
\"kubernetes.io/projected/41910cc3-f0b4-4e6d-9c2e-562794444c84-kube-api-access-9hmwq\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:00 crc kubenswrapper[4897]: I0228 13:37:00.465511 4897 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/41910cc3-f0b4-4e6d-9c2e-562794444c84-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:00 crc kubenswrapper[4897]: I0228 13:37:00.624876 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 28 13:37:00 crc kubenswrapper[4897]: W0228 13:37:00.627838 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode07793a7_3e98_4a8d_bfb6_3c630f07d391.slice/crio-784e444d520fc5e560d411ad260e5dd3001b434f2e02459e784a3577ab2c7a02 WatchSource:0}: Error finding container 784e444d520fc5e560d411ad260e5dd3001b434f2e02459e784a3577ab2c7a02: Status 404 returned error can't find the container with id 784e444d520fc5e560d411ad260e5dd3001b434f2e02459e784a3577ab2c7a02 Feb 28 13:37:00 crc kubenswrapper[4897]: I0228 13:37:00.773445 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gpcgs" event={"ID":"41910cc3-f0b4-4e6d-9c2e-562794444c84","Type":"ContainerDied","Data":"4983e140da39312c042d981ad977dc818bf66a2a96077fcad5c97bdb99bd0c02"} Feb 28 13:37:00 crc kubenswrapper[4897]: I0228 13:37:00.773496 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4983e140da39312c042d981ad977dc818bf66a2a96077fcad5c97bdb99bd0c02" Feb 28 13:37:00 crc kubenswrapper[4897]: I0228 13:37:00.773509 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-gpcgs" Feb 28 13:37:00 crc kubenswrapper[4897]: I0228 13:37:00.777133 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e07793a7-3e98-4a8d-bfb6-3c630f07d391","Type":"ContainerStarted","Data":"784e444d520fc5e560d411ad260e5dd3001b434f2e02459e784a3577ab2c7a02"} Feb 28 13:37:00 crc kubenswrapper[4897]: I0228 13:37:00.951393 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 28 13:37:01 crc kubenswrapper[4897]: E0228 13:37:01.273892 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 28 13:37:01 crc kubenswrapper[4897]: E0228 13:37:01.274248 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 13:37:01 crc kubenswrapper[4897]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 28 13:37:01 crc kubenswrapper[4897]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rqr7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29538096-ws9qt_openshift-infra(6e94c0b2-21a6-496c-8188-dfcaf0d66b2b): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 28 13:37:01 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 13:37:01 crc kubenswrapper[4897]: E0228 13:37:01.275431 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29538096-ws9qt" podUID="6e94c0b2-21a6-496c-8188-dfcaf0d66b2b" Feb 28 13:37:01 crc kubenswrapper[4897]: I0228 13:37:01.786377 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"e07793a7-3e98-4a8d-bfb6-3c630f07d391","Type":"ContainerStarted","Data":"fb8e12ba3db2484e5465009ebe17c233a5dc4b1be512302cdcb09bb7cd476d2e"} Feb 28 13:37:01 crc kubenswrapper[4897]: I0228 13:37:01.786686 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e07793a7-3e98-4a8d-bfb6-3c630f07d391","Type":"ContainerStarted","Data":"a36c7625509f52bd0c943a71884e4e8a5b068bff4d1387dab716888d5bea56e7"} Feb 28 13:37:01 crc kubenswrapper[4897]: I0228 13:37:01.786827 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-7ndv6"] Feb 28 13:37:01 crc kubenswrapper[4897]: E0228 13:37:01.787169 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41910cc3-f0b4-4e6d-9c2e-562794444c84" containerName="swift-ring-rebalance" Feb 28 13:37:01 crc kubenswrapper[4897]: I0228 13:37:01.787190 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="41910cc3-f0b4-4e6d-9c2e-562794444c84" containerName="swift-ring-rebalance" Feb 28 13:37:01 crc kubenswrapper[4897]: I0228 13:37:01.787364 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="41910cc3-f0b4-4e6d-9c2e-562794444c84" containerName="swift-ring-rebalance" Feb 28 13:37:01 crc kubenswrapper[4897]: I0228 13:37:01.787947 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-7ndv6" Feb 28 13:37:01 crc kubenswrapper[4897]: I0228 13:37:01.791568 4897 generic.go:334] "Generic (PLEG): container finished" podID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" containerID="c79d8a8e4035f7ad944a9279ea16c6a346d115234f441fbc2b9c154734a097d5" exitCode=0 Feb 28 13:37:01 crc kubenswrapper[4897]: I0228 13:37:01.792943 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6","Type":"ContainerDied","Data":"c79d8a8e4035f7ad944a9279ea16c6a346d115234f441fbc2b9c154734a097d5"} Feb 28 13:37:01 crc kubenswrapper[4897]: I0228 13:37:01.813914 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-7ndv6"] Feb 28 13:37:01 crc kubenswrapper[4897]: I0228 13:37:01.840756 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bfa7-account-create-update-8mjtn"] Feb 28 13:37:01 crc kubenswrapper[4897]: I0228 13:37:01.841892 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bfa7-account-create-update-8mjtn" Feb 28 13:37:01 crc kubenswrapper[4897]: I0228 13:37:01.843867 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 28 13:37:01 crc kubenswrapper[4897]: I0228 13:37:01.846951 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bfa7-account-create-update-8mjtn"] Feb 28 13:37:01 crc kubenswrapper[4897]: I0228 13:37:01.888757 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2dde6f07-ea2d-40ba-9a07-12fcc461a0ee-operator-scripts\") pod \"keystone-db-create-7ndv6\" (UID: \"2dde6f07-ea2d-40ba-9a07-12fcc461a0ee\") " pod="openstack/keystone-db-create-7ndv6" Feb 28 13:37:01 crc kubenswrapper[4897]: I0228 13:37:01.889167 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7fgn\" (UniqueName: \"kubernetes.io/projected/2dde6f07-ea2d-40ba-9a07-12fcc461a0ee-kube-api-access-j7fgn\") pod \"keystone-db-create-7ndv6\" (UID: \"2dde6f07-ea2d-40ba-9a07-12fcc461a0ee\") " pod="openstack/keystone-db-create-7ndv6" Feb 28 13:37:01 crc kubenswrapper[4897]: I0228 13:37:01.889319 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svg9c\" (UniqueName: \"kubernetes.io/projected/09ce242f-6ba6-48c7-9c41-e00c21dfb085-kube-api-access-svg9c\") pod \"keystone-bfa7-account-create-update-8mjtn\" (UID: \"09ce242f-6ba6-48c7-9c41-e00c21dfb085\") " pod="openstack/keystone-bfa7-account-create-update-8mjtn" Feb 28 13:37:01 crc kubenswrapper[4897]: I0228 13:37:01.889491 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09ce242f-6ba6-48c7-9c41-e00c21dfb085-operator-scripts\") pod 
\"keystone-bfa7-account-create-update-8mjtn\" (UID: \"09ce242f-6ba6-48c7-9c41-e00c21dfb085\") " pod="openstack/keystone-bfa7-account-create-update-8mjtn" Feb 28 13:37:01 crc kubenswrapper[4897]: I0228 13:37:01.968995 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-zqjtm"] Feb 28 13:37:01 crc kubenswrapper[4897]: I0228 13:37:01.969969 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 28 13:37:01 crc kubenswrapper[4897]: I0228 13:37:01.970068 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-zqjtm" Feb 28 13:37:01 crc kubenswrapper[4897]: I0228 13:37:01.988101 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-zqjtm"] Feb 28 13:37:01 crc kubenswrapper[4897]: I0228 13:37:01.990805 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0e4700f-f6cc-44bb-93b6-a58f1e74b0df-operator-scripts\") pod \"placement-db-create-zqjtm\" (UID: \"a0e4700f-f6cc-44bb-93b6-a58f1e74b0df\") " pod="openstack/placement-db-create-zqjtm" Feb 28 13:37:01 crc kubenswrapper[4897]: I0228 13:37:01.990882 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2dde6f07-ea2d-40ba-9a07-12fcc461a0ee-operator-scripts\") pod \"keystone-db-create-7ndv6\" (UID: \"2dde6f07-ea2d-40ba-9a07-12fcc461a0ee\") " pod="openstack/keystone-db-create-7ndv6" Feb 28 13:37:01 crc kubenswrapper[4897]: I0228 13:37:01.990939 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7fgn\" (UniqueName: \"kubernetes.io/projected/2dde6f07-ea2d-40ba-9a07-12fcc461a0ee-kube-api-access-j7fgn\") pod \"keystone-db-create-7ndv6\" (UID: \"2dde6f07-ea2d-40ba-9a07-12fcc461a0ee\") " 
pod="openstack/keystone-db-create-7ndv6" Feb 28 13:37:01 crc kubenswrapper[4897]: I0228 13:37:01.990964 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svg9c\" (UniqueName: \"kubernetes.io/projected/09ce242f-6ba6-48c7-9c41-e00c21dfb085-kube-api-access-svg9c\") pod \"keystone-bfa7-account-create-update-8mjtn\" (UID: \"09ce242f-6ba6-48c7-9c41-e00c21dfb085\") " pod="openstack/keystone-bfa7-account-create-update-8mjtn" Feb 28 13:37:01 crc kubenswrapper[4897]: I0228 13:37:01.990998 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09ce242f-6ba6-48c7-9c41-e00c21dfb085-operator-scripts\") pod \"keystone-bfa7-account-create-update-8mjtn\" (UID: \"09ce242f-6ba6-48c7-9c41-e00c21dfb085\") " pod="openstack/keystone-bfa7-account-create-update-8mjtn" Feb 28 13:37:01 crc kubenswrapper[4897]: I0228 13:37:01.991024 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xww8c\" (UniqueName: \"kubernetes.io/projected/a0e4700f-f6cc-44bb-93b6-a58f1e74b0df-kube-api-access-xww8c\") pod \"placement-db-create-zqjtm\" (UID: \"a0e4700f-f6cc-44bb-93b6-a58f1e74b0df\") " pod="openstack/placement-db-create-zqjtm" Feb 28 13:37:01 crc kubenswrapper[4897]: I0228 13:37:01.992235 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2dde6f07-ea2d-40ba-9a07-12fcc461a0ee-operator-scripts\") pod \"keystone-db-create-7ndv6\" (UID: \"2dde6f07-ea2d-40ba-9a07-12fcc461a0ee\") " pod="openstack/keystone-db-create-7ndv6" Feb 28 13:37:01 crc kubenswrapper[4897]: I0228 13:37:01.992538 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09ce242f-6ba6-48c7-9c41-e00c21dfb085-operator-scripts\") pod \"keystone-bfa7-account-create-update-8mjtn\" (UID: 
\"09ce242f-6ba6-48c7-9c41-e00c21dfb085\") " pod="openstack/keystone-bfa7-account-create-update-8mjtn" Feb 28 13:37:02 crc kubenswrapper[4897]: I0228 13:37:02.006704 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7a2e-account-create-update-dsfp4"] Feb 28 13:37:02 crc kubenswrapper[4897]: I0228 13:37:02.007737 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7a2e-account-create-update-dsfp4" Feb 28 13:37:02 crc kubenswrapper[4897]: I0228 13:37:02.009989 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 28 13:37:02 crc kubenswrapper[4897]: I0228 13:37:02.020696 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svg9c\" (UniqueName: \"kubernetes.io/projected/09ce242f-6ba6-48c7-9c41-e00c21dfb085-kube-api-access-svg9c\") pod \"keystone-bfa7-account-create-update-8mjtn\" (UID: \"09ce242f-6ba6-48c7-9c41-e00c21dfb085\") " pod="openstack/keystone-bfa7-account-create-update-8mjtn" Feb 28 13:37:02 crc kubenswrapper[4897]: I0228 13:37:02.021175 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7a2e-account-create-update-dsfp4"] Feb 28 13:37:02 crc kubenswrapper[4897]: I0228 13:37:02.023022 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7fgn\" (UniqueName: \"kubernetes.io/projected/2dde6f07-ea2d-40ba-9a07-12fcc461a0ee-kube-api-access-j7fgn\") pod \"keystone-db-create-7ndv6\" (UID: \"2dde6f07-ea2d-40ba-9a07-12fcc461a0ee\") " pod="openstack/keystone-db-create-7ndv6" Feb 28 13:37:02 crc kubenswrapper[4897]: I0228 13:37:02.096978 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0e4700f-f6cc-44bb-93b6-a58f1e74b0df-operator-scripts\") pod \"placement-db-create-zqjtm\" (UID: \"a0e4700f-f6cc-44bb-93b6-a58f1e74b0df\") " 
pod="openstack/placement-db-create-zqjtm" Feb 28 13:37:02 crc kubenswrapper[4897]: I0228 13:37:02.097072 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47a4b26d-e794-43fa-991d-55679de18394-operator-scripts\") pod \"placement-7a2e-account-create-update-dsfp4\" (UID: \"47a4b26d-e794-43fa-991d-55679de18394\") " pod="openstack/placement-7a2e-account-create-update-dsfp4" Feb 28 13:37:02 crc kubenswrapper[4897]: I0228 13:37:02.097149 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kb7f\" (UniqueName: \"kubernetes.io/projected/47a4b26d-e794-43fa-991d-55679de18394-kube-api-access-5kb7f\") pod \"placement-7a2e-account-create-update-dsfp4\" (UID: \"47a4b26d-e794-43fa-991d-55679de18394\") " pod="openstack/placement-7a2e-account-create-update-dsfp4" Feb 28 13:37:02 crc kubenswrapper[4897]: I0228 13:37:02.097246 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xww8c\" (UniqueName: \"kubernetes.io/projected/a0e4700f-f6cc-44bb-93b6-a58f1e74b0df-kube-api-access-xww8c\") pod \"placement-db-create-zqjtm\" (UID: \"a0e4700f-f6cc-44bb-93b6-a58f1e74b0df\") " pod="openstack/placement-db-create-zqjtm" Feb 28 13:37:02 crc kubenswrapper[4897]: I0228 13:37:02.098077 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0e4700f-f6cc-44bb-93b6-a58f1e74b0df-operator-scripts\") pod \"placement-db-create-zqjtm\" (UID: \"a0e4700f-f6cc-44bb-93b6-a58f1e74b0df\") " pod="openstack/placement-db-create-zqjtm" Feb 28 13:37:02 crc kubenswrapper[4897]: I0228 13:37:02.114902 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-7ndv6" Feb 28 13:37:02 crc kubenswrapper[4897]: I0228 13:37:02.115856 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xww8c\" (UniqueName: \"kubernetes.io/projected/a0e4700f-f6cc-44bb-93b6-a58f1e74b0df-kube-api-access-xww8c\") pod \"placement-db-create-zqjtm\" (UID: \"a0e4700f-f6cc-44bb-93b6-a58f1e74b0df\") " pod="openstack/placement-db-create-zqjtm" Feb 28 13:37:02 crc kubenswrapper[4897]: I0228 13:37:02.200024 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kb7f\" (UniqueName: \"kubernetes.io/projected/47a4b26d-e794-43fa-991d-55679de18394-kube-api-access-5kb7f\") pod \"placement-7a2e-account-create-update-dsfp4\" (UID: \"47a4b26d-e794-43fa-991d-55679de18394\") " pod="openstack/placement-7a2e-account-create-update-dsfp4" Feb 28 13:37:02 crc kubenswrapper[4897]: I0228 13:37:02.202468 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47a4b26d-e794-43fa-991d-55679de18394-operator-scripts\") pod \"placement-7a2e-account-create-update-dsfp4\" (UID: \"47a4b26d-e794-43fa-991d-55679de18394\") " pod="openstack/placement-7a2e-account-create-update-dsfp4" Feb 28 13:37:02 crc kubenswrapper[4897]: I0228 13:37:02.205914 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bfa7-account-create-update-8mjtn" Feb 28 13:37:02 crc kubenswrapper[4897]: I0228 13:37:02.206182 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47a4b26d-e794-43fa-991d-55679de18394-operator-scripts\") pod \"placement-7a2e-account-create-update-dsfp4\" (UID: \"47a4b26d-e794-43fa-991d-55679de18394\") " pod="openstack/placement-7a2e-account-create-update-dsfp4" Feb 28 13:37:02 crc kubenswrapper[4897]: I0228 13:37:02.226817 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kb7f\" (UniqueName: \"kubernetes.io/projected/47a4b26d-e794-43fa-991d-55679de18394-kube-api-access-5kb7f\") pod \"placement-7a2e-account-create-update-dsfp4\" (UID: \"47a4b26d-e794-43fa-991d-55679de18394\") " pod="openstack/placement-7a2e-account-create-update-dsfp4" Feb 28 13:37:02 crc kubenswrapper[4897]: I0228 13:37:02.411034 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-zqjtm" Feb 28 13:37:02 crc kubenswrapper[4897]: I0228 13:37:02.411783 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7a2e-account-create-update-dsfp4" Feb 28 13:37:02 crc kubenswrapper[4897]: I0228 13:37:02.549606 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-7ndv6"] Feb 28 13:37:02 crc kubenswrapper[4897]: I0228 13:37:02.665371 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bfa7-account-create-update-8mjtn"] Feb 28 13:37:02 crc kubenswrapper[4897]: I0228 13:37:02.804962 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e07793a7-3e98-4a8d-bfb6-3c630f07d391","Type":"ContainerStarted","Data":"27bb5aa173714a327df734f64b915a1bc222e66ced5fce46d0ea5ae5820985ad"} Feb 28 13:37:02 crc kubenswrapper[4897]: I0228 13:37:02.805005 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e07793a7-3e98-4a8d-bfb6-3c630f07d391","Type":"ContainerStarted","Data":"b1d11105fcc3189ef82cb3da8c6a6991fca7ca11cf7afaa45b243a4c1d46791c"} Feb 28 13:37:02 crc kubenswrapper[4897]: I0228 13:37:02.807675 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7ndv6" event={"ID":"2dde6f07-ea2d-40ba-9a07-12fcc461a0ee","Type":"ContainerStarted","Data":"d39c85695f56fb914f4c34d3d1d208031dce37ad7f3ffaa5b0665889f5e305fc"} Feb 28 13:37:02 crc kubenswrapper[4897]: I0228 13:37:02.898415 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-zqjtm"] Feb 28 13:37:02 crc kubenswrapper[4897]: W0228 13:37:02.916730 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09ce242f_6ba6_48c7_9c41_e00c21dfb085.slice/crio-66e930645a5b8c66084b4b0102efa390966c875691d7e5578572a906b370dbdb WatchSource:0}: Error finding container 66e930645a5b8c66084b4b0102efa390966c875691d7e5578572a906b370dbdb: Status 404 returned error can't find the container with id 
66e930645a5b8c66084b4b0102efa390966c875691d7e5578572a906b370dbdb Feb 28 13:37:02 crc kubenswrapper[4897]: W0228 13:37:02.918684 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0e4700f_f6cc_44bb_93b6_a58f1e74b0df.slice/crio-433c605598bdef4de84348ec26a435d75c2f5c71b49b0492a436c65271e2420f WatchSource:0}: Error finding container 433c605598bdef4de84348ec26a435d75c2f5c71b49b0492a436c65271e2420f: Status 404 returned error can't find the container with id 433c605598bdef4de84348ec26a435d75c2f5c71b49b0492a436c65271e2420f Feb 28 13:37:03 crc kubenswrapper[4897]: I0228 13:37:03.018396 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-sdxh6"] Feb 28 13:37:03 crc kubenswrapper[4897]: I0228 13:37:03.019703 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-sdxh6" Feb 28 13:37:03 crc kubenswrapper[4897]: I0228 13:37:03.030598 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-sdxh6"] Feb 28 13:37:03 crc kubenswrapper[4897]: I0228 13:37:03.051242 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-90a5-account-create-update-6bzw4"] Feb 28 13:37:03 crc kubenswrapper[4897]: I0228 13:37:03.052714 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-90a5-account-create-update-6bzw4" Feb 28 13:37:03 crc kubenswrapper[4897]: I0228 13:37:03.055412 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret" Feb 28 13:37:03 crc kubenswrapper[4897]: I0228 13:37:03.061176 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-90a5-account-create-update-6bzw4"] Feb 28 13:37:03 crc kubenswrapper[4897]: I0228 13:37:03.087373 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7a2e-account-create-update-dsfp4"] Feb 28 13:37:03 crc kubenswrapper[4897]: I0228 13:37:03.122594 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f47bf46-033d-4191-a8a5-45ba3bc854e4-operator-scripts\") pod \"watcher-db-create-sdxh6\" (UID: \"2f47bf46-033d-4191-a8a5-45ba3bc854e4\") " pod="openstack/watcher-db-create-sdxh6" Feb 28 13:37:03 crc kubenswrapper[4897]: I0228 13:37:03.122636 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvpjc\" (UniqueName: \"kubernetes.io/projected/2f47bf46-033d-4191-a8a5-45ba3bc854e4-kube-api-access-vvpjc\") pod \"watcher-db-create-sdxh6\" (UID: \"2f47bf46-033d-4191-a8a5-45ba3bc854e4\") " pod="openstack/watcher-db-create-sdxh6" Feb 28 13:37:03 crc kubenswrapper[4897]: I0228 13:37:03.122683 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhkm4\" (UniqueName: \"kubernetes.io/projected/fa284414-c19d-466b-a36e-6873e0e3c200-kube-api-access-mhkm4\") pod \"watcher-90a5-account-create-update-6bzw4\" (UID: \"fa284414-c19d-466b-a36e-6873e0e3c200\") " pod="openstack/watcher-90a5-account-create-update-6bzw4" Feb 28 13:37:03 crc kubenswrapper[4897]: I0228 13:37:03.122703 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa284414-c19d-466b-a36e-6873e0e3c200-operator-scripts\") pod \"watcher-90a5-account-create-update-6bzw4\" (UID: \"fa284414-c19d-466b-a36e-6873e0e3c200\") " pod="openstack/watcher-90a5-account-create-update-6bzw4" Feb 28 13:37:03 crc kubenswrapper[4897]: I0228 13:37:03.224213 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvpjc\" (UniqueName: \"kubernetes.io/projected/2f47bf46-033d-4191-a8a5-45ba3bc854e4-kube-api-access-vvpjc\") pod \"watcher-db-create-sdxh6\" (UID: \"2f47bf46-033d-4191-a8a5-45ba3bc854e4\") " pod="openstack/watcher-db-create-sdxh6" Feb 28 13:37:03 crc kubenswrapper[4897]: I0228 13:37:03.224282 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhkm4\" (UniqueName: \"kubernetes.io/projected/fa284414-c19d-466b-a36e-6873e0e3c200-kube-api-access-mhkm4\") pod \"watcher-90a5-account-create-update-6bzw4\" (UID: \"fa284414-c19d-466b-a36e-6873e0e3c200\") " pod="openstack/watcher-90a5-account-create-update-6bzw4" Feb 28 13:37:03 crc kubenswrapper[4897]: I0228 13:37:03.224317 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa284414-c19d-466b-a36e-6873e0e3c200-operator-scripts\") pod \"watcher-90a5-account-create-update-6bzw4\" (UID: \"fa284414-c19d-466b-a36e-6873e0e3c200\") " pod="openstack/watcher-90a5-account-create-update-6bzw4" Feb 28 13:37:03 crc kubenswrapper[4897]: I0228 13:37:03.224421 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f47bf46-033d-4191-a8a5-45ba3bc854e4-operator-scripts\") pod \"watcher-db-create-sdxh6\" (UID: \"2f47bf46-033d-4191-a8a5-45ba3bc854e4\") " pod="openstack/watcher-db-create-sdxh6" Feb 28 13:37:03 crc 
kubenswrapper[4897]: I0228 13:37:03.225110 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f47bf46-033d-4191-a8a5-45ba3bc854e4-operator-scripts\") pod \"watcher-db-create-sdxh6\" (UID: \"2f47bf46-033d-4191-a8a5-45ba3bc854e4\") " pod="openstack/watcher-db-create-sdxh6" Feb 28 13:37:03 crc kubenswrapper[4897]: I0228 13:37:03.225973 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa284414-c19d-466b-a36e-6873e0e3c200-operator-scripts\") pod \"watcher-90a5-account-create-update-6bzw4\" (UID: \"fa284414-c19d-466b-a36e-6873e0e3c200\") " pod="openstack/watcher-90a5-account-create-update-6bzw4" Feb 28 13:37:03 crc kubenswrapper[4897]: I0228 13:37:03.250495 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhkm4\" (UniqueName: \"kubernetes.io/projected/fa284414-c19d-466b-a36e-6873e0e3c200-kube-api-access-mhkm4\") pod \"watcher-90a5-account-create-update-6bzw4\" (UID: \"fa284414-c19d-466b-a36e-6873e0e3c200\") " pod="openstack/watcher-90a5-account-create-update-6bzw4" Feb 28 13:37:03 crc kubenswrapper[4897]: I0228 13:37:03.250919 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvpjc\" (UniqueName: \"kubernetes.io/projected/2f47bf46-033d-4191-a8a5-45ba3bc854e4-kube-api-access-vvpjc\") pod \"watcher-db-create-sdxh6\" (UID: \"2f47bf46-033d-4191-a8a5-45ba3bc854e4\") " pod="openstack/watcher-db-create-sdxh6" Feb 28 13:37:03 crc kubenswrapper[4897]: I0228 13:37:03.399711 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-sdxh6" Feb 28 13:37:03 crc kubenswrapper[4897]: I0228 13:37:03.406691 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-90a5-account-create-update-6bzw4" Feb 28 13:37:03 crc kubenswrapper[4897]: I0228 13:37:03.820593 4897 generic.go:334] "Generic (PLEG): container finished" podID="09ce242f-6ba6-48c7-9c41-e00c21dfb085" containerID="8a1c6ca9133cd43b4cd58f386b33a8fd2276468706580142c148e7b3d3b6d5b3" exitCode=0 Feb 28 13:37:03 crc kubenswrapper[4897]: I0228 13:37:03.820824 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bfa7-account-create-update-8mjtn" event={"ID":"09ce242f-6ba6-48c7-9c41-e00c21dfb085","Type":"ContainerDied","Data":"8a1c6ca9133cd43b4cd58f386b33a8fd2276468706580142c148e7b3d3b6d5b3"} Feb 28 13:37:03 crc kubenswrapper[4897]: I0228 13:37:03.821443 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bfa7-account-create-update-8mjtn" event={"ID":"09ce242f-6ba6-48c7-9c41-e00c21dfb085","Type":"ContainerStarted","Data":"66e930645a5b8c66084b4b0102efa390966c875691d7e5578572a906b370dbdb"} Feb 28 13:37:03 crc kubenswrapper[4897]: I0228 13:37:03.823905 4897 generic.go:334] "Generic (PLEG): container finished" podID="5a792d6c-3a28-4775-87bf-b099ea550a00" containerID="fe8050bd404884f66eddbd6adbe7f7bd94e5332f6f5879701dcd60a3e7709119" exitCode=0 Feb 28 13:37:03 crc kubenswrapper[4897]: I0228 13:37:03.823972 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5a792d6c-3a28-4775-87bf-b099ea550a00","Type":"ContainerDied","Data":"fe8050bd404884f66eddbd6adbe7f7bd94e5332f6f5879701dcd60a3e7709119"} Feb 28 13:37:03 crc kubenswrapper[4897]: I0228 13:37:03.826495 4897 generic.go:334] "Generic (PLEG): container finished" podID="48885530-3df1-42cf-9c7f-2f86a21026a9" containerID="bbc6fa8ee8d3ac80c3c92093f967d5892e5318137535da47510c94b737137581" exitCode=0 Feb 28 13:37:03 crc kubenswrapper[4897]: I0228 13:37:03.826558 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" 
event={"ID":"48885530-3df1-42cf-9c7f-2f86a21026a9","Type":"ContainerDied","Data":"bbc6fa8ee8d3ac80c3c92093f967d5892e5318137535da47510c94b737137581"} Feb 28 13:37:03 crc kubenswrapper[4897]: I0228 13:37:03.837750 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e07793a7-3e98-4a8d-bfb6-3c630f07d391","Type":"ContainerStarted","Data":"c95fdad3b53853aedeca32ccdb55f79c94a3501c2fc698533dd790af790d1300"} Feb 28 13:37:03 crc kubenswrapper[4897]: I0228 13:37:03.837797 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e07793a7-3e98-4a8d-bfb6-3c630f07d391","Type":"ContainerStarted","Data":"8057038ec1c74a4c24dcb8eb0a9a23614dcac7ff4894bdbf7ee05c95d92918c9"} Feb 28 13:37:03 crc kubenswrapper[4897]: I0228 13:37:03.837809 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e07793a7-3e98-4a8d-bfb6-3c630f07d391","Type":"ContainerStarted","Data":"9abb1cd70136cab8a3187a362d51de8883b17bdfdb51ee8da221a78240599d6c"} Feb 28 13:37:03 crc kubenswrapper[4897]: I0228 13:37:03.850868 4897 generic.go:334] "Generic (PLEG): container finished" podID="2dde6f07-ea2d-40ba-9a07-12fcc461a0ee" containerID="3d6e1fd1e3aa83214803795990dfb91e73e93414fd0c78f76546a886bffe3650" exitCode=0 Feb 28 13:37:03 crc kubenswrapper[4897]: I0228 13:37:03.851027 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7ndv6" event={"ID":"2dde6f07-ea2d-40ba-9a07-12fcc461a0ee","Type":"ContainerDied","Data":"3d6e1fd1e3aa83214803795990dfb91e73e93414fd0c78f76546a886bffe3650"} Feb 28 13:37:03 crc kubenswrapper[4897]: I0228 13:37:03.868924 4897 generic.go:334] "Generic (PLEG): container finished" podID="47a4b26d-e794-43fa-991d-55679de18394" containerID="1e6d8dcf42007574e0c00f378d7ed248461634e59db3d46aa4f6565e590372f0" exitCode=0 Feb 28 13:37:03 crc kubenswrapper[4897]: I0228 13:37:03.869034 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-7a2e-account-create-update-dsfp4" event={"ID":"47a4b26d-e794-43fa-991d-55679de18394","Type":"ContainerDied","Data":"1e6d8dcf42007574e0c00f378d7ed248461634e59db3d46aa4f6565e590372f0"} Feb 28 13:37:03 crc kubenswrapper[4897]: I0228 13:37:03.869069 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7a2e-account-create-update-dsfp4" event={"ID":"47a4b26d-e794-43fa-991d-55679de18394","Type":"ContainerStarted","Data":"f530ac606342ce77756d4c4c838915277372a51a002fe110cc10f4697abfd0c7"} Feb 28 13:37:03 crc kubenswrapper[4897]: I0228 13:37:03.872208 4897 generic.go:334] "Generic (PLEG): container finished" podID="a0e4700f-f6cc-44bb-93b6-a58f1e74b0df" containerID="8e7f47d41ff2ce80d174e90b3f7e1a3208c77732bb9a7483eb927314057697c1" exitCode=0 Feb 28 13:37:03 crc kubenswrapper[4897]: I0228 13:37:03.872291 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zqjtm" event={"ID":"a0e4700f-f6cc-44bb-93b6-a58f1e74b0df","Type":"ContainerDied","Data":"8e7f47d41ff2ce80d174e90b3f7e1a3208c77732bb9a7483eb927314057697c1"} Feb 28 13:37:03 crc kubenswrapper[4897]: I0228 13:37:03.872555 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zqjtm" event={"ID":"a0e4700f-f6cc-44bb-93b6-a58f1e74b0df","Type":"ContainerStarted","Data":"433c605598bdef4de84348ec26a435d75c2f5c71b49b0492a436c65271e2420f"} Feb 28 13:37:03 crc kubenswrapper[4897]: I0228 13:37:03.908767 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-sdxh6"] Feb 28 13:37:03 crc kubenswrapper[4897]: I0228 13:37:03.922122 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-90a5-account-create-update-6bzw4"] Feb 28 13:37:04 crc kubenswrapper[4897]: I0228 13:37:04.888542 4897 generic.go:334] "Generic (PLEG): container finished" podID="6bf46d42-2d7e-410d-8a74-1ce12bb280b2" containerID="4e25f72a41edbd1b43773b05d08492b582421f5b717fe5a90ecfa8d2cb7b0d38" 
exitCode=0 Feb 28 13:37:04 crc kubenswrapper[4897]: I0228 13:37:04.888861 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6bf46d42-2d7e-410d-8a74-1ce12bb280b2","Type":"ContainerDied","Data":"4e25f72a41edbd1b43773b05d08492b582421f5b717fe5a90ecfa8d2cb7b0d38"} Feb 28 13:37:04 crc kubenswrapper[4897]: I0228 13:37:04.895572 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e07793a7-3e98-4a8d-bfb6-3c630f07d391","Type":"ContainerStarted","Data":"cc67502d9d262fdf553138dcd0dc90e91b9e122cad689c578ef09fed2f32da1f"} Feb 28 13:37:04 crc kubenswrapper[4897]: I0228 13:37:04.897746 4897 generic.go:334] "Generic (PLEG): container finished" podID="2f47bf46-033d-4191-a8a5-45ba3bc854e4" containerID="eaf6b36b47c230f9601ab79a463ca9e43223cea6201564669a3581ebfb31f9f2" exitCode=0 Feb 28 13:37:04 crc kubenswrapper[4897]: I0228 13:37:04.897862 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-sdxh6" event={"ID":"2f47bf46-033d-4191-a8a5-45ba3bc854e4","Type":"ContainerDied","Data":"eaf6b36b47c230f9601ab79a463ca9e43223cea6201564669a3581ebfb31f9f2"} Feb 28 13:37:04 crc kubenswrapper[4897]: I0228 13:37:04.897913 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-sdxh6" event={"ID":"2f47bf46-033d-4191-a8a5-45ba3bc854e4","Type":"ContainerStarted","Data":"5b2e84f888157546a6f903a78c4d21277aef8782735b8f03a4d2f91e872353c7"} Feb 28 13:37:04 crc kubenswrapper[4897]: I0228 13:37:04.899895 4897 generic.go:334] "Generic (PLEG): container finished" podID="fa284414-c19d-466b-a36e-6873e0e3c200" containerID="fdcd573f4b85fd76dcd6c1196e79a568976e7ef69af7e559867f25fe6d4e79e9" exitCode=0 Feb 28 13:37:04 crc kubenswrapper[4897]: I0228 13:37:04.899982 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-90a5-account-create-update-6bzw4" 
event={"ID":"fa284414-c19d-466b-a36e-6873e0e3c200","Type":"ContainerDied","Data":"fdcd573f4b85fd76dcd6c1196e79a568976e7ef69af7e559867f25fe6d4e79e9"} Feb 28 13:37:04 crc kubenswrapper[4897]: I0228 13:37:04.900025 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-90a5-account-create-update-6bzw4" event={"ID":"fa284414-c19d-466b-a36e-6873e0e3c200","Type":"ContainerStarted","Data":"3cbb8d46e6bf58e3c7b5eb7e1d766a02c3cccde7e1cdf8a1f9e02f06cc5c09e4"} Feb 28 13:37:04 crc kubenswrapper[4897]: I0228 13:37:04.902515 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5a792d6c-3a28-4775-87bf-b099ea550a00","Type":"ContainerStarted","Data":"7088677dc187a4128ab508e2ee1e1b9ad4c18d0a82798cb7cfcb8392f0127126"} Feb 28 13:37:04 crc kubenswrapper[4897]: I0228 13:37:04.902750 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 28 13:37:04 crc kubenswrapper[4897]: I0228 13:37:04.905080 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"48885530-3df1-42cf-9c7f-2f86a21026a9","Type":"ContainerStarted","Data":"b1c2fcbcca26c4c9c4aa5cd2b8dcc2b48c06b1e2739ceab364e79ef8c1c9edf5"} Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.003823 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=55.044337673 podStartE2EDuration="1m10.003802585s" podCreationTimestamp="2026-02-28 13:35:55 +0000 UTC" firstStartedPulling="2026-02-28 13:36:14.340590294 +0000 UTC m=+1188.582910951" lastFinishedPulling="2026-02-28 13:36:29.300055216 +0000 UTC m=+1203.542375863" observedRunningTime="2026-02-28 13:37:04.999356528 +0000 UTC m=+1239.241677185" watchObservedRunningTime="2026-02-28 13:37:05.003802585 +0000 UTC m=+1239.246123242" Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.039568 4897 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/notifications-rabbitmq-server-0" podStartSLOduration=53.896876144 podStartE2EDuration="1m9.03955006s" podCreationTimestamp="2026-02-28 13:35:56 +0000 UTC" firstStartedPulling="2026-02-28 13:36:14.15704449 +0000 UTC m=+1188.399365147" lastFinishedPulling="2026-02-28 13:36:29.299718406 +0000 UTC m=+1203.542039063" observedRunningTime="2026-02-28 13:37:05.035245438 +0000 UTC m=+1239.277566105" watchObservedRunningTime="2026-02-28 13:37:05.03955006 +0000 UTC m=+1239.281870717" Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.322109 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bfa7-account-create-update-8mjtn" Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.462736 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svg9c\" (UniqueName: \"kubernetes.io/projected/09ce242f-6ba6-48c7-9c41-e00c21dfb085-kube-api-access-svg9c\") pod \"09ce242f-6ba6-48c7-9c41-e00c21dfb085\" (UID: \"09ce242f-6ba6-48c7-9c41-e00c21dfb085\") " Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.462909 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09ce242f-6ba6-48c7-9c41-e00c21dfb085-operator-scripts\") pod \"09ce242f-6ba6-48c7-9c41-e00c21dfb085\" (UID: \"09ce242f-6ba6-48c7-9c41-e00c21dfb085\") " Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.464035 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ce242f-6ba6-48c7-9c41-e00c21dfb085-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "09ce242f-6ba6-48c7-9c41-e00c21dfb085" (UID: "09ce242f-6ba6-48c7-9c41-e00c21dfb085"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.467470 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ce242f-6ba6-48c7-9c41-e00c21dfb085-kube-api-access-svg9c" (OuterVolumeSpecName: "kube-api-access-svg9c") pod "09ce242f-6ba6-48c7-9c41-e00c21dfb085" (UID: "09ce242f-6ba6-48c7-9c41-e00c21dfb085"). InnerVolumeSpecName "kube-api-access-svg9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.490084 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-zqjtm" Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.516574 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7a2e-account-create-update-dsfp4" Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.554216 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-7ndv6" Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.565104 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0e4700f-f6cc-44bb-93b6-a58f1e74b0df-operator-scripts\") pod \"a0e4700f-f6cc-44bb-93b6-a58f1e74b0df\" (UID: \"a0e4700f-f6cc-44bb-93b6-a58f1e74b0df\") " Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.565279 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xww8c\" (UniqueName: \"kubernetes.io/projected/a0e4700f-f6cc-44bb-93b6-a58f1e74b0df-kube-api-access-xww8c\") pod \"a0e4700f-f6cc-44bb-93b6-a58f1e74b0df\" (UID: \"a0e4700f-f6cc-44bb-93b6-a58f1e74b0df\") " Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.565626 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09ce242f-6ba6-48c7-9c41-e00c21dfb085-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.565638 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svg9c\" (UniqueName: \"kubernetes.io/projected/09ce242f-6ba6-48c7-9c41-e00c21dfb085-kube-api-access-svg9c\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.566567 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0e4700f-f6cc-44bb-93b6-a58f1e74b0df-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a0e4700f-f6cc-44bb-93b6-a58f1e74b0df" (UID: "a0e4700f-f6cc-44bb-93b6-a58f1e74b0df"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.585509 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0e4700f-f6cc-44bb-93b6-a58f1e74b0df-kube-api-access-xww8c" (OuterVolumeSpecName: "kube-api-access-xww8c") pod "a0e4700f-f6cc-44bb-93b6-a58f1e74b0df" (UID: "a0e4700f-f6cc-44bb-93b6-a58f1e74b0df"). InnerVolumeSpecName "kube-api-access-xww8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.666935 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7fgn\" (UniqueName: \"kubernetes.io/projected/2dde6f07-ea2d-40ba-9a07-12fcc461a0ee-kube-api-access-j7fgn\") pod \"2dde6f07-ea2d-40ba-9a07-12fcc461a0ee\" (UID: \"2dde6f07-ea2d-40ba-9a07-12fcc461a0ee\") " Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.667027 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2dde6f07-ea2d-40ba-9a07-12fcc461a0ee-operator-scripts\") pod \"2dde6f07-ea2d-40ba-9a07-12fcc461a0ee\" (UID: \"2dde6f07-ea2d-40ba-9a07-12fcc461a0ee\") " Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.667083 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kb7f\" (UniqueName: \"kubernetes.io/projected/47a4b26d-e794-43fa-991d-55679de18394-kube-api-access-5kb7f\") pod \"47a4b26d-e794-43fa-991d-55679de18394\" (UID: \"47a4b26d-e794-43fa-991d-55679de18394\") " Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.667116 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47a4b26d-e794-43fa-991d-55679de18394-operator-scripts\") pod \"47a4b26d-e794-43fa-991d-55679de18394\" (UID: \"47a4b26d-e794-43fa-991d-55679de18394\") " Feb 28 13:37:05 crc 
kubenswrapper[4897]: I0228 13:37:05.667547 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xww8c\" (UniqueName: \"kubernetes.io/projected/a0e4700f-f6cc-44bb-93b6-a58f1e74b0df-kube-api-access-xww8c\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.667562 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0e4700f-f6cc-44bb-93b6-a58f1e74b0df-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.668880 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47a4b26d-e794-43fa-991d-55679de18394-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "47a4b26d-e794-43fa-991d-55679de18394" (UID: "47a4b26d-e794-43fa-991d-55679de18394"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.670192 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dde6f07-ea2d-40ba-9a07-12fcc461a0ee-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2dde6f07-ea2d-40ba-9a07-12fcc461a0ee" (UID: "2dde6f07-ea2d-40ba-9a07-12fcc461a0ee"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.673732 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47a4b26d-e794-43fa-991d-55679de18394-kube-api-access-5kb7f" (OuterVolumeSpecName: "kube-api-access-5kb7f") pod "47a4b26d-e794-43fa-991d-55679de18394" (UID: "47a4b26d-e794-43fa-991d-55679de18394"). InnerVolumeSpecName "kube-api-access-5kb7f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.673800 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dde6f07-ea2d-40ba-9a07-12fcc461a0ee-kube-api-access-j7fgn" (OuterVolumeSpecName: "kube-api-access-j7fgn") pod "2dde6f07-ea2d-40ba-9a07-12fcc461a0ee" (UID: "2dde6f07-ea2d-40ba-9a07-12fcc461a0ee"). InnerVolumeSpecName "kube-api-access-j7fgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.769670 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7fgn\" (UniqueName: \"kubernetes.io/projected/2dde6f07-ea2d-40ba-9a07-12fcc461a0ee-kube-api-access-j7fgn\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.769706 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2dde6f07-ea2d-40ba-9a07-12fcc461a0ee-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.769717 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kb7f\" (UniqueName: \"kubernetes.io/projected/47a4b26d-e794-43fa-991d-55679de18394-kube-api-access-5kb7f\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.769726 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47a4b26d-e794-43fa-991d-55679de18394-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.885967 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-6wrh6"] Feb 28 13:37:05 crc kubenswrapper[4897]: E0228 13:37:05.886483 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47a4b26d-e794-43fa-991d-55679de18394" 
containerName="mariadb-account-create-update" Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.886506 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="47a4b26d-e794-43fa-991d-55679de18394" containerName="mariadb-account-create-update" Feb 28 13:37:05 crc kubenswrapper[4897]: E0228 13:37:05.886548 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0e4700f-f6cc-44bb-93b6-a58f1e74b0df" containerName="mariadb-database-create" Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.886558 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0e4700f-f6cc-44bb-93b6-a58f1e74b0df" containerName="mariadb-database-create" Feb 28 13:37:05 crc kubenswrapper[4897]: E0228 13:37:05.886578 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dde6f07-ea2d-40ba-9a07-12fcc461a0ee" containerName="mariadb-database-create" Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.886586 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dde6f07-ea2d-40ba-9a07-12fcc461a0ee" containerName="mariadb-database-create" Feb 28 13:37:05 crc kubenswrapper[4897]: E0228 13:37:05.886606 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09ce242f-6ba6-48c7-9c41-e00c21dfb085" containerName="mariadb-account-create-update" Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.886616 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="09ce242f-6ba6-48c7-9c41-e00c21dfb085" containerName="mariadb-account-create-update" Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.886838 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dde6f07-ea2d-40ba-9a07-12fcc461a0ee" containerName="mariadb-database-create" Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.886865 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0e4700f-f6cc-44bb-93b6-a58f1e74b0df" containerName="mariadb-database-create" Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.886878 4897 
memory_manager.go:354] "RemoveStaleState removing state" podUID="09ce242f-6ba6-48c7-9c41-e00c21dfb085" containerName="mariadb-account-create-update" Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.886894 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="47a4b26d-e794-43fa-991d-55679de18394" containerName="mariadb-account-create-update" Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.887631 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-6wrh6" Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.899211 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-6wrh6"] Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.929075 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6bf46d42-2d7e-410d-8a74-1ce12bb280b2","Type":"ContainerStarted","Data":"5a48d58771b6ebaaefeaac2908a8631795907b9109de8da771b2087fa08dc7a5"} Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.929880 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.936904 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e07793a7-3e98-4a8d-bfb6-3c630f07d391","Type":"ContainerStarted","Data":"3912e980974107f87ce171584b080479e52fe0991420d8aba0e5c792c477f23b"} Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.936950 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e07793a7-3e98-4a8d-bfb6-3c630f07d391","Type":"ContainerStarted","Data":"f3f13834d792024e6d089342ea5a3cfe46105320c62b7dcba0e370f820973aca"} Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.936965 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"e07793a7-3e98-4a8d-bfb6-3c630f07d391","Type":"ContainerStarted","Data":"ab1cdc8b0d1f12f70eefeb1d0c83f3e7619fd1c3a208812f4425ad80945d6730"} Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.936977 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e07793a7-3e98-4a8d-bfb6-3c630f07d391","Type":"ContainerStarted","Data":"e996b2cdd118946e70f89ff75ac65e7e137706e092a1e8a3ea0d796580820f61"} Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.938677 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-7ndv6" Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.938664 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7ndv6" event={"ID":"2dde6f07-ea2d-40ba-9a07-12fcc461a0ee","Type":"ContainerDied","Data":"d39c85695f56fb914f4c34d3d1d208031dce37ad7f3ffaa5b0665889f5e305fc"} Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.938819 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d39c85695f56fb914f4c34d3d1d208031dce37ad7f3ffaa5b0665889f5e305fc" Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.939849 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7a2e-account-create-update-dsfp4" event={"ID":"47a4b26d-e794-43fa-991d-55679de18394","Type":"ContainerDied","Data":"f530ac606342ce77756d4c4c838915277372a51a002fe110cc10f4697abfd0c7"} Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.939871 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f530ac606342ce77756d4c4c838915277372a51a002fe110cc10f4697abfd0c7" Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.939927 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7a2e-account-create-update-dsfp4" Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.943066 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-zqjtm" Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.943063 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zqjtm" event={"ID":"a0e4700f-f6cc-44bb-93b6-a58f1e74b0df","Type":"ContainerDied","Data":"433c605598bdef4de84348ec26a435d75c2f5c71b49b0492a436c65271e2420f"} Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.943183 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="433c605598bdef4de84348ec26a435d75c2f5c71b49b0492a436c65271e2420f" Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.944965 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bfa7-account-create-update-8mjtn" event={"ID":"09ce242f-6ba6-48c7-9c41-e00c21dfb085","Type":"ContainerDied","Data":"66e930645a5b8c66084b4b0102efa390966c875691d7e5578572a906b370dbdb"} Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.945021 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bfa7-account-create-update-8mjtn" Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.945025 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66e930645a5b8c66084b4b0102efa390966c875691d7e5578572a906b370dbdb" Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.978906 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4546a11d-5dfc-4055-9b4e-56838508d1fe-operator-scripts\") pod \"glance-db-create-6wrh6\" (UID: \"4546a11d-5dfc-4055-9b4e-56838508d1fe\") " pod="openstack/glance-db-create-6wrh6" Feb 28 13:37:05 crc kubenswrapper[4897]: I0228 13:37:05.979012 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w6s6\" (UniqueName: \"kubernetes.io/projected/4546a11d-5dfc-4055-9b4e-56838508d1fe-kube-api-access-7w6s6\") pod \"glance-db-create-6wrh6\" (UID: \"4546a11d-5dfc-4055-9b4e-56838508d1fe\") " pod="openstack/glance-db-create-6wrh6" Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 13:37:06.008278 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=55.953303594 podStartE2EDuration="1m11.008258528s" podCreationTimestamp="2026-02-28 13:35:55 +0000 UTC" firstStartedPulling="2026-02-28 13:36:14.159466639 +0000 UTC m=+1188.401787296" lastFinishedPulling="2026-02-28 13:36:29.214421563 +0000 UTC m=+1203.456742230" observedRunningTime="2026-02-28 13:37:05.988020903 +0000 UTC m=+1240.230341560" watchObservedRunningTime="2026-02-28 13:37:06.008258528 +0000 UTC m=+1240.250579185" Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 13:37:06.031195 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-d97d-account-create-update-7w7s6"] Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 13:37:06.032694 4897 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/glance-d97d-account-create-update-7w7s6" Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 13:37:06.040341 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 13:37:06.041955 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-d97d-account-create-update-7w7s6"] Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 13:37:06.080177 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4546a11d-5dfc-4055-9b4e-56838508d1fe-operator-scripts\") pod \"glance-db-create-6wrh6\" (UID: \"4546a11d-5dfc-4055-9b4e-56838508d1fe\") " pod="openstack/glance-db-create-6wrh6" Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 13:37:06.081062 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4546a11d-5dfc-4055-9b4e-56838508d1fe-operator-scripts\") pod \"glance-db-create-6wrh6\" (UID: \"4546a11d-5dfc-4055-9b4e-56838508d1fe\") " pod="openstack/glance-db-create-6wrh6" Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 13:37:06.081275 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w6s6\" (UniqueName: \"kubernetes.io/projected/4546a11d-5dfc-4055-9b4e-56838508d1fe-kube-api-access-7w6s6\") pod \"glance-db-create-6wrh6\" (UID: \"4546a11d-5dfc-4055-9b4e-56838508d1fe\") " pod="openstack/glance-db-create-6wrh6" Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 13:37:06.101945 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w6s6\" (UniqueName: \"kubernetes.io/projected/4546a11d-5dfc-4055-9b4e-56838508d1fe-kube-api-access-7w6s6\") pod \"glance-db-create-6wrh6\" (UID: \"4546a11d-5dfc-4055-9b4e-56838508d1fe\") " pod="openstack/glance-db-create-6wrh6" Feb 28 13:37:06 crc 
kubenswrapper[4897]: I0228 13:37:06.183064 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82d4936d-bd8d-426b-9799-ac02f672fe1a-operator-scripts\") pod \"glance-d97d-account-create-update-7w7s6\" (UID: \"82d4936d-bd8d-426b-9799-ac02f672fe1a\") " pod="openstack/glance-d97d-account-create-update-7w7s6" Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 13:37:06.183618 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6ng6\" (UniqueName: \"kubernetes.io/projected/82d4936d-bd8d-426b-9799-ac02f672fe1a-kube-api-access-b6ng6\") pod \"glance-d97d-account-create-update-7w7s6\" (UID: \"82d4936d-bd8d-426b-9799-ac02f672fe1a\") " pod="openstack/glance-d97d-account-create-update-7w7s6" Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 13:37:06.209773 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-6wrh6" Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 13:37:06.286742 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6ng6\" (UniqueName: \"kubernetes.io/projected/82d4936d-bd8d-426b-9799-ac02f672fe1a-kube-api-access-b6ng6\") pod \"glance-d97d-account-create-update-7w7s6\" (UID: \"82d4936d-bd8d-426b-9799-ac02f672fe1a\") " pod="openstack/glance-d97d-account-create-update-7w7s6" Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 13:37:06.286867 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82d4936d-bd8d-426b-9799-ac02f672fe1a-operator-scripts\") pod \"glance-d97d-account-create-update-7w7s6\" (UID: \"82d4936d-bd8d-426b-9799-ac02f672fe1a\") " pod="openstack/glance-d97d-account-create-update-7w7s6" Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 13:37:06.287700 4897 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82d4936d-bd8d-426b-9799-ac02f672fe1a-operator-scripts\") pod \"glance-d97d-account-create-update-7w7s6\" (UID: \"82d4936d-bd8d-426b-9799-ac02f672fe1a\") " pod="openstack/glance-d97d-account-create-update-7w7s6" Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 13:37:06.318463 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6ng6\" (UniqueName: \"kubernetes.io/projected/82d4936d-bd8d-426b-9799-ac02f672fe1a-kube-api-access-b6ng6\") pod \"glance-d97d-account-create-update-7w7s6\" (UID: \"82d4936d-bd8d-426b-9799-ac02f672fe1a\") " pod="openstack/glance-d97d-account-create-update-7w7s6" Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 13:37:06.458561 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d97d-account-create-update-7w7s6" Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 13:37:06.477557 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-90a5-account-create-update-6bzw4" Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 13:37:06.534422 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-sdxh6" Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 13:37:06.592949 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa284414-c19d-466b-a36e-6873e0e3c200-operator-scripts\") pod \"fa284414-c19d-466b-a36e-6873e0e3c200\" (UID: \"fa284414-c19d-466b-a36e-6873e0e3c200\") " Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 13:37:06.593056 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f47bf46-033d-4191-a8a5-45ba3bc854e4-operator-scripts\") pod \"2f47bf46-033d-4191-a8a5-45ba3bc854e4\" (UID: \"2f47bf46-033d-4191-a8a5-45ba3bc854e4\") " Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 13:37:06.593180 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhkm4\" (UniqueName: \"kubernetes.io/projected/fa284414-c19d-466b-a36e-6873e0e3c200-kube-api-access-mhkm4\") pod \"fa284414-c19d-466b-a36e-6873e0e3c200\" (UID: \"fa284414-c19d-466b-a36e-6873e0e3c200\") " Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 13:37:06.593217 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvpjc\" (UniqueName: \"kubernetes.io/projected/2f47bf46-033d-4191-a8a5-45ba3bc854e4-kube-api-access-vvpjc\") pod \"2f47bf46-033d-4191-a8a5-45ba3bc854e4\" (UID: \"2f47bf46-033d-4191-a8a5-45ba3bc854e4\") " Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 13:37:06.593694 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f47bf46-033d-4191-a8a5-45ba3bc854e4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2f47bf46-033d-4191-a8a5-45ba3bc854e4" (UID: "2f47bf46-033d-4191-a8a5-45ba3bc854e4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 13:37:06.593790 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f47bf46-033d-4191-a8a5-45ba3bc854e4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 13:37:06.596389 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa284414-c19d-466b-a36e-6873e0e3c200-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fa284414-c19d-466b-a36e-6873e0e3c200" (UID: "fa284414-c19d-466b-a36e-6873e0e3c200"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 13:37:06.604429 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa284414-c19d-466b-a36e-6873e0e3c200-kube-api-access-mhkm4" (OuterVolumeSpecName: "kube-api-access-mhkm4") pod "fa284414-c19d-466b-a36e-6873e0e3c200" (UID: "fa284414-c19d-466b-a36e-6873e0e3c200"). InnerVolumeSpecName "kube-api-access-mhkm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 13:37:06.604487 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f47bf46-033d-4191-a8a5-45ba3bc854e4-kube-api-access-vvpjc" (OuterVolumeSpecName: "kube-api-access-vvpjc") pod "2f47bf46-033d-4191-a8a5-45ba3bc854e4" (UID: "2f47bf46-033d-4191-a8a5-45ba3bc854e4"). InnerVolumeSpecName "kube-api-access-vvpjc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 13:37:06.695374 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa284414-c19d-466b-a36e-6873e0e3c200-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 13:37:06.695571 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhkm4\" (UniqueName: \"kubernetes.io/projected/fa284414-c19d-466b-a36e-6873e0e3c200-kube-api-access-mhkm4\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 13:37:06.695583 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvpjc\" (UniqueName: \"kubernetes.io/projected/2f47bf46-033d-4191-a8a5-45ba3bc854e4-kube-api-access-vvpjc\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 13:37:06.768410 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-6wrh6"] Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 13:37:06.952438 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-sdxh6" event={"ID":"2f47bf46-033d-4191-a8a5-45ba3bc854e4","Type":"ContainerDied","Data":"5b2e84f888157546a6f903a78c4d21277aef8782735b8f03a4d2f91e872353c7"} Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 13:37:06.952480 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-sdxh6" Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 13:37:06.952481 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b2e84f888157546a6f903a78c4d21277aef8782735b8f03a4d2f91e872353c7" Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 13:37:06.953897 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-90a5-account-create-update-6bzw4" event={"ID":"fa284414-c19d-466b-a36e-6873e0e3c200","Type":"ContainerDied","Data":"3cbb8d46e6bf58e3c7b5eb7e1d766a02c3cccde7e1cdf8a1f9e02f06cc5c09e4"} Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 13:37:06.953927 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cbb8d46e6bf58e3c7b5eb7e1d766a02c3cccde7e1cdf8a1f9e02f06cc5c09e4" Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 13:37:06.953984 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-90a5-account-create-update-6bzw4" Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 13:37:06.961433 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-6wrh6" event={"ID":"4546a11d-5dfc-4055-9b4e-56838508d1fe","Type":"ContainerStarted","Data":"274c33b5a9c838c934dc6102615a32c67ab5aacf47b191840e62573a571f7cb9"} Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 13:37:06.961465 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-6wrh6" event={"ID":"4546a11d-5dfc-4055-9b4e-56838508d1fe","Type":"ContainerStarted","Data":"7dc133793a295edfa9a3eacd7975affd8b499361d8982d0f535262f398613189"} Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 13:37:06.977710 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e07793a7-3e98-4a8d-bfb6-3c630f07d391","Type":"ContainerStarted","Data":"05f4838e1937e333024bd2703c88b8f28355f97e69c1260abece607601e77208"} Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 
13:37:06.977747 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e07793a7-3e98-4a8d-bfb6-3c630f07d391","Type":"ContainerStarted","Data":"8791114b9ca566fde06e1d3df8dfe5b80c64d032f64492e19eb216cea3544e70"} Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 13:37:06.977760 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e07793a7-3e98-4a8d-bfb6-3c630f07d391","Type":"ContainerStarted","Data":"c56fde837402678243d13d439962c032a97b039880d4972c2a731f952d12e9b0"} Feb 28 13:37:06 crc kubenswrapper[4897]: I0228 13:37:06.981993 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-6wrh6" podStartSLOduration=1.981975506 podStartE2EDuration="1.981975506s" podCreationTimestamp="2026-02-28 13:37:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:37:06.979816785 +0000 UTC m=+1241.222137452" watchObservedRunningTime="2026-02-28 13:37:06.981975506 +0000 UTC m=+1241.224296163" Feb 28 13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.036892 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=20.653834641 podStartE2EDuration="25.036862736s" podCreationTimestamp="2026-02-28 13:36:42 +0000 UTC" firstStartedPulling="2026-02-28 13:37:00.630468674 +0000 UTC m=+1234.872789332" lastFinishedPulling="2026-02-28 13:37:05.01349677 +0000 UTC m=+1239.255817427" observedRunningTime="2026-02-28 13:37:07.025424201 +0000 UTC m=+1241.267744858" watchObservedRunningTime="2026-02-28 13:37:07.036862736 +0000 UTC m=+1241.279183393" Feb 28 13:37:07 crc kubenswrapper[4897]: W0228 13:37:07.050595 4897 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82d4936d_bd8d_426b_9799_ac02f672fe1a.slice/crio-775696cea62812723978b397d355d94d4526ffeec6246aadd9ba9b60d12dadfa WatchSource:0}: Error finding container 775696cea62812723978b397d355d94d4526ffeec6246aadd9ba9b60d12dadfa: Status 404 returned error can't find the container with id 775696cea62812723978b397d355d94d4526ffeec6246aadd9ba9b60d12dadfa Feb 28 13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.073358 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-d97d-account-create-update-7w7s6"] Feb 28 13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.379416 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/notifications-rabbitmq-server-0" Feb 28 13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.420832 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57979d558f-jkqtc"] Feb 28 13:37:07 crc kubenswrapper[4897]: E0228 13:37:07.421153 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f47bf46-033d-4191-a8a5-45ba3bc854e4" containerName="mariadb-database-create" Feb 28 13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.421168 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f47bf46-033d-4191-a8a5-45ba3bc854e4" containerName="mariadb-database-create" Feb 28 13:37:07 crc kubenswrapper[4897]: E0228 13:37:07.421189 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa284414-c19d-466b-a36e-6873e0e3c200" containerName="mariadb-account-create-update" Feb 28 13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.421194 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa284414-c19d-466b-a36e-6873e0e3c200" containerName="mariadb-account-create-update" Feb 28 13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.421366 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f47bf46-033d-4191-a8a5-45ba3bc854e4" containerName="mariadb-database-create" Feb 28 13:37:07 crc 
kubenswrapper[4897]: I0228 13:37:07.421387 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa284414-c19d-466b-a36e-6873e0e3c200" containerName="mariadb-account-create-update" Feb 28 13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.422150 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57979d558f-jkqtc" Feb 28 13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.424026 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 28 13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.478611 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57979d558f-jkqtc"] Feb 28 13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.606797 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fdf7502-e691-4668-86f9-256befb8cb69-config\") pod \"dnsmasq-dns-57979d558f-jkqtc\" (UID: \"4fdf7502-e691-4668-86f9-256befb8cb69\") " pod="openstack/dnsmasq-dns-57979d558f-jkqtc" Feb 28 13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.606855 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4fdf7502-e691-4668-86f9-256befb8cb69-ovsdbserver-sb\") pod \"dnsmasq-dns-57979d558f-jkqtc\" (UID: \"4fdf7502-e691-4668-86f9-256befb8cb69\") " pod="openstack/dnsmasq-dns-57979d558f-jkqtc" Feb 28 13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.606937 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4fdf7502-e691-4668-86f9-256befb8cb69-dns-svc\") pod \"dnsmasq-dns-57979d558f-jkqtc\" (UID: \"4fdf7502-e691-4668-86f9-256befb8cb69\") " pod="openstack/dnsmasq-dns-57979d558f-jkqtc" Feb 28 13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.606965 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4fdf7502-e691-4668-86f9-256befb8cb69-dns-swift-storage-0\") pod \"dnsmasq-dns-57979d558f-jkqtc\" (UID: \"4fdf7502-e691-4668-86f9-256befb8cb69\") " pod="openstack/dnsmasq-dns-57979d558f-jkqtc" Feb 28 13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.606996 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4fdf7502-e691-4668-86f9-256befb8cb69-ovsdbserver-nb\") pod \"dnsmasq-dns-57979d558f-jkqtc\" (UID: \"4fdf7502-e691-4668-86f9-256befb8cb69\") " pod="openstack/dnsmasq-dns-57979d558f-jkqtc" Feb 28 13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.607018 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cnxz\" (UniqueName: \"kubernetes.io/projected/4fdf7502-e691-4668-86f9-256befb8cb69-kube-api-access-6cnxz\") pod \"dnsmasq-dns-57979d558f-jkqtc\" (UID: \"4fdf7502-e691-4668-86f9-256befb8cb69\") " pod="openstack/dnsmasq-dns-57979d558f-jkqtc" Feb 28 13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.608881 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-8b4v9"] Feb 28 13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.610309 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-8b4v9" Feb 28 13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.612878 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 28 13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.617750 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-8b4v9"] Feb 28 13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.708417 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fdf7502-e691-4668-86f9-256befb8cb69-config\") pod \"dnsmasq-dns-57979d558f-jkqtc\" (UID: \"4fdf7502-e691-4668-86f9-256befb8cb69\") " pod="openstack/dnsmasq-dns-57979d558f-jkqtc" Feb 28 13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.708470 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4fdf7502-e691-4668-86f9-256befb8cb69-ovsdbserver-sb\") pod \"dnsmasq-dns-57979d558f-jkqtc\" (UID: \"4fdf7502-e691-4668-86f9-256befb8cb69\") " pod="openstack/dnsmasq-dns-57979d558f-jkqtc" Feb 28 13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.708519 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zszz\" (UniqueName: \"kubernetes.io/projected/2f5e97ec-af2a-49c5-a1bf-4a294df9e97a-kube-api-access-6zszz\") pod \"root-account-create-update-8b4v9\" (UID: \"2f5e97ec-af2a-49c5-a1bf-4a294df9e97a\") " pod="openstack/root-account-create-update-8b4v9" Feb 28 13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.708565 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4fdf7502-e691-4668-86f9-256befb8cb69-dns-svc\") pod \"dnsmasq-dns-57979d558f-jkqtc\" (UID: \"4fdf7502-e691-4668-86f9-256befb8cb69\") " pod="openstack/dnsmasq-dns-57979d558f-jkqtc" Feb 28 
13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.708590 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4fdf7502-e691-4668-86f9-256befb8cb69-dns-swift-storage-0\") pod \"dnsmasq-dns-57979d558f-jkqtc\" (UID: \"4fdf7502-e691-4668-86f9-256befb8cb69\") " pod="openstack/dnsmasq-dns-57979d558f-jkqtc" Feb 28 13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.708621 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4fdf7502-e691-4668-86f9-256befb8cb69-ovsdbserver-nb\") pod \"dnsmasq-dns-57979d558f-jkqtc\" (UID: \"4fdf7502-e691-4668-86f9-256befb8cb69\") " pod="openstack/dnsmasq-dns-57979d558f-jkqtc" Feb 28 13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.708638 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f5e97ec-af2a-49c5-a1bf-4a294df9e97a-operator-scripts\") pod \"root-account-create-update-8b4v9\" (UID: \"2f5e97ec-af2a-49c5-a1bf-4a294df9e97a\") " pod="openstack/root-account-create-update-8b4v9" Feb 28 13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.708660 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cnxz\" (UniqueName: \"kubernetes.io/projected/4fdf7502-e691-4668-86f9-256befb8cb69-kube-api-access-6cnxz\") pod \"dnsmasq-dns-57979d558f-jkqtc\" (UID: \"4fdf7502-e691-4668-86f9-256befb8cb69\") " pod="openstack/dnsmasq-dns-57979d558f-jkqtc" Feb 28 13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.709278 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4fdf7502-e691-4668-86f9-256befb8cb69-ovsdbserver-sb\") pod \"dnsmasq-dns-57979d558f-jkqtc\" (UID: \"4fdf7502-e691-4668-86f9-256befb8cb69\") " pod="openstack/dnsmasq-dns-57979d558f-jkqtc" Feb 28 
13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.709400 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4fdf7502-e691-4668-86f9-256befb8cb69-dns-svc\") pod \"dnsmasq-dns-57979d558f-jkqtc\" (UID: \"4fdf7502-e691-4668-86f9-256befb8cb69\") " pod="openstack/dnsmasq-dns-57979d558f-jkqtc" Feb 28 13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.709735 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4fdf7502-e691-4668-86f9-256befb8cb69-dns-swift-storage-0\") pod \"dnsmasq-dns-57979d558f-jkqtc\" (UID: \"4fdf7502-e691-4668-86f9-256befb8cb69\") " pod="openstack/dnsmasq-dns-57979d558f-jkqtc" Feb 28 13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.709841 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4fdf7502-e691-4668-86f9-256befb8cb69-ovsdbserver-nb\") pod \"dnsmasq-dns-57979d558f-jkqtc\" (UID: \"4fdf7502-e691-4668-86f9-256befb8cb69\") " pod="openstack/dnsmasq-dns-57979d558f-jkqtc" Feb 28 13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.709856 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fdf7502-e691-4668-86f9-256befb8cb69-config\") pod \"dnsmasq-dns-57979d558f-jkqtc\" (UID: \"4fdf7502-e691-4668-86f9-256befb8cb69\") " pod="openstack/dnsmasq-dns-57979d558f-jkqtc" Feb 28 13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.731130 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cnxz\" (UniqueName: \"kubernetes.io/projected/4fdf7502-e691-4668-86f9-256befb8cb69-kube-api-access-6cnxz\") pod \"dnsmasq-dns-57979d558f-jkqtc\" (UID: \"4fdf7502-e691-4668-86f9-256befb8cb69\") " pod="openstack/dnsmasq-dns-57979d558f-jkqtc" Feb 28 13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.776338 4897 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/dnsmasq-dns-57979d558f-jkqtc" Feb 28 13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.809706 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f5e97ec-af2a-49c5-a1bf-4a294df9e97a-operator-scripts\") pod \"root-account-create-update-8b4v9\" (UID: \"2f5e97ec-af2a-49c5-a1bf-4a294df9e97a\") " pod="openstack/root-account-create-update-8b4v9" Feb 28 13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.809875 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zszz\" (UniqueName: \"kubernetes.io/projected/2f5e97ec-af2a-49c5-a1bf-4a294df9e97a-kube-api-access-6zszz\") pod \"root-account-create-update-8b4v9\" (UID: \"2f5e97ec-af2a-49c5-a1bf-4a294df9e97a\") " pod="openstack/root-account-create-update-8b4v9" Feb 28 13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.811354 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f5e97ec-af2a-49c5-a1bf-4a294df9e97a-operator-scripts\") pod \"root-account-create-update-8b4v9\" (UID: \"2f5e97ec-af2a-49c5-a1bf-4a294df9e97a\") " pod="openstack/root-account-create-update-8b4v9" Feb 28 13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.831615 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zszz\" (UniqueName: \"kubernetes.io/projected/2f5e97ec-af2a-49c5-a1bf-4a294df9e97a-kube-api-access-6zszz\") pod \"root-account-create-update-8b4v9\" (UID: \"2f5e97ec-af2a-49c5-a1bf-4a294df9e97a\") " pod="openstack/root-account-create-update-8b4v9" Feb 28 13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.926859 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-8b4v9"
Feb 28 13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.991686 4897 generic.go:334] "Generic (PLEG): container finished" podID="4546a11d-5dfc-4055-9b4e-56838508d1fe" containerID="274c33b5a9c838c934dc6102615a32c67ab5aacf47b191840e62573a571f7cb9" exitCode=0
Feb 28 13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.991782 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-6wrh6" event={"ID":"4546a11d-5dfc-4055-9b4e-56838508d1fe","Type":"ContainerDied","Data":"274c33b5a9c838c934dc6102615a32c67ab5aacf47b191840e62573a571f7cb9"}
Feb 28 13:37:07 crc kubenswrapper[4897]: I0228 13:37:07.998434 4897 generic.go:334] "Generic (PLEG): container finished" podID="82d4936d-bd8d-426b-9799-ac02f672fe1a" containerID="6bfed7904117d7ea1ca961d5ee28a4cbd4c6444fce1dfac7b34f745cbde857d6" exitCode=0
Feb 28 13:37:08 crc kubenswrapper[4897]: I0228 13:37:07.999857 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d97d-account-create-update-7w7s6" event={"ID":"82d4936d-bd8d-426b-9799-ac02f672fe1a","Type":"ContainerDied","Data":"6bfed7904117d7ea1ca961d5ee28a4cbd4c6444fce1dfac7b34f745cbde857d6"}
Feb 28 13:37:08 crc kubenswrapper[4897]: I0228 13:37:07.999881 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d97d-account-create-update-7w7s6" event={"ID":"82d4936d-bd8d-426b-9799-ac02f672fe1a","Type":"ContainerStarted","Data":"775696cea62812723978b397d355d94d4526ffeec6246aadd9ba9b60d12dadfa"}
Feb 28 13:37:08 crc kubenswrapper[4897]: W0228 13:37:08.198355 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fdf7502_e691_4668_86f9_256befb8cb69.slice/crio-d83b2808281c5d5dc6dc1459ca9ec92511a575b12931bdfbb7a2f8146da3d0d1 WatchSource:0}: Error finding container d83b2808281c5d5dc6dc1459ca9ec92511a575b12931bdfbb7a2f8146da3d0d1: Status 404 returned error can't find the container with id d83b2808281c5d5dc6dc1459ca9ec92511a575b12931bdfbb7a2f8146da3d0d1
Feb 28 13:37:08 crc kubenswrapper[4897]: I0228 13:37:08.203659 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57979d558f-jkqtc"]
Feb 28 13:37:08 crc kubenswrapper[4897]: I0228 13:37:08.394812 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-8b4v9"]
Feb 28 13:37:09 crc kubenswrapper[4897]: I0228 13:37:09.006466 4897 generic.go:334] "Generic (PLEG): container finished" podID="2f5e97ec-af2a-49c5-a1bf-4a294df9e97a" containerID="0cf5b56e8358f322115ad41c5d78dfa52a9b2968416efd9dec0619ec14632f49" exitCode=0
Feb 28 13:37:09 crc kubenswrapper[4897]: I0228 13:37:09.006793 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8b4v9" event={"ID":"2f5e97ec-af2a-49c5-a1bf-4a294df9e97a","Type":"ContainerDied","Data":"0cf5b56e8358f322115ad41c5d78dfa52a9b2968416efd9dec0619ec14632f49"}
Feb 28 13:37:09 crc kubenswrapper[4897]: I0228 13:37:09.006821 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8b4v9" event={"ID":"2f5e97ec-af2a-49c5-a1bf-4a294df9e97a","Type":"ContainerStarted","Data":"17c830d25ecb8245d9ca995dd592a1c6a97d314fb08ac2e63251153a2860e4c7"}
Feb 28 13:37:09 crc kubenswrapper[4897]: I0228 13:37:09.008243 4897 generic.go:334] "Generic (PLEG): container finished" podID="4fdf7502-e691-4668-86f9-256befb8cb69" containerID="093bd17ca18770d2b42652028edd6527220331195fb0e08323a644604060b549" exitCode=0
Feb 28 13:37:09 crc kubenswrapper[4897]: I0228 13:37:09.008285 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57979d558f-jkqtc" event={"ID":"4fdf7502-e691-4668-86f9-256befb8cb69","Type":"ContainerDied","Data":"093bd17ca18770d2b42652028edd6527220331195fb0e08323a644604060b549"}
Feb 28 13:37:09 crc kubenswrapper[4897]: I0228 13:37:09.008375 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57979d558f-jkqtc" event={"ID":"4fdf7502-e691-4668-86f9-256befb8cb69","Type":"ContainerStarted","Data":"d83b2808281c5d5dc6dc1459ca9ec92511a575b12931bdfbb7a2f8146da3d0d1"}
Feb 28 13:37:09 crc kubenswrapper[4897]: I0228 13:37:09.394765 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d97d-account-create-update-7w7s6"
Feb 28 13:37:09 crc kubenswrapper[4897]: I0228 13:37:09.410099 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-6wrh6"
Feb 28 13:37:09 crc kubenswrapper[4897]: I0228 13:37:09.537424 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7w6s6\" (UniqueName: \"kubernetes.io/projected/4546a11d-5dfc-4055-9b4e-56838508d1fe-kube-api-access-7w6s6\") pod \"4546a11d-5dfc-4055-9b4e-56838508d1fe\" (UID: \"4546a11d-5dfc-4055-9b4e-56838508d1fe\") "
Feb 28 13:37:09 crc kubenswrapper[4897]: I0228 13:37:09.537541 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4546a11d-5dfc-4055-9b4e-56838508d1fe-operator-scripts\") pod \"4546a11d-5dfc-4055-9b4e-56838508d1fe\" (UID: \"4546a11d-5dfc-4055-9b4e-56838508d1fe\") "
Feb 28 13:37:09 crc kubenswrapper[4897]: I0228 13:37:09.537624 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6ng6\" (UniqueName: \"kubernetes.io/projected/82d4936d-bd8d-426b-9799-ac02f672fe1a-kube-api-access-b6ng6\") pod \"82d4936d-bd8d-426b-9799-ac02f672fe1a\" (UID: \"82d4936d-bd8d-426b-9799-ac02f672fe1a\") "
Feb 28 13:37:09 crc kubenswrapper[4897]: I0228 13:37:09.537648 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82d4936d-bd8d-426b-9799-ac02f672fe1a-operator-scripts\") pod \"82d4936d-bd8d-426b-9799-ac02f672fe1a\" (UID: \"82d4936d-bd8d-426b-9799-ac02f672fe1a\") "
Feb 28 13:37:09 crc kubenswrapper[4897]: I0228 13:37:09.538693 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4546a11d-5dfc-4055-9b4e-56838508d1fe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4546a11d-5dfc-4055-9b4e-56838508d1fe" (UID: "4546a11d-5dfc-4055-9b4e-56838508d1fe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 13:37:09 crc kubenswrapper[4897]: I0228 13:37:09.538712 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82d4936d-bd8d-426b-9799-ac02f672fe1a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "82d4936d-bd8d-426b-9799-ac02f672fe1a" (UID: "82d4936d-bd8d-426b-9799-ac02f672fe1a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 13:37:09 crc kubenswrapper[4897]: I0228 13:37:09.542499 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82d4936d-bd8d-426b-9799-ac02f672fe1a-kube-api-access-b6ng6" (OuterVolumeSpecName: "kube-api-access-b6ng6") pod "82d4936d-bd8d-426b-9799-ac02f672fe1a" (UID: "82d4936d-bd8d-426b-9799-ac02f672fe1a"). InnerVolumeSpecName "kube-api-access-b6ng6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 13:37:09 crc kubenswrapper[4897]: I0228 13:37:09.542653 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4546a11d-5dfc-4055-9b4e-56838508d1fe-kube-api-access-7w6s6" (OuterVolumeSpecName: "kube-api-access-7w6s6") pod "4546a11d-5dfc-4055-9b4e-56838508d1fe" (UID: "4546a11d-5dfc-4055-9b4e-56838508d1fe"). InnerVolumeSpecName "kube-api-access-7w6s6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 13:37:09 crc kubenswrapper[4897]: I0228 13:37:09.639223 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6ng6\" (UniqueName: \"kubernetes.io/projected/82d4936d-bd8d-426b-9799-ac02f672fe1a-kube-api-access-b6ng6\") on node \"crc\" DevicePath \"\""
Feb 28 13:37:09 crc kubenswrapper[4897]: I0228 13:37:09.639267 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82d4936d-bd8d-426b-9799-ac02f672fe1a-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 28 13:37:09 crc kubenswrapper[4897]: I0228 13:37:09.639283 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7w6s6\" (UniqueName: \"kubernetes.io/projected/4546a11d-5dfc-4055-9b4e-56838508d1fe-kube-api-access-7w6s6\") on node \"crc\" DevicePath \"\""
Feb 28 13:37:09 crc kubenswrapper[4897]: I0228 13:37:09.639295 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4546a11d-5dfc-4055-9b4e-56838508d1fe-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 28 13:37:10 crc kubenswrapper[4897]: I0228 13:37:10.041415 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-6wrh6"
Feb 28 13:37:10 crc kubenswrapper[4897]: I0228 13:37:10.041428 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-6wrh6" event={"ID":"4546a11d-5dfc-4055-9b4e-56838508d1fe","Type":"ContainerDied","Data":"7dc133793a295edfa9a3eacd7975affd8b499361d8982d0f535262f398613189"}
Feb 28 13:37:10 crc kubenswrapper[4897]: I0228 13:37:10.041516 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dc133793a295edfa9a3eacd7975affd8b499361d8982d0f535262f398613189"
Feb 28 13:37:10 crc kubenswrapper[4897]: I0228 13:37:10.046700 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57979d558f-jkqtc" event={"ID":"4fdf7502-e691-4668-86f9-256befb8cb69","Type":"ContainerStarted","Data":"d349c1a341a6d70e2d26d824328e437b41aa4a3630dbca9b47f87c1b38868b4a"}
Feb 28 13:37:10 crc kubenswrapper[4897]: I0228 13:37:10.046946 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57979d558f-jkqtc"
Feb 28 13:37:10 crc kubenswrapper[4897]: I0228 13:37:10.048296 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d97d-account-create-update-7w7s6" event={"ID":"82d4936d-bd8d-426b-9799-ac02f672fe1a","Type":"ContainerDied","Data":"775696cea62812723978b397d355d94d4526ffeec6246aadd9ba9b60d12dadfa"}
Feb 28 13:37:10 crc kubenswrapper[4897]: I0228 13:37:10.048390 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="775696cea62812723978b397d355d94d4526ffeec6246aadd9ba9b60d12dadfa"
Feb 28 13:37:10 crc kubenswrapper[4897]: I0228 13:37:10.048493 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d97d-account-create-update-7w7s6"
Feb 28 13:37:10 crc kubenswrapper[4897]: I0228 13:37:10.080284 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57979d558f-jkqtc" podStartSLOduration=3.080262208 podStartE2EDuration="3.080262208s" podCreationTimestamp="2026-02-28 13:37:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:37:10.074788852 +0000 UTC m=+1244.317109559" watchObservedRunningTime="2026-02-28 13:37:10.080262208 +0000 UTC m=+1244.322582875"
Feb 28 13:37:10 crc kubenswrapper[4897]: I0228 13:37:10.446243 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8b4v9"
Feb 28 13:37:10 crc kubenswrapper[4897]: I0228 13:37:10.551623 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zszz\" (UniqueName: \"kubernetes.io/projected/2f5e97ec-af2a-49c5-a1bf-4a294df9e97a-kube-api-access-6zszz\") pod \"2f5e97ec-af2a-49c5-a1bf-4a294df9e97a\" (UID: \"2f5e97ec-af2a-49c5-a1bf-4a294df9e97a\") "
Feb 28 13:37:10 crc kubenswrapper[4897]: I0228 13:37:10.552036 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f5e97ec-af2a-49c5-a1bf-4a294df9e97a-operator-scripts\") pod \"2f5e97ec-af2a-49c5-a1bf-4a294df9e97a\" (UID: \"2f5e97ec-af2a-49c5-a1bf-4a294df9e97a\") "
Feb 28 13:37:10 crc kubenswrapper[4897]: I0228 13:37:10.552543 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f5e97ec-af2a-49c5-a1bf-4a294df9e97a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2f5e97ec-af2a-49c5-a1bf-4a294df9e97a" (UID: "2f5e97ec-af2a-49c5-a1bf-4a294df9e97a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 13:37:10 crc kubenswrapper[4897]: I0228 13:37:10.557049 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f5e97ec-af2a-49c5-a1bf-4a294df9e97a-kube-api-access-6zszz" (OuterVolumeSpecName: "kube-api-access-6zszz") pod "2f5e97ec-af2a-49c5-a1bf-4a294df9e97a" (UID: "2f5e97ec-af2a-49c5-a1bf-4a294df9e97a"). InnerVolumeSpecName "kube-api-access-6zszz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 13:37:10 crc kubenswrapper[4897]: I0228 13:37:10.654944 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f5e97ec-af2a-49c5-a1bf-4a294df9e97a-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 28 13:37:10 crc kubenswrapper[4897]: I0228 13:37:10.655004 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zszz\" (UniqueName: \"kubernetes.io/projected/2f5e97ec-af2a-49c5-a1bf-4a294df9e97a-kube-api-access-6zszz\") on node \"crc\" DevicePath \"\""
Feb 28 13:37:11 crc kubenswrapper[4897]: I0228 13:37:11.061379 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8b4v9" event={"ID":"2f5e97ec-af2a-49c5-a1bf-4a294df9e97a","Type":"ContainerDied","Data":"17c830d25ecb8245d9ca995dd592a1c6a97d314fb08ac2e63251153a2860e4c7"}
Feb 28 13:37:11 crc kubenswrapper[4897]: I0228 13:37:11.061432 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17c830d25ecb8245d9ca995dd592a1c6a97d314fb08ac2e63251153a2860e4c7"
Feb 28 13:37:11 crc kubenswrapper[4897]: I0228 13:37:11.061431 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8b4v9"
Feb 28 13:37:11 crc kubenswrapper[4897]: I0228 13:37:11.250860 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-tklpd"]
Feb 28 13:37:11 crc kubenswrapper[4897]: E0228 13:37:11.252362 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4546a11d-5dfc-4055-9b4e-56838508d1fe" containerName="mariadb-database-create"
Feb 28 13:37:11 crc kubenswrapper[4897]: I0228 13:37:11.252500 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="4546a11d-5dfc-4055-9b4e-56838508d1fe" containerName="mariadb-database-create"
Feb 28 13:37:11 crc kubenswrapper[4897]: E0228 13:37:11.252598 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82d4936d-bd8d-426b-9799-ac02f672fe1a" containerName="mariadb-account-create-update"
Feb 28 13:37:11 crc kubenswrapper[4897]: I0228 13:37:11.252673 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="82d4936d-bd8d-426b-9799-ac02f672fe1a" containerName="mariadb-account-create-update"
Feb 28 13:37:11 crc kubenswrapper[4897]: E0228 13:37:11.252762 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f5e97ec-af2a-49c5-a1bf-4a294df9e97a" containerName="mariadb-account-create-update"
Feb 28 13:37:11 crc kubenswrapper[4897]: I0228 13:37:11.252845 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f5e97ec-af2a-49c5-a1bf-4a294df9e97a" containerName="mariadb-account-create-update"
Feb 28 13:37:11 crc kubenswrapper[4897]: I0228 13:37:11.253242 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="82d4936d-bd8d-426b-9799-ac02f672fe1a" containerName="mariadb-account-create-update"
Feb 28 13:37:11 crc kubenswrapper[4897]: I0228 13:37:11.253378 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f5e97ec-af2a-49c5-a1bf-4a294df9e97a" containerName="mariadb-account-create-update"
Feb 28 13:37:11 crc kubenswrapper[4897]: I0228 13:37:11.253483 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="4546a11d-5dfc-4055-9b4e-56838508d1fe" containerName="mariadb-database-create"
Feb 28 13:37:11 crc kubenswrapper[4897]: I0228 13:37:11.256096 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-tklpd"
Feb 28 13:37:11 crc kubenswrapper[4897]: I0228 13:37:11.259331 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-jzxcb"
Feb 28 13:37:11 crc kubenswrapper[4897]: I0228 13:37:11.260631 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Feb 28 13:37:11 crc kubenswrapper[4897]: I0228 13:37:11.281955 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-tklpd"]
Feb 28 13:37:11 crc kubenswrapper[4897]: I0228 13:37:11.366675 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d754bb18-6ebe-445e-8826-53d247030dc7-db-sync-config-data\") pod \"glance-db-sync-tklpd\" (UID: \"d754bb18-6ebe-445e-8826-53d247030dc7\") " pod="openstack/glance-db-sync-tklpd"
Feb 28 13:37:11 crc kubenswrapper[4897]: I0228 13:37:11.366756 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpddd\" (UniqueName: \"kubernetes.io/projected/d754bb18-6ebe-445e-8826-53d247030dc7-kube-api-access-cpddd\") pod \"glance-db-sync-tklpd\" (UID: \"d754bb18-6ebe-445e-8826-53d247030dc7\") " pod="openstack/glance-db-sync-tklpd"
Feb 28 13:37:11 crc kubenswrapper[4897]: I0228 13:37:11.366826 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d754bb18-6ebe-445e-8826-53d247030dc7-config-data\") pod \"glance-db-sync-tklpd\" (UID: \"d754bb18-6ebe-445e-8826-53d247030dc7\") " pod="openstack/glance-db-sync-tklpd"
Feb 28 13:37:11 crc kubenswrapper[4897]: I0228 13:37:11.367029 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d754bb18-6ebe-445e-8826-53d247030dc7-combined-ca-bundle\") pod \"glance-db-sync-tklpd\" (UID: \"d754bb18-6ebe-445e-8826-53d247030dc7\") " pod="openstack/glance-db-sync-tklpd"
Feb 28 13:37:11 crc kubenswrapper[4897]: I0228 13:37:11.468756 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d754bb18-6ebe-445e-8826-53d247030dc7-db-sync-config-data\") pod \"glance-db-sync-tklpd\" (UID: \"d754bb18-6ebe-445e-8826-53d247030dc7\") " pod="openstack/glance-db-sync-tklpd"
Feb 28 13:37:11 crc kubenswrapper[4897]: I0228 13:37:11.468813 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpddd\" (UniqueName: \"kubernetes.io/projected/d754bb18-6ebe-445e-8826-53d247030dc7-kube-api-access-cpddd\") pod \"glance-db-sync-tklpd\" (UID: \"d754bb18-6ebe-445e-8826-53d247030dc7\") " pod="openstack/glance-db-sync-tklpd"
Feb 28 13:37:11 crc kubenswrapper[4897]: I0228 13:37:11.468871 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d754bb18-6ebe-445e-8826-53d247030dc7-config-data\") pod \"glance-db-sync-tklpd\" (UID: \"d754bb18-6ebe-445e-8826-53d247030dc7\") " pod="openstack/glance-db-sync-tklpd"
Feb 28 13:37:11 crc kubenswrapper[4897]: I0228 13:37:11.468937 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d754bb18-6ebe-445e-8826-53d247030dc7-combined-ca-bundle\") pod \"glance-db-sync-tklpd\" (UID: \"d754bb18-6ebe-445e-8826-53d247030dc7\") " pod="openstack/glance-db-sync-tklpd"
Feb 28 13:37:11 crc kubenswrapper[4897]: I0228 13:37:11.473379 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d754bb18-6ebe-445e-8826-53d247030dc7-db-sync-config-data\") pod \"glance-db-sync-tklpd\" (UID: \"d754bb18-6ebe-445e-8826-53d247030dc7\") " pod="openstack/glance-db-sync-tklpd"
Feb 28 13:37:11 crc kubenswrapper[4897]: I0228 13:37:11.473750 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d754bb18-6ebe-445e-8826-53d247030dc7-combined-ca-bundle\") pod \"glance-db-sync-tklpd\" (UID: \"d754bb18-6ebe-445e-8826-53d247030dc7\") " pod="openstack/glance-db-sync-tklpd"
Feb 28 13:37:11 crc kubenswrapper[4897]: I0228 13:37:11.474005 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d754bb18-6ebe-445e-8826-53d247030dc7-config-data\") pod \"glance-db-sync-tklpd\" (UID: \"d754bb18-6ebe-445e-8826-53d247030dc7\") " pod="openstack/glance-db-sync-tklpd"
Feb 28 13:37:11 crc kubenswrapper[4897]: I0228 13:37:11.507579 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpddd\" (UniqueName: \"kubernetes.io/projected/d754bb18-6ebe-445e-8826-53d247030dc7-kube-api-access-cpddd\") pod \"glance-db-sync-tklpd\" (UID: \"d754bb18-6ebe-445e-8826-53d247030dc7\") " pod="openstack/glance-db-sync-tklpd"
Feb 28 13:37:11 crc kubenswrapper[4897]: I0228 13:37:11.547829 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-jsdwb" podUID="cd2fa5a5-caab-4d3d-8324-f6107d50f59f" containerName="ovn-controller" probeResult="failure" output=<
Feb 28 13:37:11 crc kubenswrapper[4897]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Feb 28 13:37:11 crc kubenswrapper[4897]: >
Feb 28 13:37:11 crc kubenswrapper[4897]: I0228 13:37:11.570685 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-tklpd"
Feb 28 13:37:12 crc kubenswrapper[4897]: I0228 13:37:12.127287 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-tklpd"]
Feb 28 13:37:13 crc kubenswrapper[4897]: I0228 13:37:13.077349 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tklpd" event={"ID":"d754bb18-6ebe-445e-8826-53d247030dc7","Type":"ContainerStarted","Data":"0bf4cf72d83c5f1ba542a9323456fde2bf0a760dd6bd8ceb7e91ce1e45ce31a8"}
Feb 28 13:37:13 crc kubenswrapper[4897]: E0228 13:37:13.458415 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c"
Feb 28 13:37:13 crc kubenswrapper[4897]: E0228 13:37:13.458501 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538096-ws9qt" podUID="6e94c0b2-21a6-496c-8188-dfcaf0d66b2b"
Feb 28 13:37:14 crc kubenswrapper[4897]: I0228 13:37:14.113799 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-8b4v9"]
Feb 28 13:37:14 crc kubenswrapper[4897]: I0228 13:37:14.120088 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-8b4v9"]
Feb 28 13:37:14 crc kubenswrapper[4897]: I0228 13:37:14.471241 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f5e97ec-af2a-49c5-a1bf-4a294df9e97a" path="/var/lib/kubelet/pods/2f5e97ec-af2a-49c5-a1bf-4a294df9e97a/volumes"
Feb 28 13:37:16 crc kubenswrapper[4897]: I0228 13:37:16.233651 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Feb 28 13:37:16 crc kubenswrapper[4897]: I0228 13:37:16.515652 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-jsdwb" podUID="cd2fa5a5-caab-4d3d-8324-f6107d50f59f" containerName="ovn-controller" probeResult="failure" output=<
Feb 28 13:37:16 crc kubenswrapper[4897]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Feb 28 13:37:16 crc kubenswrapper[4897]: >
Feb 28 13:37:16 crc kubenswrapper[4897]: I0228 13:37:16.574426 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-ch9bl"
Feb 28 13:37:16 crc kubenswrapper[4897]: I0228 13:37:16.580726 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-ch9bl"
Feb 28 13:37:16 crc kubenswrapper[4897]: I0228 13:37:16.807084 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-jsdwb-config-tslfz"]
Feb 28 13:37:16 crc kubenswrapper[4897]: I0228 13:37:16.808783 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jsdwb-config-tslfz"
Feb 28 13:37:16 crc kubenswrapper[4897]: I0228 13:37:16.823007 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jsdwb-config-tslfz"]
Feb 28 13:37:16 crc kubenswrapper[4897]: I0228 13:37:16.841646 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="5a792d6c-3a28-4775-87bf-b099ea550a00" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.108:5671: connect: connection refused"
Feb 28 13:37:16 crc kubenswrapper[4897]: I0228 13:37:16.843727 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Feb 28 13:37:16 crc kubenswrapper[4897]: I0228 13:37:16.956350 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfr76\" (UniqueName: \"kubernetes.io/projected/37717215-0e60-4829-8c51-ea7d6efd985d-kube-api-access-xfr76\") pod \"ovn-controller-jsdwb-config-tslfz\" (UID: \"37717215-0e60-4829-8c51-ea7d6efd985d\") " pod="openstack/ovn-controller-jsdwb-config-tslfz"
Feb 28 13:37:16 crc kubenswrapper[4897]: I0228 13:37:16.956627 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/37717215-0e60-4829-8c51-ea7d6efd985d-var-log-ovn\") pod \"ovn-controller-jsdwb-config-tslfz\" (UID: \"37717215-0e60-4829-8c51-ea7d6efd985d\") " pod="openstack/ovn-controller-jsdwb-config-tslfz"
Feb 28 13:37:16 crc kubenswrapper[4897]: I0228 13:37:16.956679 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/37717215-0e60-4829-8c51-ea7d6efd985d-var-run\") pod \"ovn-controller-jsdwb-config-tslfz\" (UID: \"37717215-0e60-4829-8c51-ea7d6efd985d\") " pod="openstack/ovn-controller-jsdwb-config-tslfz"
Feb 28 13:37:16 crc kubenswrapper[4897]: I0228 13:37:16.956709 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/37717215-0e60-4829-8c51-ea7d6efd985d-additional-scripts\") pod \"ovn-controller-jsdwb-config-tslfz\" (UID: \"37717215-0e60-4829-8c51-ea7d6efd985d\") " pod="openstack/ovn-controller-jsdwb-config-tslfz"
Feb 28 13:37:16 crc kubenswrapper[4897]: I0228 13:37:16.956743 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/37717215-0e60-4829-8c51-ea7d6efd985d-var-run-ovn\") pod \"ovn-controller-jsdwb-config-tslfz\" (UID: \"37717215-0e60-4829-8c51-ea7d6efd985d\") " pod="openstack/ovn-controller-jsdwb-config-tslfz"
Feb 28 13:37:16 crc kubenswrapper[4897]: I0228 13:37:16.956777 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/37717215-0e60-4829-8c51-ea7d6efd985d-scripts\") pod \"ovn-controller-jsdwb-config-tslfz\" (UID: \"37717215-0e60-4829-8c51-ea7d6efd985d\") " pod="openstack/ovn-controller-jsdwb-config-tslfz"
Feb 28 13:37:17 crc kubenswrapper[4897]: I0228 13:37:17.058438 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/37717215-0e60-4829-8c51-ea7d6efd985d-var-run\") pod \"ovn-controller-jsdwb-config-tslfz\" (UID: \"37717215-0e60-4829-8c51-ea7d6efd985d\") " pod="openstack/ovn-controller-jsdwb-config-tslfz"
Feb 28 13:37:17 crc kubenswrapper[4897]: I0228 13:37:17.058498 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/37717215-0e60-4829-8c51-ea7d6efd985d-additional-scripts\") pod \"ovn-controller-jsdwb-config-tslfz\" (UID: \"37717215-0e60-4829-8c51-ea7d6efd985d\") " pod="openstack/ovn-controller-jsdwb-config-tslfz"
Feb 28 13:37:17 crc kubenswrapper[4897]: I0228 13:37:17.058537 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/37717215-0e60-4829-8c51-ea7d6efd985d-var-run-ovn\") pod \"ovn-controller-jsdwb-config-tslfz\" (UID: \"37717215-0e60-4829-8c51-ea7d6efd985d\") " pod="openstack/ovn-controller-jsdwb-config-tslfz"
Feb 28 13:37:17 crc kubenswrapper[4897]: I0228 13:37:17.058573 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/37717215-0e60-4829-8c51-ea7d6efd985d-scripts\") pod \"ovn-controller-jsdwb-config-tslfz\" (UID: \"37717215-0e60-4829-8c51-ea7d6efd985d\") " pod="openstack/ovn-controller-jsdwb-config-tslfz"
Feb 28 13:37:17 crc kubenswrapper[4897]: I0228 13:37:17.058614 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfr76\" (UniqueName: \"kubernetes.io/projected/37717215-0e60-4829-8c51-ea7d6efd985d-kube-api-access-xfr76\") pod \"ovn-controller-jsdwb-config-tslfz\" (UID: \"37717215-0e60-4829-8c51-ea7d6efd985d\") " pod="openstack/ovn-controller-jsdwb-config-tslfz"
Feb 28 13:37:17 crc kubenswrapper[4897]: I0228 13:37:17.058648 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/37717215-0e60-4829-8c51-ea7d6efd985d-var-log-ovn\") pod \"ovn-controller-jsdwb-config-tslfz\" (UID: \"37717215-0e60-4829-8c51-ea7d6efd985d\") " pod="openstack/ovn-controller-jsdwb-config-tslfz"
Feb 28 13:37:17 crc kubenswrapper[4897]: I0228 13:37:17.058881 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/37717215-0e60-4829-8c51-ea7d6efd985d-var-log-ovn\") pod \"ovn-controller-jsdwb-config-tslfz\" (UID: \"37717215-0e60-4829-8c51-ea7d6efd985d\") " pod="openstack/ovn-controller-jsdwb-config-tslfz"
Feb 28 13:37:17 crc kubenswrapper[4897]: I0228 13:37:17.058906 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/37717215-0e60-4829-8c51-ea7d6efd985d-var-run\") pod \"ovn-controller-jsdwb-config-tslfz\" (UID: \"37717215-0e60-4829-8c51-ea7d6efd985d\") " pod="openstack/ovn-controller-jsdwb-config-tslfz"
Feb 28 13:37:17 crc kubenswrapper[4897]: I0228 13:37:17.059233 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/37717215-0e60-4829-8c51-ea7d6efd985d-var-run-ovn\") pod \"ovn-controller-jsdwb-config-tslfz\" (UID: \"37717215-0e60-4829-8c51-ea7d6efd985d\") " pod="openstack/ovn-controller-jsdwb-config-tslfz"
Feb 28 13:37:17 crc kubenswrapper[4897]: I0228 13:37:17.061407 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/37717215-0e60-4829-8c51-ea7d6efd985d-scripts\") pod \"ovn-controller-jsdwb-config-tslfz\" (UID: \"37717215-0e60-4829-8c51-ea7d6efd985d\") " pod="openstack/ovn-controller-jsdwb-config-tslfz"
Feb 28 13:37:17 crc kubenswrapper[4897]: I0228 13:37:17.061993 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/37717215-0e60-4829-8c51-ea7d6efd985d-additional-scripts\") pod \"ovn-controller-jsdwb-config-tslfz\" (UID: \"37717215-0e60-4829-8c51-ea7d6efd985d\") " pod="openstack/ovn-controller-jsdwb-config-tslfz"
Feb 28 13:37:17 crc kubenswrapper[4897]: I0228 13:37:17.080057 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfr76\" (UniqueName: \"kubernetes.io/projected/37717215-0e60-4829-8c51-ea7d6efd985d-kube-api-access-xfr76\") pod \"ovn-controller-jsdwb-config-tslfz\" (UID: \"37717215-0e60-4829-8c51-ea7d6efd985d\") " pod="openstack/ovn-controller-jsdwb-config-tslfz"
Feb 28 13:37:17 crc kubenswrapper[4897]: I0228 13:37:17.152491 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="6bf46d42-2d7e-410d-8a74-1ce12bb280b2" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.109:5671: connect: connection refused"
Feb 28 13:37:17 crc kubenswrapper[4897]: I0228 13:37:17.165862 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jsdwb-config-tslfz"
Feb 28 13:37:17 crc kubenswrapper[4897]: I0228 13:37:17.380706 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/notifications-rabbitmq-server-0" podUID="48885530-3df1-42cf-9c7f-2f86a21026a9" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.110:5671: connect: connection refused"
Feb 28 13:37:17 crc kubenswrapper[4897]: I0228 13:37:17.777456 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57979d558f-jkqtc"
Feb 28 13:37:17 crc kubenswrapper[4897]: I0228 13:37:17.852396 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c566df67-95sj7"]
Feb 28 13:37:17 crc kubenswrapper[4897]: I0228 13:37:17.852892 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75c566df67-95sj7" podUID="8b08a755-562e-41ee-9591-eb9cb3fcb3c2" containerName="dnsmasq-dns" containerID="cri-o://15b08ece46d11ef4f9d673a24b8c7454d790c138022240d86638a7e9fc43e830" gracePeriod=10
Feb 28 13:37:18 crc kubenswrapper[4897]: I0228 13:37:18.126217 4897 generic.go:334] "Generic (PLEG): container finished" podID="8b08a755-562e-41ee-9591-eb9cb3fcb3c2" containerID="15b08ece46d11ef4f9d673a24b8c7454d790c138022240d86638a7e9fc43e830" exitCode=0
Feb 28 13:37:18 crc kubenswrapper[4897]: I0228 13:37:18.126256 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c566df67-95sj7" event={"ID":"8b08a755-562e-41ee-9591-eb9cb3fcb3c2","Type":"ContainerDied","Data":"15b08ece46d11ef4f9d673a24b8c7454d790c138022240d86638a7e9fc43e830"}
Feb 28 13:37:18 crc kubenswrapper[4897]: I0228 13:37:18.161780 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-75c566df67-95sj7" podUID="8b08a755-562e-41ee-9591-eb9cb3fcb3c2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: connect: connection refused"
Feb 28 13:37:19 crc kubenswrapper[4897]: I0228 13:37:19.138428 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-5h5dn"]
Feb 28 13:37:19 crc kubenswrapper[4897]: I0228 13:37:19.139523 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5h5dn"
Feb 28 13:37:19 crc kubenswrapper[4897]: I0228 13:37:19.142252 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Feb 28 13:37:19 crc kubenswrapper[4897]: I0228 13:37:19.149206 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-5h5dn"]
Feb 28 13:37:19 crc kubenswrapper[4897]: I0228 13:37:19.297149 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6d74\" (UniqueName: \"kubernetes.io/projected/9e9f9d43-6498-42ee-a72c-e88395991277-kube-api-access-f6d74\") pod \"root-account-create-update-5h5dn\" (UID: \"9e9f9d43-6498-42ee-a72c-e88395991277\") " pod="openstack/root-account-create-update-5h5dn"
Feb 28 13:37:19 crc kubenswrapper[4897]: I0228 13:37:19.297273 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e9f9d43-6498-42ee-a72c-e88395991277-operator-scripts\") pod \"root-account-create-update-5h5dn\" (UID: \"9e9f9d43-6498-42ee-a72c-e88395991277\") " pod="openstack/root-account-create-update-5h5dn"
Feb 28 13:37:19 crc kubenswrapper[4897]: I0228 13:37:19.399040 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e9f9d43-6498-42ee-a72c-e88395991277-operator-scripts\") pod \"root-account-create-update-5h5dn\" (UID: \"9e9f9d43-6498-42ee-a72c-e88395991277\") " pod="openstack/root-account-create-update-5h5dn"
Feb 28 13:37:19 crc kubenswrapper[4897]: I0228 13:37:19.399179 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6d74\" (UniqueName: \"kubernetes.io/projected/9e9f9d43-6498-42ee-a72c-e88395991277-kube-api-access-f6d74\") pod \"root-account-create-update-5h5dn\" (UID: \"9e9f9d43-6498-42ee-a72c-e88395991277\") " pod="openstack/root-account-create-update-5h5dn"
Feb 28 13:37:19 crc kubenswrapper[4897]: I0228 13:37:19.400212 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e9f9d43-6498-42ee-a72c-e88395991277-operator-scripts\") pod \"root-account-create-update-5h5dn\" (UID: \"9e9f9d43-6498-42ee-a72c-e88395991277\") " pod="openstack/root-account-create-update-5h5dn"
Feb 28 13:37:19 crc kubenswrapper[4897]: I0228 13:37:19.441644 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6d74\" (UniqueName: \"kubernetes.io/projected/9e9f9d43-6498-42ee-a72c-e88395991277-kube-api-access-f6d74\") pod \"root-account-create-update-5h5dn\" (UID: \"9e9f9d43-6498-42ee-a72c-e88395991277\") " pod="openstack/root-account-create-update-5h5dn"
Feb 28 13:37:19 crc kubenswrapper[4897]: I0228 13:37:19.506061 4897 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/root-account-create-update-5h5dn" Feb 28 13:37:21 crc kubenswrapper[4897]: I0228 13:37:21.507232 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-jsdwb" podUID="cd2fa5a5-caab-4d3d-8324-f6107d50f59f" containerName="ovn-controller" probeResult="failure" output=< Feb 28 13:37:21 crc kubenswrapper[4897]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 28 13:37:21 crc kubenswrapper[4897]: > Feb 28 13:37:23 crc kubenswrapper[4897]: I0228 13:37:23.162784 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-75c566df67-95sj7" podUID="8b08a755-562e-41ee-9591-eb9cb3fcb3c2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: connect: connection refused" Feb 28 13:37:24 crc kubenswrapper[4897]: I0228 13:37:24.119251 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c566df67-95sj7" Feb 28 13:37:24 crc kubenswrapper[4897]: I0228 13:37:24.186698 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c566df67-95sj7" event={"ID":"8b08a755-562e-41ee-9591-eb9cb3fcb3c2","Type":"ContainerDied","Data":"68e32f883d3b6ce97d9aced65239f92e55762729c8eeff6d1463575648b0bf9e"} Feb 28 13:37:24 crc kubenswrapper[4897]: I0228 13:37:24.187042 4897 scope.go:117] "RemoveContainer" containerID="15b08ece46d11ef4f9d673a24b8c7454d790c138022240d86638a7e9fc43e830" Feb 28 13:37:24 crc kubenswrapper[4897]: I0228 13:37:24.186819 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c566df67-95sj7" Feb 28 13:37:24 crc kubenswrapper[4897]: I0228 13:37:24.231528 4897 scope.go:117] "RemoveContainer" containerID="869f2f809289b61bb3c9f4d46b09243f2f07274542f71fcfbdcbfef9e1d0a516" Feb 28 13:37:24 crc kubenswrapper[4897]: I0228 13:37:24.248791 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-5h5dn"] Feb 28 13:37:24 crc kubenswrapper[4897]: W0228 13:37:24.263255 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e9f9d43_6498_42ee_a72c_e88395991277.slice/crio-b4bfa8794867b1c6e118f1a8d44e6e4f5ca6a2c414b686f000e47563ef339565 WatchSource:0}: Error finding container b4bfa8794867b1c6e118f1a8d44e6e4f5ca6a2c414b686f000e47563ef339565: Status 404 returned error can't find the container with id b4bfa8794867b1c6e118f1a8d44e6e4f5ca6a2c414b686f000e47563ef339565 Feb 28 13:37:24 crc kubenswrapper[4897]: I0228 13:37:24.291893 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b08a755-562e-41ee-9591-eb9cb3fcb3c2-ovsdbserver-nb\") pod \"8b08a755-562e-41ee-9591-eb9cb3fcb3c2\" (UID: \"8b08a755-562e-41ee-9591-eb9cb3fcb3c2\") " Feb 28 13:37:24 crc kubenswrapper[4897]: I0228 13:37:24.291952 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b08a755-562e-41ee-9591-eb9cb3fcb3c2-ovsdbserver-sb\") pod \"8b08a755-562e-41ee-9591-eb9cb3fcb3c2\" (UID: \"8b08a755-562e-41ee-9591-eb9cb3fcb3c2\") " Feb 28 13:37:24 crc kubenswrapper[4897]: I0228 13:37:24.292026 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b08a755-562e-41ee-9591-eb9cb3fcb3c2-dns-svc\") pod \"8b08a755-562e-41ee-9591-eb9cb3fcb3c2\" (UID: 
\"8b08a755-562e-41ee-9591-eb9cb3fcb3c2\") " Feb 28 13:37:24 crc kubenswrapper[4897]: I0228 13:37:24.292137 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b08a755-562e-41ee-9591-eb9cb3fcb3c2-config\") pod \"8b08a755-562e-41ee-9591-eb9cb3fcb3c2\" (UID: \"8b08a755-562e-41ee-9591-eb9cb3fcb3c2\") " Feb 28 13:37:24 crc kubenswrapper[4897]: I0228 13:37:24.292163 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bq74v\" (UniqueName: \"kubernetes.io/projected/8b08a755-562e-41ee-9591-eb9cb3fcb3c2-kube-api-access-bq74v\") pod \"8b08a755-562e-41ee-9591-eb9cb3fcb3c2\" (UID: \"8b08a755-562e-41ee-9591-eb9cb3fcb3c2\") " Feb 28 13:37:24 crc kubenswrapper[4897]: I0228 13:37:24.296753 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b08a755-562e-41ee-9591-eb9cb3fcb3c2-kube-api-access-bq74v" (OuterVolumeSpecName: "kube-api-access-bq74v") pod "8b08a755-562e-41ee-9591-eb9cb3fcb3c2" (UID: "8b08a755-562e-41ee-9591-eb9cb3fcb3c2"). InnerVolumeSpecName "kube-api-access-bq74v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:37:24 crc kubenswrapper[4897]: I0228 13:37:24.338996 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b08a755-562e-41ee-9591-eb9cb3fcb3c2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8b08a755-562e-41ee-9591-eb9cb3fcb3c2" (UID: "8b08a755-562e-41ee-9591-eb9cb3fcb3c2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:37:24 crc kubenswrapper[4897]: I0228 13:37:24.339003 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b08a755-562e-41ee-9591-eb9cb3fcb3c2-config" (OuterVolumeSpecName: "config") pod "8b08a755-562e-41ee-9591-eb9cb3fcb3c2" (UID: "8b08a755-562e-41ee-9591-eb9cb3fcb3c2"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:37:24 crc kubenswrapper[4897]: I0228 13:37:24.346030 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b08a755-562e-41ee-9591-eb9cb3fcb3c2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8b08a755-562e-41ee-9591-eb9cb3fcb3c2" (UID: "8b08a755-562e-41ee-9591-eb9cb3fcb3c2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:37:24 crc kubenswrapper[4897]: I0228 13:37:24.359355 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b08a755-562e-41ee-9591-eb9cb3fcb3c2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8b08a755-562e-41ee-9591-eb9cb3fcb3c2" (UID: "8b08a755-562e-41ee-9591-eb9cb3fcb3c2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:37:24 crc kubenswrapper[4897]: I0228 13:37:24.364208 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jsdwb-config-tslfz"] Feb 28 13:37:24 crc kubenswrapper[4897]: W0228 13:37:24.370778 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37717215_0e60_4829_8c51_ea7d6efd985d.slice/crio-a7bbd966e1c8498487d0e7e55e46fd3dfd3b96ec611aa40974333521663861e9 WatchSource:0}: Error finding container a7bbd966e1c8498487d0e7e55e46fd3dfd3b96ec611aa40974333521663861e9: Status 404 returned error can't find the container with id a7bbd966e1c8498487d0e7e55e46fd3dfd3b96ec611aa40974333521663861e9 Feb 28 13:37:24 crc kubenswrapper[4897]: I0228 13:37:24.394054 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b08a755-562e-41ee-9591-eb9cb3fcb3c2-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:24 crc kubenswrapper[4897]: I0228 13:37:24.394251 4897 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bq74v\" (UniqueName: \"kubernetes.io/projected/8b08a755-562e-41ee-9591-eb9cb3fcb3c2-kube-api-access-bq74v\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:24 crc kubenswrapper[4897]: I0228 13:37:24.394261 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b08a755-562e-41ee-9591-eb9cb3fcb3c2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:24 crc kubenswrapper[4897]: I0228 13:37:24.394269 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b08a755-562e-41ee-9591-eb9cb3fcb3c2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:24 crc kubenswrapper[4897]: I0228 13:37:24.394279 4897 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b08a755-562e-41ee-9591-eb9cb3fcb3c2-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:24 crc kubenswrapper[4897]: E0228 13:37:24.463716 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:37:24 crc kubenswrapper[4897]: I0228 13:37:24.513485 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c566df67-95sj7"] Feb 28 13:37:24 crc kubenswrapper[4897]: I0228 13:37:24.525104 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75c566df67-95sj7"] Feb 28 13:37:25 crc kubenswrapper[4897]: I0228 13:37:25.205433 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tklpd" 
event={"ID":"d754bb18-6ebe-445e-8826-53d247030dc7","Type":"ContainerStarted","Data":"054dce8b30edb292831f98a5cfee5d3dffb5788c55d5f1d717ac7e3b40882bdc"} Feb 28 13:37:25 crc kubenswrapper[4897]: I0228 13:37:25.211453 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5h5dn" event={"ID":"9e9f9d43-6498-42ee-a72c-e88395991277","Type":"ContainerDied","Data":"432bfebc64b1798bae6ab76386d8d01deaa11e46a4fcad025b4efebfacf11d97"} Feb 28 13:37:25 crc kubenswrapper[4897]: I0228 13:37:25.211510 4897 generic.go:334] "Generic (PLEG): container finished" podID="9e9f9d43-6498-42ee-a72c-e88395991277" containerID="432bfebc64b1798bae6ab76386d8d01deaa11e46a4fcad025b4efebfacf11d97" exitCode=0 Feb 28 13:37:25 crc kubenswrapper[4897]: I0228 13:37:25.211648 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5h5dn" event={"ID":"9e9f9d43-6498-42ee-a72c-e88395991277","Type":"ContainerStarted","Data":"b4bfa8794867b1c6e118f1a8d44e6e4f5ca6a2c414b686f000e47563ef339565"} Feb 28 13:37:25 crc kubenswrapper[4897]: I0228 13:37:25.221190 4897 generic.go:334] "Generic (PLEG): container finished" podID="37717215-0e60-4829-8c51-ea7d6efd985d" containerID="56c1675f0c4a6d9defb4014225da8424a6ebe483c3a906ece2fac996a4dc08e7" exitCode=0 Feb 28 13:37:25 crc kubenswrapper[4897]: I0228 13:37:25.221226 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jsdwb-config-tslfz" event={"ID":"37717215-0e60-4829-8c51-ea7d6efd985d","Type":"ContainerDied","Data":"56c1675f0c4a6d9defb4014225da8424a6ebe483c3a906ece2fac996a4dc08e7"} Feb 28 13:37:25 crc kubenswrapper[4897]: I0228 13:37:25.221246 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jsdwb-config-tslfz" event={"ID":"37717215-0e60-4829-8c51-ea7d6efd985d","Type":"ContainerStarted","Data":"a7bbd966e1c8498487d0e7e55e46fd3dfd3b96ec611aa40974333521663861e9"} Feb 28 13:37:25 crc kubenswrapper[4897]: I0228 13:37:25.237730 
4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-tklpd" podStartSLOduration=2.434722234 podStartE2EDuration="14.237707333s" podCreationTimestamp="2026-02-28 13:37:11 +0000 UTC" firstStartedPulling="2026-02-28 13:37:12.132802232 +0000 UTC m=+1246.375122879" lastFinishedPulling="2026-02-28 13:37:23.935787321 +0000 UTC m=+1258.178107978" observedRunningTime="2026-02-28 13:37:25.226953297 +0000 UTC m=+1259.469274034" watchObservedRunningTime="2026-02-28 13:37:25.237707333 +0000 UTC m=+1259.480028010" Feb 28 13:37:26 crc kubenswrapper[4897]: E0228 13:37:26.470678 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538096-ws9qt" podUID="6e94c0b2-21a6-496c-8188-dfcaf0d66b2b" Feb 28 13:37:26 crc kubenswrapper[4897]: I0228 13:37:26.478835 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b08a755-562e-41ee-9591-eb9cb3fcb3c2" path="/var/lib/kubelet/pods/8b08a755-562e-41ee-9591-eb9cb3fcb3c2/volumes" Feb 28 13:37:26 crc kubenswrapper[4897]: I0228 13:37:26.527734 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-jsdwb" Feb 28 13:37:26 crc kubenswrapper[4897]: I0228 13:37:26.642889 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jsdwb-config-tslfz" Feb 28 13:37:26 crc kubenswrapper[4897]: I0228 13:37:26.653934 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-5h5dn" Feb 28 13:37:26 crc kubenswrapper[4897]: I0228 13:37:26.735188 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/37717215-0e60-4829-8c51-ea7d6efd985d-var-run\") pod \"37717215-0e60-4829-8c51-ea7d6efd985d\" (UID: \"37717215-0e60-4829-8c51-ea7d6efd985d\") " Feb 28 13:37:26 crc kubenswrapper[4897]: I0228 13:37:26.735272 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37717215-0e60-4829-8c51-ea7d6efd985d-var-run" (OuterVolumeSpecName: "var-run") pod "37717215-0e60-4829-8c51-ea7d6efd985d" (UID: "37717215-0e60-4829-8c51-ea7d6efd985d"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 13:37:26 crc kubenswrapper[4897]: I0228 13:37:26.735339 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37717215-0e60-4829-8c51-ea7d6efd985d-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "37717215-0e60-4829-8c51-ea7d6efd985d" (UID: "37717215-0e60-4829-8c51-ea7d6efd985d"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 13:37:26 crc kubenswrapper[4897]: I0228 13:37:26.735291 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/37717215-0e60-4829-8c51-ea7d6efd985d-var-log-ovn\") pod \"37717215-0e60-4829-8c51-ea7d6efd985d\" (UID: \"37717215-0e60-4829-8c51-ea7d6efd985d\") " Feb 28 13:37:26 crc kubenswrapper[4897]: I0228 13:37:26.735411 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfr76\" (UniqueName: \"kubernetes.io/projected/37717215-0e60-4829-8c51-ea7d6efd985d-kube-api-access-xfr76\") pod \"37717215-0e60-4829-8c51-ea7d6efd985d\" (UID: \"37717215-0e60-4829-8c51-ea7d6efd985d\") " Feb 28 13:37:26 crc kubenswrapper[4897]: I0228 13:37:26.735527 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/37717215-0e60-4829-8c51-ea7d6efd985d-additional-scripts\") pod \"37717215-0e60-4829-8c51-ea7d6efd985d\" (UID: \"37717215-0e60-4829-8c51-ea7d6efd985d\") " Feb 28 13:37:26 crc kubenswrapper[4897]: I0228 13:37:26.735572 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/37717215-0e60-4829-8c51-ea7d6efd985d-var-run-ovn\") pod \"37717215-0e60-4829-8c51-ea7d6efd985d\" (UID: \"37717215-0e60-4829-8c51-ea7d6efd985d\") " Feb 28 13:37:26 crc kubenswrapper[4897]: I0228 13:37:26.735608 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e9f9d43-6498-42ee-a72c-e88395991277-operator-scripts\") pod \"9e9f9d43-6498-42ee-a72c-e88395991277\" (UID: \"9e9f9d43-6498-42ee-a72c-e88395991277\") " Feb 28 13:37:26 crc kubenswrapper[4897]: I0228 13:37:26.735632 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-f6d74\" (UniqueName: \"kubernetes.io/projected/9e9f9d43-6498-42ee-a72c-e88395991277-kube-api-access-f6d74\") pod \"9e9f9d43-6498-42ee-a72c-e88395991277\" (UID: \"9e9f9d43-6498-42ee-a72c-e88395991277\") " Feb 28 13:37:26 crc kubenswrapper[4897]: I0228 13:37:26.735641 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37717215-0e60-4829-8c51-ea7d6efd985d-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "37717215-0e60-4829-8c51-ea7d6efd985d" (UID: "37717215-0e60-4829-8c51-ea7d6efd985d"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 13:37:26 crc kubenswrapper[4897]: I0228 13:37:26.735656 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/37717215-0e60-4829-8c51-ea7d6efd985d-scripts\") pod \"37717215-0e60-4829-8c51-ea7d6efd985d\" (UID: \"37717215-0e60-4829-8c51-ea7d6efd985d\") " Feb 28 13:37:26 crc kubenswrapper[4897]: I0228 13:37:26.736013 4897 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/37717215-0e60-4829-8c51-ea7d6efd985d-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:26 crc kubenswrapper[4897]: I0228 13:37:26.736037 4897 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/37717215-0e60-4829-8c51-ea7d6efd985d-var-run\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:26 crc kubenswrapper[4897]: I0228 13:37:26.736048 4897 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/37717215-0e60-4829-8c51-ea7d6efd985d-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:26 crc kubenswrapper[4897]: I0228 13:37:26.736148 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37717215-0e60-4829-8c51-ea7d6efd985d-additional-scripts" 
(OuterVolumeSpecName: "additional-scripts") pod "37717215-0e60-4829-8c51-ea7d6efd985d" (UID: "37717215-0e60-4829-8c51-ea7d6efd985d"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:37:26 crc kubenswrapper[4897]: I0228 13:37:26.736295 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9f9d43-6498-42ee-a72c-e88395991277-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9e9f9d43-6498-42ee-a72c-e88395991277" (UID: "9e9f9d43-6498-42ee-a72c-e88395991277"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:37:26 crc kubenswrapper[4897]: I0228 13:37:26.736750 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37717215-0e60-4829-8c51-ea7d6efd985d-scripts" (OuterVolumeSpecName: "scripts") pod "37717215-0e60-4829-8c51-ea7d6efd985d" (UID: "37717215-0e60-4829-8c51-ea7d6efd985d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:37:26 crc kubenswrapper[4897]: I0228 13:37:26.740375 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37717215-0e60-4829-8c51-ea7d6efd985d-kube-api-access-xfr76" (OuterVolumeSpecName: "kube-api-access-xfr76") pod "37717215-0e60-4829-8c51-ea7d6efd985d" (UID: "37717215-0e60-4829-8c51-ea7d6efd985d"). InnerVolumeSpecName "kube-api-access-xfr76". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:37:26 crc kubenswrapper[4897]: I0228 13:37:26.758094 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9f9d43-6498-42ee-a72c-e88395991277-kube-api-access-f6d74" (OuterVolumeSpecName: "kube-api-access-f6d74") pod "9e9f9d43-6498-42ee-a72c-e88395991277" (UID: "9e9f9d43-6498-42ee-a72c-e88395991277"). InnerVolumeSpecName "kube-api-access-f6d74". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:37:26 crc kubenswrapper[4897]: I0228 13:37:26.830577 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 28 13:37:26 crc kubenswrapper[4897]: I0228 13:37:26.837899 4897 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/37717215-0e60-4829-8c51-ea7d6efd985d-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:26 crc kubenswrapper[4897]: I0228 13:37:26.837930 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e9f9d43-6498-42ee-a72c-e88395991277-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:26 crc kubenswrapper[4897]: I0228 13:37:26.837965 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6d74\" (UniqueName: \"kubernetes.io/projected/9e9f9d43-6498-42ee-a72c-e88395991277-kube-api-access-f6d74\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:26 crc kubenswrapper[4897]: I0228 13:37:26.837979 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/37717215-0e60-4829-8c51-ea7d6efd985d-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:26 crc kubenswrapper[4897]: I0228 13:37:26.837992 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfr76\" (UniqueName: \"kubernetes.io/projected/37717215-0e60-4829-8c51-ea7d6efd985d-kube-api-access-xfr76\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.150597 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.244999 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-5h5dn" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.245101 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5h5dn" event={"ID":"9e9f9d43-6498-42ee-a72c-e88395991277","Type":"ContainerDied","Data":"b4bfa8794867b1c6e118f1a8d44e6e4f5ca6a2c414b686f000e47563ef339565"} Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.245122 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4bfa8794867b1c6e118f1a8d44e6e4f5ca6a2c414b686f000e47563ef339565" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.246917 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jsdwb-config-tslfz" event={"ID":"37717215-0e60-4829-8c51-ea7d6efd985d","Type":"ContainerDied","Data":"a7bbd966e1c8498487d0e7e55e46fd3dfd3b96ec611aa40974333521663861e9"} Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.246965 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7bbd966e1c8498487d0e7e55e46fd3dfd3b96ec611aa40974333521663861e9" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.246935 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-jsdwb-config-tslfz" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.380516 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/notifications-rabbitmq-server-0" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.412091 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-xtnhx"] Feb 28 13:37:27 crc kubenswrapper[4897]: E0228 13:37:27.412441 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e9f9d43-6498-42ee-a72c-e88395991277" containerName="mariadb-account-create-update" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.412457 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e9f9d43-6498-42ee-a72c-e88395991277" containerName="mariadb-account-create-update" Feb 28 13:37:27 crc kubenswrapper[4897]: E0228 13:37:27.412483 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b08a755-562e-41ee-9591-eb9cb3fcb3c2" containerName="init" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.412489 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b08a755-562e-41ee-9591-eb9cb3fcb3c2" containerName="init" Feb 28 13:37:27 crc kubenswrapper[4897]: E0228 13:37:27.412508 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37717215-0e60-4829-8c51-ea7d6efd985d" containerName="ovn-config" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.412515 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="37717215-0e60-4829-8c51-ea7d6efd985d" containerName="ovn-config" Feb 28 13:37:27 crc kubenswrapper[4897]: E0228 13:37:27.412529 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b08a755-562e-41ee-9591-eb9cb3fcb3c2" containerName="dnsmasq-dns" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.412534 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b08a755-562e-41ee-9591-eb9cb3fcb3c2" containerName="dnsmasq-dns" Feb 28 
13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.412726 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e9f9d43-6498-42ee-a72c-e88395991277" containerName="mariadb-account-create-update" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.412754 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="37717215-0e60-4829-8c51-ea7d6efd985d" containerName="ovn-config" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.412766 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b08a755-562e-41ee-9591-eb9cb3fcb3c2" containerName="dnsmasq-dns" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.413435 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-xtnhx" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.430051 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-xtnhx"] Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.506331 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-8ddqn"] Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.507331 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-8ddqn" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.509798 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.510538 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.510721 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-qxw9x" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.515656 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.522188 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-8ddqn"] Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.554679 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-d935-account-create-update-l5hzg"] Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.555235 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/041df086-096c-4dd1-9e4e-d06a2051084c-config-data\") pod \"keystone-db-sync-8ddqn\" (UID: \"041df086-096c-4dd1-9e4e-d06a2051084c\") " pod="openstack/keystone-db-sync-8ddqn" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.555336 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55s57\" (UniqueName: \"kubernetes.io/projected/041df086-096c-4dd1-9e4e-d06a2051084c-kube-api-access-55s57\") pod \"keystone-db-sync-8ddqn\" (UID: \"041df086-096c-4dd1-9e4e-d06a2051084c\") " pod="openstack/keystone-db-sync-8ddqn" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.555416 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/041df086-096c-4dd1-9e4e-d06a2051084c-combined-ca-bundle\") pod \"keystone-db-sync-8ddqn\" (UID: \"041df086-096c-4dd1-9e4e-d06a2051084c\") " pod="openstack/keystone-db-sync-8ddqn" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.555538 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn6x4\" (UniqueName: \"kubernetes.io/projected/f3f6c67d-0efe-493c-9e09-781291a958cd-kube-api-access-sn6x4\") pod \"barbican-db-create-xtnhx\" (UID: \"f3f6c67d-0efe-493c-9e09-781291a958cd\") " pod="openstack/barbican-db-create-xtnhx" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.555580 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3f6c67d-0efe-493c-9e09-781291a958cd-operator-scripts\") pod \"barbican-db-create-xtnhx\" (UID: \"f3f6c67d-0efe-493c-9e09-781291a958cd\") " pod="openstack/barbican-db-create-xtnhx" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.557446 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d935-account-create-update-l5hzg" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.560337 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.565973 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d935-account-create-update-l5hzg"] Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.627475 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-5h967"] Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.628535 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-5h967" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.651130 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-5h967"] Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.658208 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn5nq\" (UniqueName: \"kubernetes.io/projected/979b68d5-45f7-4a2d-aae1-0e93d2de732e-kube-api-access-kn5nq\") pod \"cinder-d935-account-create-update-l5hzg\" (UID: \"979b68d5-45f7-4a2d-aae1-0e93d2de732e\") " pod="openstack/cinder-d935-account-create-update-l5hzg" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.658252 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/041df086-096c-4dd1-9e4e-d06a2051084c-config-data\") pod \"keystone-db-sync-8ddqn\" (UID: \"041df086-096c-4dd1-9e4e-d06a2051084c\") " pod="openstack/keystone-db-sync-8ddqn" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.658281 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55s57\" (UniqueName: \"kubernetes.io/projected/041df086-096c-4dd1-9e4e-d06a2051084c-kube-api-access-55s57\") pod \"keystone-db-sync-8ddqn\" (UID: \"041df086-096c-4dd1-9e4e-d06a2051084c\") " pod="openstack/keystone-db-sync-8ddqn" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.658327 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/041df086-096c-4dd1-9e4e-d06a2051084c-combined-ca-bundle\") pod \"keystone-db-sync-8ddqn\" (UID: \"041df086-096c-4dd1-9e4e-d06a2051084c\") " pod="openstack/keystone-db-sync-8ddqn" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.658350 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/979b68d5-45f7-4a2d-aae1-0e93d2de732e-operator-scripts\") pod \"cinder-d935-account-create-update-l5hzg\" (UID: \"979b68d5-45f7-4a2d-aae1-0e93d2de732e\") " pod="openstack/cinder-d935-account-create-update-l5hzg" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.658396 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn6x4\" (UniqueName: \"kubernetes.io/projected/f3f6c67d-0efe-493c-9e09-781291a958cd-kube-api-access-sn6x4\") pod \"barbican-db-create-xtnhx\" (UID: \"f3f6c67d-0efe-493c-9e09-781291a958cd\") " pod="openstack/barbican-db-create-xtnhx" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.658419 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3f6c67d-0efe-493c-9e09-781291a958cd-operator-scripts\") pod \"barbican-db-create-xtnhx\" (UID: \"f3f6c67d-0efe-493c-9e09-781291a958cd\") " pod="openstack/barbican-db-create-xtnhx" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.659127 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3f6c67d-0efe-493c-9e09-781291a958cd-operator-scripts\") pod \"barbican-db-create-xtnhx\" (UID: \"f3f6c67d-0efe-493c-9e09-781291a958cd\") " pod="openstack/barbican-db-create-xtnhx" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.665030 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/041df086-096c-4dd1-9e4e-d06a2051084c-combined-ca-bundle\") pod \"keystone-db-sync-8ddqn\" (UID: \"041df086-096c-4dd1-9e4e-d06a2051084c\") " pod="openstack/keystone-db-sync-8ddqn" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.672852 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/041df086-096c-4dd1-9e4e-d06a2051084c-config-data\") pod \"keystone-db-sync-8ddqn\" (UID: \"041df086-096c-4dd1-9e4e-d06a2051084c\") " pod="openstack/keystone-db-sync-8ddqn" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.674250 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-3c48-account-create-update-58hww"] Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.675421 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3c48-account-create-update-58hww" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.677630 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.679079 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn6x4\" (UniqueName: \"kubernetes.io/projected/f3f6c67d-0efe-493c-9e09-781291a958cd-kube-api-access-sn6x4\") pod \"barbican-db-create-xtnhx\" (UID: \"f3f6c67d-0efe-493c-9e09-781291a958cd\") " pod="openstack/barbican-db-create-xtnhx" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.693978 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3c48-account-create-update-58hww"] Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.694942 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55s57\" (UniqueName: \"kubernetes.io/projected/041df086-096c-4dd1-9e4e-d06a2051084c-kube-api-access-55s57\") pod \"keystone-db-sync-8ddqn\" (UID: \"041df086-096c-4dd1-9e4e-d06a2051084c\") " pod="openstack/keystone-db-sync-8ddqn" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.729196 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-xtnhx" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.761723 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hq6s\" (UniqueName: \"kubernetes.io/projected/faa6dea0-3e57-4736-9a58-3885f7a30f18-kube-api-access-8hq6s\") pod \"barbican-3c48-account-create-update-58hww\" (UID: \"faa6dea0-3e57-4736-9a58-3885f7a30f18\") " pod="openstack/barbican-3c48-account-create-update-58hww" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.761784 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/176faf0b-5f7d-450b-871e-9d5df2595562-operator-scripts\") pod \"cinder-db-create-5h967\" (UID: \"176faf0b-5f7d-450b-871e-9d5df2595562\") " pod="openstack/cinder-db-create-5h967" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.761829 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn5nq\" (UniqueName: \"kubernetes.io/projected/979b68d5-45f7-4a2d-aae1-0e93d2de732e-kube-api-access-kn5nq\") pod \"cinder-d935-account-create-update-l5hzg\" (UID: \"979b68d5-45f7-4a2d-aae1-0e93d2de732e\") " pod="openstack/cinder-d935-account-create-update-l5hzg" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.761875 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mvgf\" (UniqueName: \"kubernetes.io/projected/176faf0b-5f7d-450b-871e-9d5df2595562-kube-api-access-5mvgf\") pod \"cinder-db-create-5h967\" (UID: \"176faf0b-5f7d-450b-871e-9d5df2595562\") " pod="openstack/cinder-db-create-5h967" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.761893 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/faa6dea0-3e57-4736-9a58-3885f7a30f18-operator-scripts\") pod \"barbican-3c48-account-create-update-58hww\" (UID: \"faa6dea0-3e57-4736-9a58-3885f7a30f18\") " pod="openstack/barbican-3c48-account-create-update-58hww" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.761928 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/979b68d5-45f7-4a2d-aae1-0e93d2de732e-operator-scripts\") pod \"cinder-d935-account-create-update-l5hzg\" (UID: \"979b68d5-45f7-4a2d-aae1-0e93d2de732e\") " pod="openstack/cinder-d935-account-create-update-l5hzg" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.762589 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/979b68d5-45f7-4a2d-aae1-0e93d2de732e-operator-scripts\") pod \"cinder-d935-account-create-update-l5hzg\" (UID: \"979b68d5-45f7-4a2d-aae1-0e93d2de732e\") " pod="openstack/cinder-d935-account-create-update-l5hzg" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.781121 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-jsdwb-config-tslfz"] Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.791943 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn5nq\" (UniqueName: \"kubernetes.io/projected/979b68d5-45f7-4a2d-aae1-0e93d2de732e-kube-api-access-kn5nq\") pod \"cinder-d935-account-create-update-l5hzg\" (UID: \"979b68d5-45f7-4a2d-aae1-0e93d2de732e\") " pod="openstack/cinder-d935-account-create-update-l5hzg" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.800081 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-jsdwb-config-tslfz"] Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.834845 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-8ddqn" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.863801 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hq6s\" (UniqueName: \"kubernetes.io/projected/faa6dea0-3e57-4736-9a58-3885f7a30f18-kube-api-access-8hq6s\") pod \"barbican-3c48-account-create-update-58hww\" (UID: \"faa6dea0-3e57-4736-9a58-3885f7a30f18\") " pod="openstack/barbican-3c48-account-create-update-58hww" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.863869 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/176faf0b-5f7d-450b-871e-9d5df2595562-operator-scripts\") pod \"cinder-db-create-5h967\" (UID: \"176faf0b-5f7d-450b-871e-9d5df2595562\") " pod="openstack/cinder-db-create-5h967" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.863929 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mvgf\" (UniqueName: \"kubernetes.io/projected/176faf0b-5f7d-450b-871e-9d5df2595562-kube-api-access-5mvgf\") pod \"cinder-db-create-5h967\" (UID: \"176faf0b-5f7d-450b-871e-9d5df2595562\") " pod="openstack/cinder-db-create-5h967" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.863960 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/faa6dea0-3e57-4736-9a58-3885f7a30f18-operator-scripts\") pod \"barbican-3c48-account-create-update-58hww\" (UID: \"faa6dea0-3e57-4736-9a58-3885f7a30f18\") " pod="openstack/barbican-3c48-account-create-update-58hww" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.865043 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/176faf0b-5f7d-450b-871e-9d5df2595562-operator-scripts\") pod \"cinder-db-create-5h967\" (UID: 
\"176faf0b-5f7d-450b-871e-9d5df2595562\") " pod="openstack/cinder-db-create-5h967" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.865077 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/faa6dea0-3e57-4736-9a58-3885f7a30f18-operator-scripts\") pod \"barbican-3c48-account-create-update-58hww\" (UID: \"faa6dea0-3e57-4736-9a58-3885f7a30f18\") " pod="openstack/barbican-3c48-account-create-update-58hww" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.872410 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d935-account-create-update-l5hzg" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.881037 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hq6s\" (UniqueName: \"kubernetes.io/projected/faa6dea0-3e57-4736-9a58-3885f7a30f18-kube-api-access-8hq6s\") pod \"barbican-3c48-account-create-update-58hww\" (UID: \"faa6dea0-3e57-4736-9a58-3885f7a30f18\") " pod="openstack/barbican-3c48-account-create-update-58hww" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.885655 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mvgf\" (UniqueName: \"kubernetes.io/projected/176faf0b-5f7d-450b-871e-9d5df2595562-kube-api-access-5mvgf\") pod \"cinder-db-create-5h967\" (UID: \"176faf0b-5f7d-450b-871e-9d5df2595562\") " pod="openstack/cinder-db-create-5h967" Feb 28 13:37:27 crc kubenswrapper[4897]: I0228 13:37:27.950998 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-5h967" Feb 28 13:37:28 crc kubenswrapper[4897]: I0228 13:37:28.011276 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-xtnhx"] Feb 28 13:37:28 crc kubenswrapper[4897]: W0228 13:37:28.015179 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3f6c67d_0efe_493c_9e09_781291a958cd.slice/crio-0ed632d540c3bcf3a5ecb4407a5ec47e5e784a264d9e26ff242959bc88a5e2d8 WatchSource:0}: Error finding container 0ed632d540c3bcf3a5ecb4407a5ec47e5e784a264d9e26ff242959bc88a5e2d8: Status 404 returned error can't find the container with id 0ed632d540c3bcf3a5ecb4407a5ec47e5e784a264d9e26ff242959bc88a5e2d8 Feb 28 13:37:28 crc kubenswrapper[4897]: I0228 13:37:28.144409 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3c48-account-create-update-58hww" Feb 28 13:37:28 crc kubenswrapper[4897]: I0228 13:37:28.274103 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xtnhx" event={"ID":"f3f6c67d-0efe-493c-9e09-781291a958cd","Type":"ContainerStarted","Data":"b613a3e718788c9ba75b9d22df94a16bbd693c7834445b47da1ff794a1acc177"} Feb 28 13:37:28 crc kubenswrapper[4897]: I0228 13:37:28.274142 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xtnhx" event={"ID":"f3f6c67d-0efe-493c-9e09-781291a958cd","Type":"ContainerStarted","Data":"0ed632d540c3bcf3a5ecb4407a5ec47e5e784a264d9e26ff242959bc88a5e2d8"} Feb 28 13:37:28 crc kubenswrapper[4897]: I0228 13:37:28.298778 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-xtnhx" podStartSLOduration=1.298756076 podStartE2EDuration="1.298756076s" podCreationTimestamp="2026-02-28 13:37:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-28 13:37:28.289607986 +0000 UTC m=+1262.531928673" watchObservedRunningTime="2026-02-28 13:37:28.298756076 +0000 UTC m=+1262.541076733" Feb 28 13:37:28 crc kubenswrapper[4897]: I0228 13:37:28.336990 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-8ddqn"] Feb 28 13:37:28 crc kubenswrapper[4897]: I0228 13:37:28.390050 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d935-account-create-update-l5hzg"] Feb 28 13:37:28 crc kubenswrapper[4897]: W0228 13:37:28.399170 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod979b68d5_45f7_4a2d_aae1_0e93d2de732e.slice/crio-0d14f6b093980e2f59085d3dd19ba60a9f76f71a21ba581488539327c0125a79 WatchSource:0}: Error finding container 0d14f6b093980e2f59085d3dd19ba60a9f76f71a21ba581488539327c0125a79: Status 404 returned error can't find the container with id 0d14f6b093980e2f59085d3dd19ba60a9f76f71a21ba581488539327c0125a79 Feb 28 13:37:28 crc kubenswrapper[4897]: I0228 13:37:28.473191 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37717215-0e60-4829-8c51-ea7d6efd985d" path="/var/lib/kubelet/pods/37717215-0e60-4829-8c51-ea7d6efd985d/volumes" Feb 28 13:37:28 crc kubenswrapper[4897]: I0228 13:37:28.476727 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-5h967"] Feb 28 13:37:28 crc kubenswrapper[4897]: W0228 13:37:28.501845 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod176faf0b_5f7d_450b_871e_9d5df2595562.slice/crio-25c3747acf74b402a4628ce8b2525481919928e73ad8fee375c6e95963a71353 WatchSource:0}: Error finding container 25c3747acf74b402a4628ce8b2525481919928e73ad8fee375c6e95963a71353: Status 404 returned error can't find the container with id 25c3747acf74b402a4628ce8b2525481919928e73ad8fee375c6e95963a71353 Feb 28 
13:37:28 crc kubenswrapper[4897]: I0228 13:37:28.673068 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3c48-account-create-update-58hww"] Feb 28 13:37:28 crc kubenswrapper[4897]: W0228 13:37:28.683901 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfaa6dea0_3e57_4736_9a58_3885f7a30f18.slice/crio-098fed69cf3d097261387c913a557b25112eda06d17ef4b512d241348237d52d WatchSource:0}: Error finding container 098fed69cf3d097261387c913a557b25112eda06d17ef4b512d241348237d52d: Status 404 returned error can't find the container with id 098fed69cf3d097261387c913a557b25112eda06d17ef4b512d241348237d52d Feb 28 13:37:29 crc kubenswrapper[4897]: I0228 13:37:29.289850 4897 generic.go:334] "Generic (PLEG): container finished" podID="176faf0b-5f7d-450b-871e-9d5df2595562" containerID="158c1b04885211c76f210e7f30c77ce9e50d95266202b2dedcf44d02ddbea3a7" exitCode=0 Feb 28 13:37:29 crc kubenswrapper[4897]: I0228 13:37:29.290015 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-5h967" event={"ID":"176faf0b-5f7d-450b-871e-9d5df2595562","Type":"ContainerDied","Data":"158c1b04885211c76f210e7f30c77ce9e50d95266202b2dedcf44d02ddbea3a7"} Feb 28 13:37:29 crc kubenswrapper[4897]: I0228 13:37:29.290685 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-5h967" event={"ID":"176faf0b-5f7d-450b-871e-9d5df2595562","Type":"ContainerStarted","Data":"25c3747acf74b402a4628ce8b2525481919928e73ad8fee375c6e95963a71353"} Feb 28 13:37:29 crc kubenswrapper[4897]: I0228 13:37:29.292151 4897 generic.go:334] "Generic (PLEG): container finished" podID="faa6dea0-3e57-4736-9a58-3885f7a30f18" containerID="92d3fd9ca97ff08c5490f83032c9ea110962c3083b40fbe48cca52f134d24e27" exitCode=0 Feb 28 13:37:29 crc kubenswrapper[4897]: I0228 13:37:29.292190 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-3c48-account-create-update-58hww" event={"ID":"faa6dea0-3e57-4736-9a58-3885f7a30f18","Type":"ContainerDied","Data":"92d3fd9ca97ff08c5490f83032c9ea110962c3083b40fbe48cca52f134d24e27"} Feb 28 13:37:29 crc kubenswrapper[4897]: I0228 13:37:29.292205 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3c48-account-create-update-58hww" event={"ID":"faa6dea0-3e57-4736-9a58-3885f7a30f18","Type":"ContainerStarted","Data":"098fed69cf3d097261387c913a557b25112eda06d17ef4b512d241348237d52d"} Feb 28 13:37:29 crc kubenswrapper[4897]: I0228 13:37:29.293111 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8ddqn" event={"ID":"041df086-096c-4dd1-9e4e-d06a2051084c","Type":"ContainerStarted","Data":"da4b8c26fe256ef8ea43539f0aab43127d3ca0ea77b835901fa4dad0329a54f3"} Feb 28 13:37:29 crc kubenswrapper[4897]: I0228 13:37:29.297911 4897 generic.go:334] "Generic (PLEG): container finished" podID="979b68d5-45f7-4a2d-aae1-0e93d2de732e" containerID="47abe7b299bafd64d8090fc91f1637586bf96ff210b53ecb9974158920438cd5" exitCode=0 Feb 28 13:37:29 crc kubenswrapper[4897]: I0228 13:37:29.297969 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d935-account-create-update-l5hzg" event={"ID":"979b68d5-45f7-4a2d-aae1-0e93d2de732e","Type":"ContainerDied","Data":"47abe7b299bafd64d8090fc91f1637586bf96ff210b53ecb9974158920438cd5"} Feb 28 13:37:29 crc kubenswrapper[4897]: I0228 13:37:29.297987 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d935-account-create-update-l5hzg" event={"ID":"979b68d5-45f7-4a2d-aae1-0e93d2de732e","Type":"ContainerStarted","Data":"0d14f6b093980e2f59085d3dd19ba60a9f76f71a21ba581488539327c0125a79"} Feb 28 13:37:29 crc kubenswrapper[4897]: I0228 13:37:29.299847 4897 generic.go:334] "Generic (PLEG): container finished" podID="f3f6c67d-0efe-493c-9e09-781291a958cd" containerID="b613a3e718788c9ba75b9d22df94a16bbd693c7834445b47da1ff794a1acc177" 
exitCode=0 Feb 28 13:37:29 crc kubenswrapper[4897]: I0228 13:37:29.299873 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xtnhx" event={"ID":"f3f6c67d-0efe-493c-9e09-781291a958cd","Type":"ContainerDied","Data":"b613a3e718788c9ba75b9d22df94a16bbd693c7834445b47da1ff794a1acc177"} Feb 28 13:37:29 crc kubenswrapper[4897]: I0228 13:37:29.868201 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-ckz4d"] Feb 28 13:37:29 crc kubenswrapper[4897]: I0228 13:37:29.869466 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-ckz4d" Feb 28 13:37:29 crc kubenswrapper[4897]: I0228 13:37:29.871640 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data" Feb 28 13:37:29 crc kubenswrapper[4897]: I0228 13:37:29.871876 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-6vtlw" Feb 28 13:37:29 crc kubenswrapper[4897]: I0228 13:37:29.878347 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-ckz4d"] Feb 28 13:37:29 crc kubenswrapper[4897]: I0228 13:37:29.911622 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfd3841c-39bf-454c-88de-5156d769cf7e-config-data\") pod \"watcher-db-sync-ckz4d\" (UID: \"bfd3841c-39bf-454c-88de-5156d769cf7e\") " pod="openstack/watcher-db-sync-ckz4d" Feb 28 13:37:29 crc kubenswrapper[4897]: I0228 13:37:29.911898 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bfd3841c-39bf-454c-88de-5156d769cf7e-db-sync-config-data\") pod \"watcher-db-sync-ckz4d\" (UID: \"bfd3841c-39bf-454c-88de-5156d769cf7e\") " pod="openstack/watcher-db-sync-ckz4d" Feb 28 13:37:29 crc kubenswrapper[4897]: I0228 13:37:29.912004 
4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbxsf\" (UniqueName: \"kubernetes.io/projected/bfd3841c-39bf-454c-88de-5156d769cf7e-kube-api-access-cbxsf\") pod \"watcher-db-sync-ckz4d\" (UID: \"bfd3841c-39bf-454c-88de-5156d769cf7e\") " pod="openstack/watcher-db-sync-ckz4d" Feb 28 13:37:29 crc kubenswrapper[4897]: I0228 13:37:29.912104 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfd3841c-39bf-454c-88de-5156d769cf7e-combined-ca-bundle\") pod \"watcher-db-sync-ckz4d\" (UID: \"bfd3841c-39bf-454c-88de-5156d769cf7e\") " pod="openstack/watcher-db-sync-ckz4d" Feb 28 13:37:29 crc kubenswrapper[4897]: I0228 13:37:29.931250 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-gf6hc"] Feb 28 13:37:29 crc kubenswrapper[4897]: I0228 13:37:29.932565 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-gf6hc" Feb 28 13:37:29 crc kubenswrapper[4897]: I0228 13:37:29.939249 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-gf6hc"] Feb 28 13:37:30 crc kubenswrapper[4897]: I0228 13:37:30.013966 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfd3841c-39bf-454c-88de-5156d769cf7e-config-data\") pod \"watcher-db-sync-ckz4d\" (UID: \"bfd3841c-39bf-454c-88de-5156d769cf7e\") " pod="openstack/watcher-db-sync-ckz4d" Feb 28 13:37:30 crc kubenswrapper[4897]: I0228 13:37:30.014024 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bfd3841c-39bf-454c-88de-5156d769cf7e-db-sync-config-data\") pod \"watcher-db-sync-ckz4d\" (UID: \"bfd3841c-39bf-454c-88de-5156d769cf7e\") " pod="openstack/watcher-db-sync-ckz4d" Feb 28 13:37:30 crc kubenswrapper[4897]: I0228 13:37:30.014049 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbxsf\" (UniqueName: \"kubernetes.io/projected/bfd3841c-39bf-454c-88de-5156d769cf7e-kube-api-access-cbxsf\") pod \"watcher-db-sync-ckz4d\" (UID: \"bfd3841c-39bf-454c-88de-5156d769cf7e\") " pod="openstack/watcher-db-sync-ckz4d" Feb 28 13:37:30 crc kubenswrapper[4897]: I0228 13:37:30.014093 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfd3841c-39bf-454c-88de-5156d769cf7e-combined-ca-bundle\") pod \"watcher-db-sync-ckz4d\" (UID: \"bfd3841c-39bf-454c-88de-5156d769cf7e\") " pod="openstack/watcher-db-sync-ckz4d" Feb 28 13:37:30 crc kubenswrapper[4897]: I0228 13:37:30.014138 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/9d8e5b42-44ea-4242-b46b-77efc3fb0826-operator-scripts\") pod \"neutron-db-create-gf6hc\" (UID: \"9d8e5b42-44ea-4242-b46b-77efc3fb0826\") " pod="openstack/neutron-db-create-gf6hc" Feb 28 13:37:30 crc kubenswrapper[4897]: I0228 13:37:30.014158 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqkw9\" (UniqueName: \"kubernetes.io/projected/9d8e5b42-44ea-4242-b46b-77efc3fb0826-kube-api-access-qqkw9\") pod \"neutron-db-create-gf6hc\" (UID: \"9d8e5b42-44ea-4242-b46b-77efc3fb0826\") " pod="openstack/neutron-db-create-gf6hc" Feb 28 13:37:30 crc kubenswrapper[4897]: I0228 13:37:30.023300 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bfd3841c-39bf-454c-88de-5156d769cf7e-db-sync-config-data\") pod \"watcher-db-sync-ckz4d\" (UID: \"bfd3841c-39bf-454c-88de-5156d769cf7e\") " pod="openstack/watcher-db-sync-ckz4d" Feb 28 13:37:30 crc kubenswrapper[4897]: I0228 13:37:30.026029 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfd3841c-39bf-454c-88de-5156d769cf7e-config-data\") pod \"watcher-db-sync-ckz4d\" (UID: \"bfd3841c-39bf-454c-88de-5156d769cf7e\") " pod="openstack/watcher-db-sync-ckz4d" Feb 28 13:37:30 crc kubenswrapper[4897]: I0228 13:37:30.026649 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfd3841c-39bf-454c-88de-5156d769cf7e-combined-ca-bundle\") pod \"watcher-db-sync-ckz4d\" (UID: \"bfd3841c-39bf-454c-88de-5156d769cf7e\") " pod="openstack/watcher-db-sync-ckz4d" Feb 28 13:37:30 crc kubenswrapper[4897]: I0228 13:37:30.032683 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbxsf\" (UniqueName: \"kubernetes.io/projected/bfd3841c-39bf-454c-88de-5156d769cf7e-kube-api-access-cbxsf\") pod 
\"watcher-db-sync-ckz4d\" (UID: \"bfd3841c-39bf-454c-88de-5156d769cf7e\") " pod="openstack/watcher-db-sync-ckz4d" Feb 28 13:37:30 crc kubenswrapper[4897]: I0228 13:37:30.040921 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-aec9-account-create-update-d7fk5"] Feb 28 13:37:30 crc kubenswrapper[4897]: I0228 13:37:30.042217 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-aec9-account-create-update-d7fk5" Feb 28 13:37:30 crc kubenswrapper[4897]: I0228 13:37:30.044705 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 28 13:37:30 crc kubenswrapper[4897]: I0228 13:37:30.048915 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-aec9-account-create-update-d7fk5"] Feb 28 13:37:30 crc kubenswrapper[4897]: I0228 13:37:30.115689 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d8e5b42-44ea-4242-b46b-77efc3fb0826-operator-scripts\") pod \"neutron-db-create-gf6hc\" (UID: \"9d8e5b42-44ea-4242-b46b-77efc3fb0826\") " pod="openstack/neutron-db-create-gf6hc" Feb 28 13:37:30 crc kubenswrapper[4897]: I0228 13:37:30.115751 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqkw9\" (UniqueName: \"kubernetes.io/projected/9d8e5b42-44ea-4242-b46b-77efc3fb0826-kube-api-access-qqkw9\") pod \"neutron-db-create-gf6hc\" (UID: \"9d8e5b42-44ea-4242-b46b-77efc3fb0826\") " pod="openstack/neutron-db-create-gf6hc" Feb 28 13:37:30 crc kubenswrapper[4897]: I0228 13:37:30.115872 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1f06e40-0ffb-4bdc-98ea-e6d44c5d8e42-operator-scripts\") pod \"neutron-aec9-account-create-update-d7fk5\" (UID: \"e1f06e40-0ffb-4bdc-98ea-e6d44c5d8e42\") " 
pod="openstack/neutron-aec9-account-create-update-d7fk5" Feb 28 13:37:30 crc kubenswrapper[4897]: I0228 13:37:30.115915 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99jft\" (UniqueName: \"kubernetes.io/projected/e1f06e40-0ffb-4bdc-98ea-e6d44c5d8e42-kube-api-access-99jft\") pod \"neutron-aec9-account-create-update-d7fk5\" (UID: \"e1f06e40-0ffb-4bdc-98ea-e6d44c5d8e42\") " pod="openstack/neutron-aec9-account-create-update-d7fk5" Feb 28 13:37:30 crc kubenswrapper[4897]: I0228 13:37:30.116441 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d8e5b42-44ea-4242-b46b-77efc3fb0826-operator-scripts\") pod \"neutron-db-create-gf6hc\" (UID: \"9d8e5b42-44ea-4242-b46b-77efc3fb0826\") " pod="openstack/neutron-db-create-gf6hc" Feb 28 13:37:30 crc kubenswrapper[4897]: I0228 13:37:30.137194 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqkw9\" (UniqueName: \"kubernetes.io/projected/9d8e5b42-44ea-4242-b46b-77efc3fb0826-kube-api-access-qqkw9\") pod \"neutron-db-create-gf6hc\" (UID: \"9d8e5b42-44ea-4242-b46b-77efc3fb0826\") " pod="openstack/neutron-db-create-gf6hc" Feb 28 13:37:30 crc kubenswrapper[4897]: I0228 13:37:30.188218 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-ckz4d" Feb 28 13:37:30 crc kubenswrapper[4897]: I0228 13:37:30.218665 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99jft\" (UniqueName: \"kubernetes.io/projected/e1f06e40-0ffb-4bdc-98ea-e6d44c5d8e42-kube-api-access-99jft\") pod \"neutron-aec9-account-create-update-d7fk5\" (UID: \"e1f06e40-0ffb-4bdc-98ea-e6d44c5d8e42\") " pod="openstack/neutron-aec9-account-create-update-d7fk5" Feb 28 13:37:30 crc kubenswrapper[4897]: I0228 13:37:30.218814 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1f06e40-0ffb-4bdc-98ea-e6d44c5d8e42-operator-scripts\") pod \"neutron-aec9-account-create-update-d7fk5\" (UID: \"e1f06e40-0ffb-4bdc-98ea-e6d44c5d8e42\") " pod="openstack/neutron-aec9-account-create-update-d7fk5" Feb 28 13:37:30 crc kubenswrapper[4897]: I0228 13:37:30.219452 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1f06e40-0ffb-4bdc-98ea-e6d44c5d8e42-operator-scripts\") pod \"neutron-aec9-account-create-update-d7fk5\" (UID: \"e1f06e40-0ffb-4bdc-98ea-e6d44c5d8e42\") " pod="openstack/neutron-aec9-account-create-update-d7fk5" Feb 28 13:37:30 crc kubenswrapper[4897]: I0228 13:37:30.234739 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99jft\" (UniqueName: \"kubernetes.io/projected/e1f06e40-0ffb-4bdc-98ea-e6d44c5d8e42-kube-api-access-99jft\") pod \"neutron-aec9-account-create-update-d7fk5\" (UID: \"e1f06e40-0ffb-4bdc-98ea-e6d44c5d8e42\") " pod="openstack/neutron-aec9-account-create-update-d7fk5" Feb 28 13:37:30 crc kubenswrapper[4897]: I0228 13:37:30.253800 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-gf6hc" Feb 28 13:37:30 crc kubenswrapper[4897]: I0228 13:37:30.406436 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-aec9-account-create-update-d7fk5" Feb 28 13:37:32 crc kubenswrapper[4897]: I0228 13:37:32.337129 4897 generic.go:334] "Generic (PLEG): container finished" podID="d754bb18-6ebe-445e-8826-53d247030dc7" containerID="054dce8b30edb292831f98a5cfee5d3dffb5788c55d5f1d717ac7e3b40882bdc" exitCode=0 Feb 28 13:37:32 crc kubenswrapper[4897]: I0228 13:37:32.337486 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tklpd" event={"ID":"d754bb18-6ebe-445e-8826-53d247030dc7","Type":"ContainerDied","Data":"054dce8b30edb292831f98a5cfee5d3dffb5788c55d5f1d717ac7e3b40882bdc"} Feb 28 13:37:32 crc kubenswrapper[4897]: E0228 13:37:32.910627 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/prometheus-rhel9@sha256=b3c4bd9e6b46c2065b376c6143facb68f7d37997214f5cad5762b2f5e4eca201/signature-4: status 500 (Internal Server Error)" image="registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741" Feb 28 13:37:32 crc kubenswrapper[4897]: E0228 13:37:32.911192 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus,Image:registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741,Command:[],Args:[--config.file=/etc/prometheus/config_out/prometheus.env.yaml --web.enable-lifecycle --web.enable-remote-write-receiver --web.route-prefix=/ --storage.tsdb.retention.time=24h --storage.tsdb.path=/prometheus 
--web.config.file=/etc/prometheus/web_config/web-config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:web,HostPort:0,ContainerPort:9090,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-out,ReadOnly:true,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-assets,ReadOnly:true,MountPath:/etc/prometheus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-db,ReadOnly:false,MountPath:/prometheus,SubPath:prometheus-db,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-0,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-1,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-2,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:web-config,ReadOnly:true,MountPath:/etc/prometheus/web_config/web-config.yaml,SubPath:web-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fr856,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/healthy,Port:{1 0 
web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:15,SuccessThreshold:1,FailureThreshold:60,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/prometheus-rhel9@sha256=b3c4bd9e6b46c2065b376c6143facb68f7d37997214f5cad5762b2f5e4eca201/signature-4: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.249842 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-xtnhx" Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.250692 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-5h967" Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.258299 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3c48-account-create-update-58hww" Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.358258 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-5h967" Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.358283 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-5h967" event={"ID":"176faf0b-5f7d-450b-871e-9d5df2595562","Type":"ContainerDied","Data":"25c3747acf74b402a4628ce8b2525481919928e73ad8fee375c6e95963a71353"} Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.358433 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25c3747acf74b402a4628ce8b2525481919928e73ad8fee375c6e95963a71353" Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.360301 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3c48-account-create-update-58hww" event={"ID":"faa6dea0-3e57-4736-9a58-3885f7a30f18","Type":"ContainerDied","Data":"098fed69cf3d097261387c913a557b25112eda06d17ef4b512d241348237d52d"} Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.360362 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="098fed69cf3d097261387c913a557b25112eda06d17ef4b512d241348237d52d" Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.360444 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-3c48-account-create-update-58hww" Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.362562 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d935-account-create-update-l5hzg" event={"ID":"979b68d5-45f7-4a2d-aae1-0e93d2de732e","Type":"ContainerDied","Data":"0d14f6b093980e2f59085d3dd19ba60a9f76f71a21ba581488539327c0125a79"} Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.362620 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d14f6b093980e2f59085d3dd19ba60a9f76f71a21ba581488539327c0125a79" Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.368252 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-xtnhx" Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.368265 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xtnhx" event={"ID":"f3f6c67d-0efe-493c-9e09-781291a958cd","Type":"ContainerDied","Data":"0ed632d540c3bcf3a5ecb4407a5ec47e5e784a264d9e26ff242959bc88a5e2d8"} Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.368470 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ed632d540c3bcf3a5ecb4407a5ec47e5e784a264d9e26ff242959bc88a5e2d8" Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.381056 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/176faf0b-5f7d-450b-871e-9d5df2595562-operator-scripts\") pod \"176faf0b-5f7d-450b-871e-9d5df2595562\" (UID: \"176faf0b-5f7d-450b-871e-9d5df2595562\") " Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.381177 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sn6x4\" (UniqueName: \"kubernetes.io/projected/f3f6c67d-0efe-493c-9e09-781291a958cd-kube-api-access-sn6x4\") pod 
\"f3f6c67d-0efe-493c-9e09-781291a958cd\" (UID: \"f3f6c67d-0efe-493c-9e09-781291a958cd\") " Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.381238 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mvgf\" (UniqueName: \"kubernetes.io/projected/176faf0b-5f7d-450b-871e-9d5df2595562-kube-api-access-5mvgf\") pod \"176faf0b-5f7d-450b-871e-9d5df2595562\" (UID: \"176faf0b-5f7d-450b-871e-9d5df2595562\") " Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.381260 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hq6s\" (UniqueName: \"kubernetes.io/projected/faa6dea0-3e57-4736-9a58-3885f7a30f18-kube-api-access-8hq6s\") pod \"faa6dea0-3e57-4736-9a58-3885f7a30f18\" (UID: \"faa6dea0-3e57-4736-9a58-3885f7a30f18\") " Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.381297 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3f6c67d-0efe-493c-9e09-781291a958cd-operator-scripts\") pod \"f3f6c67d-0efe-493c-9e09-781291a958cd\" (UID: \"f3f6c67d-0efe-493c-9e09-781291a958cd\") " Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.381392 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/faa6dea0-3e57-4736-9a58-3885f7a30f18-operator-scripts\") pod \"faa6dea0-3e57-4736-9a58-3885f7a30f18\" (UID: \"faa6dea0-3e57-4736-9a58-3885f7a30f18\") " Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.381547 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/176faf0b-5f7d-450b-871e-9d5df2595562-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "176faf0b-5f7d-450b-871e-9d5df2595562" (UID: "176faf0b-5f7d-450b-871e-9d5df2595562"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.382227 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/faa6dea0-3e57-4736-9a58-3885f7a30f18-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "faa6dea0-3e57-4736-9a58-3885f7a30f18" (UID: "faa6dea0-3e57-4736-9a58-3885f7a30f18"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.383164 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3f6c67d-0efe-493c-9e09-781291a958cd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f3f6c67d-0efe-493c-9e09-781291a958cd" (UID: "f3f6c67d-0efe-493c-9e09-781291a958cd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.386460 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faa6dea0-3e57-4736-9a58-3885f7a30f18-kube-api-access-8hq6s" (OuterVolumeSpecName: "kube-api-access-8hq6s") pod "faa6dea0-3e57-4736-9a58-3885f7a30f18" (UID: "faa6dea0-3e57-4736-9a58-3885f7a30f18"). InnerVolumeSpecName "kube-api-access-8hq6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.388283 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/176faf0b-5f7d-450b-871e-9d5df2595562-kube-api-access-5mvgf" (OuterVolumeSpecName: "kube-api-access-5mvgf") pod "176faf0b-5f7d-450b-871e-9d5df2595562" (UID: "176faf0b-5f7d-450b-871e-9d5df2595562"). InnerVolumeSpecName "kube-api-access-5mvgf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.389290 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3f6c67d-0efe-493c-9e09-781291a958cd-kube-api-access-sn6x4" (OuterVolumeSpecName: "kube-api-access-sn6x4") pod "f3f6c67d-0efe-493c-9e09-781291a958cd" (UID: "f3f6c67d-0efe-493c-9e09-781291a958cd"). InnerVolumeSpecName "kube-api-access-sn6x4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.394212 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d935-account-create-update-l5hzg" Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.482920 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kn5nq\" (UniqueName: \"kubernetes.io/projected/979b68d5-45f7-4a2d-aae1-0e93d2de732e-kube-api-access-kn5nq\") pod \"979b68d5-45f7-4a2d-aae1-0e93d2de732e\" (UID: \"979b68d5-45f7-4a2d-aae1-0e93d2de732e\") " Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.483140 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/979b68d5-45f7-4a2d-aae1-0e93d2de732e-operator-scripts\") pod \"979b68d5-45f7-4a2d-aae1-0e93d2de732e\" (UID: \"979b68d5-45f7-4a2d-aae1-0e93d2de732e\") " Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.483462 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/176faf0b-5f7d-450b-871e-9d5df2595562-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.483475 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sn6x4\" (UniqueName: \"kubernetes.io/projected/f3f6c67d-0efe-493c-9e09-781291a958cd-kube-api-access-sn6x4\") on node \"crc\" DevicePath \"\"" 
Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.483486 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mvgf\" (UniqueName: \"kubernetes.io/projected/176faf0b-5f7d-450b-871e-9d5df2595562-kube-api-access-5mvgf\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.483494 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hq6s\" (UniqueName: \"kubernetes.io/projected/faa6dea0-3e57-4736-9a58-3885f7a30f18-kube-api-access-8hq6s\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.483502 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3f6c67d-0efe-493c-9e09-781291a958cd-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.483510 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/faa6dea0-3e57-4736-9a58-3885f7a30f18-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.483574 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/979b68d5-45f7-4a2d-aae1-0e93d2de732e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "979b68d5-45f7-4a2d-aae1-0e93d2de732e" (UID: "979b68d5-45f7-4a2d-aae1-0e93d2de732e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.496332 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/979b68d5-45f7-4a2d-aae1-0e93d2de732e-kube-api-access-kn5nq" (OuterVolumeSpecName: "kube-api-access-kn5nq") pod "979b68d5-45f7-4a2d-aae1-0e93d2de732e" (UID: "979b68d5-45f7-4a2d-aae1-0e93d2de732e"). InnerVolumeSpecName "kube-api-access-kn5nq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.528343 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-gf6hc"] Feb 28 13:37:33 crc kubenswrapper[4897]: W0228 13:37:33.531777 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d8e5b42_44ea_4242_b46b_77efc3fb0826.slice/crio-d7d57b6f799f4e3d8a28e8dcf43435f5c3d6184fb71b54dea0347943bfd55f2e WatchSource:0}: Error finding container d7d57b6f799f4e3d8a28e8dcf43435f5c3d6184fb71b54dea0347943bfd55f2e: Status 404 returned error can't find the container with id d7d57b6f799f4e3d8a28e8dcf43435f5c3d6184fb71b54dea0347943bfd55f2e Feb 28 13:37:33 crc kubenswrapper[4897]: W0228 13:37:33.535930 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfd3841c_39bf_454c_88de_5156d769cf7e.slice/crio-ec80490a461b4cfa1d86631cc7662e088becc6c1f9d594d236a6c40382362ed2 WatchSource:0}: Error finding container ec80490a461b4cfa1d86631cc7662e088becc6c1f9d594d236a6c40382362ed2: Status 404 returned error can't find the container with id ec80490a461b4cfa1d86631cc7662e088becc6c1f9d594d236a6c40382362ed2 Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.537804 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-ckz4d"] Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.585149 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/979b68d5-45f7-4a2d-aae1-0e93d2de732e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.585193 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kn5nq\" (UniqueName: \"kubernetes.io/projected/979b68d5-45f7-4a2d-aae1-0e93d2de732e-kube-api-access-kn5nq\") on node \"crc\" 
DevicePath \"\"" Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.656781 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-aec9-account-create-update-d7fk5"] Feb 28 13:37:33 crc kubenswrapper[4897]: W0228 13:37:33.685378 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1f06e40_0ffb_4bdc_98ea_e6d44c5d8e42.slice/crio-17eb2ce1718bc7efb6c5fd866ca79fe93d47b0918e577b8a6b9ce8820ce9c7e7 WatchSource:0}: Error finding container 17eb2ce1718bc7efb6c5fd866ca79fe93d47b0918e577b8a6b9ce8820ce9c7e7: Status 404 returned error can't find the container with id 17eb2ce1718bc7efb6c5fd866ca79fe93d47b0918e577b8a6b9ce8820ce9c7e7 Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.747379 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-tklpd" Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.787157 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d754bb18-6ebe-445e-8826-53d247030dc7-db-sync-config-data\") pod \"d754bb18-6ebe-445e-8826-53d247030dc7\" (UID: \"d754bb18-6ebe-445e-8826-53d247030dc7\") " Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.787265 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d754bb18-6ebe-445e-8826-53d247030dc7-combined-ca-bundle\") pod \"d754bb18-6ebe-445e-8826-53d247030dc7\" (UID: \"d754bb18-6ebe-445e-8826-53d247030dc7\") " Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.787332 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpddd\" (UniqueName: \"kubernetes.io/projected/d754bb18-6ebe-445e-8826-53d247030dc7-kube-api-access-cpddd\") pod \"d754bb18-6ebe-445e-8826-53d247030dc7\" (UID: 
\"d754bb18-6ebe-445e-8826-53d247030dc7\") " Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.787354 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d754bb18-6ebe-445e-8826-53d247030dc7-config-data\") pod \"d754bb18-6ebe-445e-8826-53d247030dc7\" (UID: \"d754bb18-6ebe-445e-8826-53d247030dc7\") " Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.793244 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d754bb18-6ebe-445e-8826-53d247030dc7-kube-api-access-cpddd" (OuterVolumeSpecName: "kube-api-access-cpddd") pod "d754bb18-6ebe-445e-8826-53d247030dc7" (UID: "d754bb18-6ebe-445e-8826-53d247030dc7"). InnerVolumeSpecName "kube-api-access-cpddd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.796838 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d754bb18-6ebe-445e-8826-53d247030dc7-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d754bb18-6ebe-445e-8826-53d247030dc7" (UID: "d754bb18-6ebe-445e-8826-53d247030dc7"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.829957 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d754bb18-6ebe-445e-8826-53d247030dc7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d754bb18-6ebe-445e-8826-53d247030dc7" (UID: "d754bb18-6ebe-445e-8826-53d247030dc7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.849372 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d754bb18-6ebe-445e-8826-53d247030dc7-config-data" (OuterVolumeSpecName: "config-data") pod "d754bb18-6ebe-445e-8826-53d247030dc7" (UID: "d754bb18-6ebe-445e-8826-53d247030dc7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.889015 4897 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d754bb18-6ebe-445e-8826-53d247030dc7-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.889048 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d754bb18-6ebe-445e-8826-53d247030dc7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.889058 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpddd\" (UniqueName: \"kubernetes.io/projected/d754bb18-6ebe-445e-8826-53d247030dc7-kube-api-access-cpddd\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:33 crc kubenswrapper[4897]: I0228 13:37:33.889067 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d754bb18-6ebe-445e-8826-53d247030dc7-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.385829 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-tklpd" Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.386748 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tklpd" event={"ID":"d754bb18-6ebe-445e-8826-53d247030dc7","Type":"ContainerDied","Data":"0bf4cf72d83c5f1ba542a9323456fde2bf0a760dd6bd8ceb7e91ce1e45ce31a8"} Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.386802 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bf4cf72d83c5f1ba542a9323456fde2bf0a760dd6bd8ceb7e91ce1e45ce31a8" Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.391418 4897 generic.go:334] "Generic (PLEG): container finished" podID="e1f06e40-0ffb-4bdc-98ea-e6d44c5d8e42" containerID="2654289e63af92b0a9404fbcb959747e21fb74bd21d9959dc9b4fd19a52623aa" exitCode=0 Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.391471 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-aec9-account-create-update-d7fk5" event={"ID":"e1f06e40-0ffb-4bdc-98ea-e6d44c5d8e42","Type":"ContainerDied","Data":"2654289e63af92b0a9404fbcb959747e21fb74bd21d9959dc9b4fd19a52623aa"} Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.391492 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-aec9-account-create-update-d7fk5" event={"ID":"e1f06e40-0ffb-4bdc-98ea-e6d44c5d8e42","Type":"ContainerStarted","Data":"17eb2ce1718bc7efb6c5fd866ca79fe93d47b0918e577b8a6b9ce8820ce9c7e7"} Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.395539 4897 generic.go:334] "Generic (PLEG): container finished" podID="9d8e5b42-44ea-4242-b46b-77efc3fb0826" containerID="dd1a5b5aae164239eb2bdc0c49cfc377eee131fb7e96723e40a1d5355b820901" exitCode=0 Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.395614 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-gf6hc" 
event={"ID":"9d8e5b42-44ea-4242-b46b-77efc3fb0826","Type":"ContainerDied","Data":"dd1a5b5aae164239eb2bdc0c49cfc377eee131fb7e96723e40a1d5355b820901"} Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.395644 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-gf6hc" event={"ID":"9d8e5b42-44ea-4242-b46b-77efc3fb0826","Type":"ContainerStarted","Data":"d7d57b6f799f4e3d8a28e8dcf43435f5c3d6184fb71b54dea0347943bfd55f2e"} Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.399354 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8ddqn" event={"ID":"041df086-096c-4dd1-9e4e-d06a2051084c","Type":"ContainerStarted","Data":"3ddbbb30e70a2991fd638ecc8a7d5d1ef9c51891a0633f0e5eacc86d16ab1d74"} Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.401885 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d935-account-create-update-l5hzg" Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.401902 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-ckz4d" event={"ID":"bfd3841c-39bf-454c-88de-5156d769cf7e","Type":"ContainerStarted","Data":"ec80490a461b4cfa1d86631cc7662e088becc6c1f9d594d236a6c40382362ed2"} Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.458754 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-8ddqn" podStartSLOduration=2.712164816 podStartE2EDuration="7.458740201s" podCreationTimestamp="2026-02-28 13:37:27 +0000 UTC" firstStartedPulling="2026-02-28 13:37:28.362540317 +0000 UTC m=+1262.604860974" lastFinishedPulling="2026-02-28 13:37:33.109115702 +0000 UTC m=+1267.351436359" observedRunningTime="2026-02-28 13:37:34.451991919 +0000 UTC m=+1268.694312576" watchObservedRunningTime="2026-02-28 13:37:34.458740201 +0000 UTC m=+1268.701060858" Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.674073 4897 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/dnsmasq-dns-674ddfcdf9-mbp22"] Feb 28 13:37:34 crc kubenswrapper[4897]: E0228 13:37:34.674668 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="176faf0b-5f7d-450b-871e-9d5df2595562" containerName="mariadb-database-create" Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.674686 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="176faf0b-5f7d-450b-871e-9d5df2595562" containerName="mariadb-database-create" Feb 28 13:37:34 crc kubenswrapper[4897]: E0228 13:37:34.674706 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="979b68d5-45f7-4a2d-aae1-0e93d2de732e" containerName="mariadb-account-create-update" Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.674713 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="979b68d5-45f7-4a2d-aae1-0e93d2de732e" containerName="mariadb-account-create-update" Feb 28 13:37:34 crc kubenswrapper[4897]: E0228 13:37:34.674727 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3f6c67d-0efe-493c-9e09-781291a958cd" containerName="mariadb-database-create" Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.674736 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3f6c67d-0efe-493c-9e09-781291a958cd" containerName="mariadb-database-create" Feb 28 13:37:34 crc kubenswrapper[4897]: E0228 13:37:34.674752 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d754bb18-6ebe-445e-8826-53d247030dc7" containerName="glance-db-sync" Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.674759 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="d754bb18-6ebe-445e-8826-53d247030dc7" containerName="glance-db-sync" Feb 28 13:37:34 crc kubenswrapper[4897]: E0228 13:37:34.674767 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa6dea0-3e57-4736-9a58-3885f7a30f18" containerName="mariadb-account-create-update" Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.674773 4897 
state_mem.go:107] "Deleted CPUSet assignment" podUID="faa6dea0-3e57-4736-9a58-3885f7a30f18" containerName="mariadb-account-create-update" Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.674953 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3f6c67d-0efe-493c-9e09-781291a958cd" containerName="mariadb-database-create" Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.674983 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa6dea0-3e57-4736-9a58-3885f7a30f18" containerName="mariadb-account-create-update" Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.675003 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="979b68d5-45f7-4a2d-aae1-0e93d2de732e" containerName="mariadb-account-create-update" Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.675016 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="d754bb18-6ebe-445e-8826-53d247030dc7" containerName="glance-db-sync" Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.675025 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="176faf0b-5f7d-450b-871e-9d5df2595562" containerName="mariadb-database-create" Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.679090 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-674ddfcdf9-mbp22" Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.701087 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-674ddfcdf9-mbp22"] Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.816746 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa-config\") pod \"dnsmasq-dns-674ddfcdf9-mbp22\" (UID: \"14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa\") " pod="openstack/dnsmasq-dns-674ddfcdf9-mbp22" Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.816866 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa-ovsdbserver-sb\") pod \"dnsmasq-dns-674ddfcdf9-mbp22\" (UID: \"14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa\") " pod="openstack/dnsmasq-dns-674ddfcdf9-mbp22" Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.816893 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa-dns-svc\") pod \"dnsmasq-dns-674ddfcdf9-mbp22\" (UID: \"14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa\") " pod="openstack/dnsmasq-dns-674ddfcdf9-mbp22" Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.816909 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa-ovsdbserver-nb\") pod \"dnsmasq-dns-674ddfcdf9-mbp22\" (UID: \"14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa\") " pod="openstack/dnsmasq-dns-674ddfcdf9-mbp22" Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.817006 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-pl9zd\" (UniqueName: \"kubernetes.io/projected/14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa-kube-api-access-pl9zd\") pod \"dnsmasq-dns-674ddfcdf9-mbp22\" (UID: \"14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa\") " pod="openstack/dnsmasq-dns-674ddfcdf9-mbp22" Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.817044 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa-dns-swift-storage-0\") pod \"dnsmasq-dns-674ddfcdf9-mbp22\" (UID: \"14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa\") " pod="openstack/dnsmasq-dns-674ddfcdf9-mbp22" Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.918454 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl9zd\" (UniqueName: \"kubernetes.io/projected/14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa-kube-api-access-pl9zd\") pod \"dnsmasq-dns-674ddfcdf9-mbp22\" (UID: \"14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa\") " pod="openstack/dnsmasq-dns-674ddfcdf9-mbp22" Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.918530 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa-dns-swift-storage-0\") pod \"dnsmasq-dns-674ddfcdf9-mbp22\" (UID: \"14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa\") " pod="openstack/dnsmasq-dns-674ddfcdf9-mbp22" Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.918645 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa-config\") pod \"dnsmasq-dns-674ddfcdf9-mbp22\" (UID: \"14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa\") " pod="openstack/dnsmasq-dns-674ddfcdf9-mbp22" Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.918716 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa-ovsdbserver-sb\") pod \"dnsmasq-dns-674ddfcdf9-mbp22\" (UID: \"14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa\") " pod="openstack/dnsmasq-dns-674ddfcdf9-mbp22" Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.918761 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa-dns-svc\") pod \"dnsmasq-dns-674ddfcdf9-mbp22\" (UID: \"14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa\") " pod="openstack/dnsmasq-dns-674ddfcdf9-mbp22" Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.918792 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa-ovsdbserver-nb\") pod \"dnsmasq-dns-674ddfcdf9-mbp22\" (UID: \"14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa\") " pod="openstack/dnsmasq-dns-674ddfcdf9-mbp22" Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.919571 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa-config\") pod \"dnsmasq-dns-674ddfcdf9-mbp22\" (UID: \"14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa\") " pod="openstack/dnsmasq-dns-674ddfcdf9-mbp22" Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.919933 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa-ovsdbserver-nb\") pod \"dnsmasq-dns-674ddfcdf9-mbp22\" (UID: \"14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa\") " pod="openstack/dnsmasq-dns-674ddfcdf9-mbp22" Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.920201 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa-dns-swift-storage-0\") pod \"dnsmasq-dns-674ddfcdf9-mbp22\" (UID: \"14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa\") " pod="openstack/dnsmasq-dns-674ddfcdf9-mbp22" Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.920633 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa-ovsdbserver-sb\") pod \"dnsmasq-dns-674ddfcdf9-mbp22\" (UID: \"14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa\") " pod="openstack/dnsmasq-dns-674ddfcdf9-mbp22" Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.920919 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa-dns-svc\") pod \"dnsmasq-dns-674ddfcdf9-mbp22\" (UID: \"14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa\") " pod="openstack/dnsmasq-dns-674ddfcdf9-mbp22" Feb 28 13:37:34 crc kubenswrapper[4897]: I0228 13:37:34.942052 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl9zd\" (UniqueName: \"kubernetes.io/projected/14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa-kube-api-access-pl9zd\") pod \"dnsmasq-dns-674ddfcdf9-mbp22\" (UID: \"14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa\") " pod="openstack/dnsmasq-dns-674ddfcdf9-mbp22" Feb 28 13:37:35 crc kubenswrapper[4897]: I0228 13:37:35.000106 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-674ddfcdf9-mbp22" Feb 28 13:37:35 crc kubenswrapper[4897]: I0228 13:37:35.452234 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-674ddfcdf9-mbp22"] Feb 28 13:37:35 crc kubenswrapper[4897]: W0228 13:37:35.455180 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14ac33d1_ebdc_4644_8fee_b81bbfb0dbaa.slice/crio-51de70554479392612ae46765acfc96f8e0eaa5c3fac6bac98296adb1bab1b0a WatchSource:0}: Error finding container 51de70554479392612ae46765acfc96f8e0eaa5c3fac6bac98296adb1bab1b0a: Status 404 returned error can't find the container with id 51de70554479392612ae46765acfc96f8e0eaa5c3fac6bac98296adb1bab1b0a Feb 28 13:37:35 crc kubenswrapper[4897]: E0228 13:37:35.459088 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:37:35 crc kubenswrapper[4897]: I0228 13:37:35.799267 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-aec9-account-create-update-d7fk5" Feb 28 13:37:35 crc kubenswrapper[4897]: I0228 13:37:35.918562 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-gf6hc" Feb 28 13:37:35 crc kubenswrapper[4897]: I0228 13:37:35.940441 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1f06e40-0ffb-4bdc-98ea-e6d44c5d8e42-operator-scripts\") pod \"e1f06e40-0ffb-4bdc-98ea-e6d44c5d8e42\" (UID: \"e1f06e40-0ffb-4bdc-98ea-e6d44c5d8e42\") " Feb 28 13:37:35 crc kubenswrapper[4897]: I0228 13:37:35.940658 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99jft\" (UniqueName: \"kubernetes.io/projected/e1f06e40-0ffb-4bdc-98ea-e6d44c5d8e42-kube-api-access-99jft\") pod \"e1f06e40-0ffb-4bdc-98ea-e6d44c5d8e42\" (UID: \"e1f06e40-0ffb-4bdc-98ea-e6d44c5d8e42\") " Feb 28 13:37:35 crc kubenswrapper[4897]: I0228 13:37:35.941652 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1f06e40-0ffb-4bdc-98ea-e6d44c5d8e42-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e1f06e40-0ffb-4bdc-98ea-e6d44c5d8e42" (UID: "e1f06e40-0ffb-4bdc-98ea-e6d44c5d8e42"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:37:35 crc kubenswrapper[4897]: I0228 13:37:35.947692 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1f06e40-0ffb-4bdc-98ea-e6d44c5d8e42-kube-api-access-99jft" (OuterVolumeSpecName: "kube-api-access-99jft") pod "e1f06e40-0ffb-4bdc-98ea-e6d44c5d8e42" (UID: "e1f06e40-0ffb-4bdc-98ea-e6d44c5d8e42"). InnerVolumeSpecName "kube-api-access-99jft". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:37:36 crc kubenswrapper[4897]: I0228 13:37:36.042616 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqkw9\" (UniqueName: \"kubernetes.io/projected/9d8e5b42-44ea-4242-b46b-77efc3fb0826-kube-api-access-qqkw9\") pod \"9d8e5b42-44ea-4242-b46b-77efc3fb0826\" (UID: \"9d8e5b42-44ea-4242-b46b-77efc3fb0826\") " Feb 28 13:37:36 crc kubenswrapper[4897]: I0228 13:37:36.042670 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d8e5b42-44ea-4242-b46b-77efc3fb0826-operator-scripts\") pod \"9d8e5b42-44ea-4242-b46b-77efc3fb0826\" (UID: \"9d8e5b42-44ea-4242-b46b-77efc3fb0826\") " Feb 28 13:37:36 crc kubenswrapper[4897]: I0228 13:37:36.043146 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1f06e40-0ffb-4bdc-98ea-e6d44c5d8e42-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:36 crc kubenswrapper[4897]: I0228 13:37:36.043173 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99jft\" (UniqueName: \"kubernetes.io/projected/e1f06e40-0ffb-4bdc-98ea-e6d44c5d8e42-kube-api-access-99jft\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:36 crc kubenswrapper[4897]: I0228 13:37:36.043438 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d8e5b42-44ea-4242-b46b-77efc3fb0826-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9d8e5b42-44ea-4242-b46b-77efc3fb0826" (UID: "9d8e5b42-44ea-4242-b46b-77efc3fb0826"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:37:36 crc kubenswrapper[4897]: I0228 13:37:36.048287 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d8e5b42-44ea-4242-b46b-77efc3fb0826-kube-api-access-qqkw9" (OuterVolumeSpecName: "kube-api-access-qqkw9") pod "9d8e5b42-44ea-4242-b46b-77efc3fb0826" (UID: "9d8e5b42-44ea-4242-b46b-77efc3fb0826"). InnerVolumeSpecName "kube-api-access-qqkw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:37:36 crc kubenswrapper[4897]: I0228 13:37:36.145289 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d8e5b42-44ea-4242-b46b-77efc3fb0826-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:36 crc kubenswrapper[4897]: I0228 13:37:36.145362 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqkw9\" (UniqueName: \"kubernetes.io/projected/9d8e5b42-44ea-4242-b46b-77efc3fb0826-kube-api-access-qqkw9\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:36 crc kubenswrapper[4897]: I0228 13:37:36.424404 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6","Type":"ContainerStarted","Data":"b6cdc38b6b85a1ccc08dddd754259c21e0a6f7f4b71c260e1be20477686e93e8"} Feb 28 13:37:36 crc kubenswrapper[4897]: I0228 13:37:36.426630 4897 generic.go:334] "Generic (PLEG): container finished" podID="14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa" containerID="a09f7ae0619f4efc6a64edde9c510bbce99210b68e1e366545060d64f1f2bbcb" exitCode=0 Feb 28 13:37:36 crc kubenswrapper[4897]: I0228 13:37:36.426701 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-674ddfcdf9-mbp22" event={"ID":"14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa","Type":"ContainerDied","Data":"a09f7ae0619f4efc6a64edde9c510bbce99210b68e1e366545060d64f1f2bbcb"} Feb 28 13:37:36 crc kubenswrapper[4897]: 
I0228 13:37:36.426728 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-674ddfcdf9-mbp22" event={"ID":"14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa","Type":"ContainerStarted","Data":"51de70554479392612ae46765acfc96f8e0eaa5c3fac6bac98296adb1bab1b0a"} Feb 28 13:37:36 crc kubenswrapper[4897]: I0228 13:37:36.432202 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-aec9-account-create-update-d7fk5" event={"ID":"e1f06e40-0ffb-4bdc-98ea-e6d44c5d8e42","Type":"ContainerDied","Data":"17eb2ce1718bc7efb6c5fd866ca79fe93d47b0918e577b8a6b9ce8820ce9c7e7"} Feb 28 13:37:36 crc kubenswrapper[4897]: I0228 13:37:36.432279 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17eb2ce1718bc7efb6c5fd866ca79fe93d47b0918e577b8a6b9ce8820ce9c7e7" Feb 28 13:37:36 crc kubenswrapper[4897]: I0228 13:37:36.432390 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-aec9-account-create-update-d7fk5" Feb 28 13:37:36 crc kubenswrapper[4897]: I0228 13:37:36.461958 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-gf6hc" Feb 28 13:37:36 crc kubenswrapper[4897]: I0228 13:37:36.511407 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-gf6hc" event={"ID":"9d8e5b42-44ea-4242-b46b-77efc3fb0826","Type":"ContainerDied","Data":"d7d57b6f799f4e3d8a28e8dcf43435f5c3d6184fb71b54dea0347943bfd55f2e"} Feb 28 13:37:36 crc kubenswrapper[4897]: I0228 13:37:36.511461 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7d57b6f799f4e3d8a28e8dcf43435f5c3d6184fb71b54dea0347943bfd55f2e" Feb 28 13:37:37 crc kubenswrapper[4897]: I0228 13:37:37.472817 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-674ddfcdf9-mbp22" event={"ID":"14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa","Type":"ContainerStarted","Data":"ada95bb3610ab69f074c87814444d49e93e837ed8e5fa9a2dfb572b31251deae"} Feb 28 13:37:37 crc kubenswrapper[4897]: I0228 13:37:37.473944 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-674ddfcdf9-mbp22" Feb 28 13:37:37 crc kubenswrapper[4897]: I0228 13:37:37.509971 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-674ddfcdf9-mbp22" podStartSLOduration=3.509955044 podStartE2EDuration="3.509955044s" podCreationTimestamp="2026-02-28 13:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:37:37.504859909 +0000 UTC m=+1271.747180556" watchObservedRunningTime="2026-02-28 13:37:37.509955044 +0000 UTC m=+1271.752275701" Feb 28 13:37:38 crc kubenswrapper[4897]: I0228 13:37:38.485765 4897 generic.go:334] "Generic (PLEG): container finished" podID="041df086-096c-4dd1-9e4e-d06a2051084c" containerID="3ddbbb30e70a2991fd638ecc8a7d5d1ef9c51891a0633f0e5eacc86d16ab1d74" exitCode=0 Feb 28 13:37:38 crc kubenswrapper[4897]: I0228 13:37:38.485818 4897 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8ddqn" event={"ID":"041df086-096c-4dd1-9e4e-d06a2051084c","Type":"ContainerDied","Data":"3ddbbb30e70a2991fd638ecc8a7d5d1ef9c51891a0633f0e5eacc86d16ab1d74"} Feb 28 13:37:41 crc kubenswrapper[4897]: E0228 13:37:41.106904 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538096-ws9qt" podUID="6e94c0b2-21a6-496c-8188-dfcaf0d66b2b" Feb 28 13:37:41 crc kubenswrapper[4897]: I0228 13:37:41.235922 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-8ddqn" Feb 28 13:37:41 crc kubenswrapper[4897]: I0228 13:37:41.384555 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/041df086-096c-4dd1-9e4e-d06a2051084c-combined-ca-bundle\") pod \"041df086-096c-4dd1-9e4e-d06a2051084c\" (UID: \"041df086-096c-4dd1-9e4e-d06a2051084c\") " Feb 28 13:37:41 crc kubenswrapper[4897]: I0228 13:37:41.384728 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55s57\" (UniqueName: \"kubernetes.io/projected/041df086-096c-4dd1-9e4e-d06a2051084c-kube-api-access-55s57\") pod \"041df086-096c-4dd1-9e4e-d06a2051084c\" (UID: \"041df086-096c-4dd1-9e4e-d06a2051084c\") " Feb 28 13:37:41 crc kubenswrapper[4897]: I0228 13:37:41.384755 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/041df086-096c-4dd1-9e4e-d06a2051084c-config-data\") pod \"041df086-096c-4dd1-9e4e-d06a2051084c\" (UID: \"041df086-096c-4dd1-9e4e-d06a2051084c\") " Feb 28 13:37:41 crc kubenswrapper[4897]: I0228 13:37:41.394695 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/041df086-096c-4dd1-9e4e-d06a2051084c-kube-api-access-55s57" (OuterVolumeSpecName: "kube-api-access-55s57") pod "041df086-096c-4dd1-9e4e-d06a2051084c" (UID: "041df086-096c-4dd1-9e4e-d06a2051084c"). InnerVolumeSpecName "kube-api-access-55s57". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:37:41 crc kubenswrapper[4897]: I0228 13:37:41.408882 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/041df086-096c-4dd1-9e4e-d06a2051084c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "041df086-096c-4dd1-9e4e-d06a2051084c" (UID: "041df086-096c-4dd1-9e4e-d06a2051084c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:37:41 crc kubenswrapper[4897]: I0228 13:37:41.458686 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/041df086-096c-4dd1-9e4e-d06a2051084c-config-data" (OuterVolumeSpecName: "config-data") pod "041df086-096c-4dd1-9e4e-d06a2051084c" (UID: "041df086-096c-4dd1-9e4e-d06a2051084c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:37:41 crc kubenswrapper[4897]: I0228 13:37:41.487139 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/041df086-096c-4dd1-9e4e-d06a2051084c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:41 crc kubenswrapper[4897]: I0228 13:37:41.487539 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55s57\" (UniqueName: \"kubernetes.io/projected/041df086-096c-4dd1-9e4e-d06a2051084c-kube-api-access-55s57\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:41 crc kubenswrapper[4897]: I0228 13:37:41.487591 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/041df086-096c-4dd1-9e4e-d06a2051084c-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:41 crc kubenswrapper[4897]: I0228 13:37:41.513406 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8ddqn" event={"ID":"041df086-096c-4dd1-9e4e-d06a2051084c","Type":"ContainerDied","Data":"da4b8c26fe256ef8ea43539f0aab43127d3ca0ea77b835901fa4dad0329a54f3"} Feb 28 13:37:41 crc kubenswrapper[4897]: I0228 13:37:41.513467 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da4b8c26fe256ef8ea43539f0aab43127d3ca0ea77b835901fa4dad0329a54f3" Feb 28 13:37:41 crc kubenswrapper[4897]: I0228 13:37:41.513697 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-8ddqn" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.552192 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-nlm9z"] Feb 28 13:37:42 crc kubenswrapper[4897]: E0228 13:37:42.553140 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d8e5b42-44ea-4242-b46b-77efc3fb0826" containerName="mariadb-database-create" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.553219 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d8e5b42-44ea-4242-b46b-77efc3fb0826" containerName="mariadb-database-create" Feb 28 13:37:42 crc kubenswrapper[4897]: E0228 13:37:42.553296 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1f06e40-0ffb-4bdc-98ea-e6d44c5d8e42" containerName="mariadb-account-create-update" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.553375 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1f06e40-0ffb-4bdc-98ea-e6d44c5d8e42" containerName="mariadb-account-create-update" Feb 28 13:37:42 crc kubenswrapper[4897]: E0228 13:37:42.553437 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="041df086-096c-4dd1-9e4e-d06a2051084c" containerName="keystone-db-sync" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.553488 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="041df086-096c-4dd1-9e4e-d06a2051084c" containerName="keystone-db-sync" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.553733 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1f06e40-0ffb-4bdc-98ea-e6d44c5d8e42" containerName="mariadb-account-create-update" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.553802 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="041df086-096c-4dd1-9e4e-d06a2051084c" containerName="keystone-db-sync" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.553879 4897 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="9d8e5b42-44ea-4242-b46b-77efc3fb0826" containerName="mariadb-database-create" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.554556 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nlm9z" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.559232 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.559395 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.559440 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.559644 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-qxw9x" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.559969 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.560124 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nlm9z"] Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.577504 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-674ddfcdf9-mbp22"] Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.577722 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-674ddfcdf9-mbp22" podUID="14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa" containerName="dnsmasq-dns" containerID="cri-o://ada95bb3610ab69f074c87814444d49e93e837ed8e5fa9a2dfb572b31251deae" gracePeriod=10 Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.579722 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-674ddfcdf9-mbp22" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 
13:37:42.645184 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b379adc-1a39-4972-80a3-74161c42728a-combined-ca-bundle\") pod \"keystone-bootstrap-nlm9z\" (UID: \"8b379adc-1a39-4972-80a3-74161c42728a\") " pod="openstack/keystone-bootstrap-nlm9z" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.645261 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nckrl\" (UniqueName: \"kubernetes.io/projected/8b379adc-1a39-4972-80a3-74161c42728a-kube-api-access-nckrl\") pod \"keystone-bootstrap-nlm9z\" (UID: \"8b379adc-1a39-4972-80a3-74161c42728a\") " pod="openstack/keystone-bootstrap-nlm9z" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.645325 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8b379adc-1a39-4972-80a3-74161c42728a-credential-keys\") pod \"keystone-bootstrap-nlm9z\" (UID: \"8b379adc-1a39-4972-80a3-74161c42728a\") " pod="openstack/keystone-bootstrap-nlm9z" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.645350 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8b379adc-1a39-4972-80a3-74161c42728a-fernet-keys\") pod \"keystone-bootstrap-nlm9z\" (UID: \"8b379adc-1a39-4972-80a3-74161c42728a\") " pod="openstack/keystone-bootstrap-nlm9z" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.645421 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b379adc-1a39-4972-80a3-74161c42728a-config-data\") pod \"keystone-bootstrap-nlm9z\" (UID: \"8b379adc-1a39-4972-80a3-74161c42728a\") " pod="openstack/keystone-bootstrap-nlm9z" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 
13:37:42.645456 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b379adc-1a39-4972-80a3-74161c42728a-scripts\") pod \"keystone-bootstrap-nlm9z\" (UID: \"8b379adc-1a39-4972-80a3-74161c42728a\") " pod="openstack/keystone-bootstrap-nlm9z" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.651851 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bb4d5cf99-8rqmm"] Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.653884 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bb4d5cf99-8rqmm" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.680390 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bb4d5cf99-8rqmm"] Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.747073 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b379adc-1a39-4972-80a3-74161c42728a-scripts\") pod \"keystone-bootstrap-nlm9z\" (UID: \"8b379adc-1a39-4972-80a3-74161c42728a\") " pod="openstack/keystone-bootstrap-nlm9z" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.747172 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b379adc-1a39-4972-80a3-74161c42728a-combined-ca-bundle\") pod \"keystone-bootstrap-nlm9z\" (UID: \"8b379adc-1a39-4972-80a3-74161c42728a\") " pod="openstack/keystone-bootstrap-nlm9z" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.747223 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nckrl\" (UniqueName: \"kubernetes.io/projected/8b379adc-1a39-4972-80a3-74161c42728a-kube-api-access-nckrl\") pod \"keystone-bootstrap-nlm9z\" (UID: \"8b379adc-1a39-4972-80a3-74161c42728a\") " pod="openstack/keystone-bootstrap-nlm9z" Feb 28 
13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.747260 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8b379adc-1a39-4972-80a3-74161c42728a-credential-keys\") pod \"keystone-bootstrap-nlm9z\" (UID: \"8b379adc-1a39-4972-80a3-74161c42728a\") " pod="openstack/keystone-bootstrap-nlm9z" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.747283 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8b379adc-1a39-4972-80a3-74161c42728a-fernet-keys\") pod \"keystone-bootstrap-nlm9z\" (UID: \"8b379adc-1a39-4972-80a3-74161c42728a\") " pod="openstack/keystone-bootstrap-nlm9z" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.747338 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b379adc-1a39-4972-80a3-74161c42728a-config-data\") pod \"keystone-bootstrap-nlm9z\" (UID: \"8b379adc-1a39-4972-80a3-74161c42728a\") " pod="openstack/keystone-bootstrap-nlm9z" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.753708 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b379adc-1a39-4972-80a3-74161c42728a-combined-ca-bundle\") pod \"keystone-bootstrap-nlm9z\" (UID: \"8b379adc-1a39-4972-80a3-74161c42728a\") " pod="openstack/keystone-bootstrap-nlm9z" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.770533 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-68bc5769f5-kt85c"] Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.771383 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b379adc-1a39-4972-80a3-74161c42728a-config-data\") pod \"keystone-bootstrap-nlm9z\" (UID: \"8b379adc-1a39-4972-80a3-74161c42728a\") " 
pod="openstack/keystone-bootstrap-nlm9z" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.772354 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-68bc5769f5-kt85c" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.777818 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8b379adc-1a39-4972-80a3-74161c42728a-credential-keys\") pod \"keystone-bootstrap-nlm9z\" (UID: \"8b379adc-1a39-4972-80a3-74161c42728a\") " pod="openstack/keystone-bootstrap-nlm9z" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.779449 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b379adc-1a39-4972-80a3-74161c42728a-scripts\") pod \"keystone-bootstrap-nlm9z\" (UID: \"8b379adc-1a39-4972-80a3-74161c42728a\") " pod="openstack/keystone-bootstrap-nlm9z" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.779724 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.779899 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.780077 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.780241 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-pxngv" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.788625 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8b379adc-1a39-4972-80a3-74161c42728a-fernet-keys\") pod \"keystone-bootstrap-nlm9z\" (UID: \"8b379adc-1a39-4972-80a3-74161c42728a\") " pod="openstack/keystone-bootstrap-nlm9z" Feb 28 13:37:42 crc 
kubenswrapper[4897]: I0228 13:37:42.799571 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-68bc5769f5-kt85c"] Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.811914 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nckrl\" (UniqueName: \"kubernetes.io/projected/8b379adc-1a39-4972-80a3-74161c42728a-kube-api-access-nckrl\") pod \"keystone-bootstrap-nlm9z\" (UID: \"8b379adc-1a39-4972-80a3-74161c42728a\") " pod="openstack/keystone-bootstrap-nlm9z" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.849771 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0878e878-db5e-472b-90b9-9d0e8ce035d3-ovsdbserver-nb\") pod \"dnsmasq-dns-bb4d5cf99-8rqmm\" (UID: \"0878e878-db5e-472b-90b9-9d0e8ce035d3\") " pod="openstack/dnsmasq-dns-bb4d5cf99-8rqmm" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.849840 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0878e878-db5e-472b-90b9-9d0e8ce035d3-config\") pod \"dnsmasq-dns-bb4d5cf99-8rqmm\" (UID: \"0878e878-db5e-472b-90b9-9d0e8ce035d3\") " pod="openstack/dnsmasq-dns-bb4d5cf99-8rqmm" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.849896 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0878e878-db5e-472b-90b9-9d0e8ce035d3-ovsdbserver-sb\") pod \"dnsmasq-dns-bb4d5cf99-8rqmm\" (UID: \"0878e878-db5e-472b-90b9-9d0e8ce035d3\") " pod="openstack/dnsmasq-dns-bb4d5cf99-8rqmm" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.849927 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0878e878-db5e-472b-90b9-9d0e8ce035d3-dns-svc\") pod 
\"dnsmasq-dns-bb4d5cf99-8rqmm\" (UID: \"0878e878-db5e-472b-90b9-9d0e8ce035d3\") " pod="openstack/dnsmasq-dns-bb4d5cf99-8rqmm" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.850022 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zl9w\" (UniqueName: \"kubernetes.io/projected/0878e878-db5e-472b-90b9-9d0e8ce035d3-kube-api-access-4zl9w\") pod \"dnsmasq-dns-bb4d5cf99-8rqmm\" (UID: \"0878e878-db5e-472b-90b9-9d0e8ce035d3\") " pod="openstack/dnsmasq-dns-bb4d5cf99-8rqmm" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.850076 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0878e878-db5e-472b-90b9-9d0e8ce035d3-dns-swift-storage-0\") pod \"dnsmasq-dns-bb4d5cf99-8rqmm\" (UID: \"0878e878-db5e-472b-90b9-9d0e8ce035d3\") " pod="openstack/dnsmasq-dns-bb4d5cf99-8rqmm" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.867006 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-qc9bp"] Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.868537 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-qc9bp" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.871414 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-m6pcq" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.871807 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.872767 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.887958 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-h59fj"] Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.889384 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-h59fj" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.901959 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-zq2v9" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.902208 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.902385 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.915818 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-qc9bp"] Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.922745 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-nlm9z" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.971928 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/99d7bd5a-52d0-4a8f-bd1d-542a957d815f-config\") pod \"neutron-db-sync-qc9bp\" (UID: \"99d7bd5a-52d0-4a8f-bd1d-542a957d815f\") " pod="openstack/neutron-db-sync-qc9bp" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.972180 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0878e878-db5e-472b-90b9-9d0e8ce035d3-ovsdbserver-sb\") pod \"dnsmasq-dns-bb4d5cf99-8rqmm\" (UID: \"0878e878-db5e-472b-90b9-9d0e8ce035d3\") " pod="openstack/dnsmasq-dns-bb4d5cf99-8rqmm" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.972321 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/015cae83-dbd9-4d4b-84f6-e90aa405acf2-logs\") pod \"horizon-68bc5769f5-kt85c\" (UID: \"015cae83-dbd9-4d4b-84f6-e90aa405acf2\") " pod="openstack/horizon-68bc5769f5-kt85c" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.972356 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0878e878-db5e-472b-90b9-9d0e8ce035d3-dns-svc\") pod \"dnsmasq-dns-bb4d5cf99-8rqmm\" (UID: \"0878e878-db5e-472b-90b9-9d0e8ce035d3\") " pod="openstack/dnsmasq-dns-bb4d5cf99-8rqmm" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.972420 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99d7bd5a-52d0-4a8f-bd1d-542a957d815f-combined-ca-bundle\") pod \"neutron-db-sync-qc9bp\" (UID: \"99d7bd5a-52d0-4a8f-bd1d-542a957d815f\") " pod="openstack/neutron-db-sync-qc9bp" Feb 28 13:37:42 crc 
kubenswrapper[4897]: I0228 13:37:42.972774 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnscv\" (UniqueName: \"kubernetes.io/projected/99d7bd5a-52d0-4a8f-bd1d-542a957d815f-kube-api-access-cnscv\") pod \"neutron-db-sync-qc9bp\" (UID: \"99d7bd5a-52d0-4a8f-bd1d-542a957d815f\") " pod="openstack/neutron-db-sync-qc9bp" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.972810 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/015cae83-dbd9-4d4b-84f6-e90aa405acf2-config-data\") pod \"horizon-68bc5769f5-kt85c\" (UID: \"015cae83-dbd9-4d4b-84f6-e90aa405acf2\") " pod="openstack/horizon-68bc5769f5-kt85c" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.973028 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/015cae83-dbd9-4d4b-84f6-e90aa405acf2-horizon-secret-key\") pod \"horizon-68bc5769f5-kt85c\" (UID: \"015cae83-dbd9-4d4b-84f6-e90aa405acf2\") " pod="openstack/horizon-68bc5769f5-kt85c" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.973165 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zl9w\" (UniqueName: \"kubernetes.io/projected/0878e878-db5e-472b-90b9-9d0e8ce035d3-kube-api-access-4zl9w\") pod \"dnsmasq-dns-bb4d5cf99-8rqmm\" (UID: \"0878e878-db5e-472b-90b9-9d0e8ce035d3\") " pod="openstack/dnsmasq-dns-bb4d5cf99-8rqmm" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.973209 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/015cae83-dbd9-4d4b-84f6-e90aa405acf2-scripts\") pod \"horizon-68bc5769f5-kt85c\" (UID: \"015cae83-dbd9-4d4b-84f6-e90aa405acf2\") " pod="openstack/horizon-68bc5769f5-kt85c" Feb 28 13:37:42 crc 
kubenswrapper[4897]: I0228 13:37:42.973403 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0878e878-db5e-472b-90b9-9d0e8ce035d3-dns-swift-storage-0\") pod \"dnsmasq-dns-bb4d5cf99-8rqmm\" (UID: \"0878e878-db5e-472b-90b9-9d0e8ce035d3\") " pod="openstack/dnsmasq-dns-bb4d5cf99-8rqmm" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.973543 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k579\" (UniqueName: \"kubernetes.io/projected/015cae83-dbd9-4d4b-84f6-e90aa405acf2-kube-api-access-9k579\") pod \"horizon-68bc5769f5-kt85c\" (UID: \"015cae83-dbd9-4d4b-84f6-e90aa405acf2\") " pod="openstack/horizon-68bc5769f5-kt85c" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.973767 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0878e878-db5e-472b-90b9-9d0e8ce035d3-ovsdbserver-nb\") pod \"dnsmasq-dns-bb4d5cf99-8rqmm\" (UID: \"0878e878-db5e-472b-90b9-9d0e8ce035d3\") " pod="openstack/dnsmasq-dns-bb4d5cf99-8rqmm" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.973800 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0878e878-db5e-472b-90b9-9d0e8ce035d3-config\") pod \"dnsmasq-dns-bb4d5cf99-8rqmm\" (UID: \"0878e878-db5e-472b-90b9-9d0e8ce035d3\") " pod="openstack/dnsmasq-dns-bb4d5cf99-8rqmm" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.974345 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0878e878-db5e-472b-90b9-9d0e8ce035d3-dns-svc\") pod \"dnsmasq-dns-bb4d5cf99-8rqmm\" (UID: \"0878e878-db5e-472b-90b9-9d0e8ce035d3\") " pod="openstack/dnsmasq-dns-bb4d5cf99-8rqmm" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.975128 4897 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0878e878-db5e-472b-90b9-9d0e8ce035d3-ovsdbserver-sb\") pod \"dnsmasq-dns-bb4d5cf99-8rqmm\" (UID: \"0878e878-db5e-472b-90b9-9d0e8ce035d3\") " pod="openstack/dnsmasq-dns-bb4d5cf99-8rqmm" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.975689 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0878e878-db5e-472b-90b9-9d0e8ce035d3-dns-swift-storage-0\") pod \"dnsmasq-dns-bb4d5cf99-8rqmm\" (UID: \"0878e878-db5e-472b-90b9-9d0e8ce035d3\") " pod="openstack/dnsmasq-dns-bb4d5cf99-8rqmm" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.975790 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0878e878-db5e-472b-90b9-9d0e8ce035d3-config\") pod \"dnsmasq-dns-bb4d5cf99-8rqmm\" (UID: \"0878e878-db5e-472b-90b9-9d0e8ce035d3\") " pod="openstack/dnsmasq-dns-bb4d5cf99-8rqmm" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.975871 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0878e878-db5e-472b-90b9-9d0e8ce035d3-ovsdbserver-nb\") pod \"dnsmasq-dns-bb4d5cf99-8rqmm\" (UID: \"0878e878-db5e-472b-90b9-9d0e8ce035d3\") " pod="openstack/dnsmasq-dns-bb4d5cf99-8rqmm" Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.978657 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 28 13:37:42 crc kubenswrapper[4897]: I0228 13:37:42.993785 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.030193 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.038368 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.082001 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5752j\" (UniqueName: \"kubernetes.io/projected/bd9edcf1-516a-46a6-a77b-5061505a58d7-kube-api-access-5752j\") pod \"cinder-db-sync-h59fj\" (UID: \"bd9edcf1-516a-46a6-a77b-5061505a58d7\") " pod="openstack/cinder-db-sync-h59fj" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.082039 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bd9edcf1-516a-46a6-a77b-5061505a58d7-etc-machine-id\") pod \"cinder-db-sync-h59fj\" (UID: \"bd9edcf1-516a-46a6-a77b-5061505a58d7\") " pod="openstack/cinder-db-sync-h59fj" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.082061 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7aff986-c99b-43a7-afc8-b9194ce17385-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d7aff986-c99b-43a7-afc8-b9194ce17385\") " pod="openstack/ceilometer-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.082077 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7aff986-c99b-43a7-afc8-b9194ce17385-log-httpd\") pod \"ceilometer-0\" (UID: \"d7aff986-c99b-43a7-afc8-b9194ce17385\") " pod="openstack/ceilometer-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.082103 4897 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/99d7bd5a-52d0-4a8f-bd1d-542a957d815f-config\") pod \"neutron-db-sync-qc9bp\" (UID: \"99d7bd5a-52d0-4a8f-bd1d-542a957d815f\") " pod="openstack/neutron-db-sync-qc9bp" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.082126 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/015cae83-dbd9-4d4b-84f6-e90aa405acf2-logs\") pod \"horizon-68bc5769f5-kt85c\" (UID: \"015cae83-dbd9-4d4b-84f6-e90aa405acf2\") " pod="openstack/horizon-68bc5769f5-kt85c" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.082155 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bd9edcf1-516a-46a6-a77b-5061505a58d7-db-sync-config-data\") pod \"cinder-db-sync-h59fj\" (UID: \"bd9edcf1-516a-46a6-a77b-5061505a58d7\") " pod="openstack/cinder-db-sync-h59fj" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.082177 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkf44\" (UniqueName: \"kubernetes.io/projected/d7aff986-c99b-43a7-afc8-b9194ce17385-kube-api-access-dkf44\") pod \"ceilometer-0\" (UID: \"d7aff986-c99b-43a7-afc8-b9194ce17385\") " pod="openstack/ceilometer-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.082196 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd9edcf1-516a-46a6-a77b-5061505a58d7-combined-ca-bundle\") pod \"cinder-db-sync-h59fj\" (UID: \"bd9edcf1-516a-46a6-a77b-5061505a58d7\") " pod="openstack/cinder-db-sync-h59fj" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.082210 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/bd9edcf1-516a-46a6-a77b-5061505a58d7-scripts\") pod \"cinder-db-sync-h59fj\" (UID: \"bd9edcf1-516a-46a6-a77b-5061505a58d7\") " pod="openstack/cinder-db-sync-h59fj" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.082228 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99d7bd5a-52d0-4a8f-bd1d-542a957d815f-combined-ca-bundle\") pod \"neutron-db-sync-qc9bp\" (UID: \"99d7bd5a-52d0-4a8f-bd1d-542a957d815f\") " pod="openstack/neutron-db-sync-qc9bp" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.082247 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnscv\" (UniqueName: \"kubernetes.io/projected/99d7bd5a-52d0-4a8f-bd1d-542a957d815f-kube-api-access-cnscv\") pod \"neutron-db-sync-qc9bp\" (UID: \"99d7bd5a-52d0-4a8f-bd1d-542a957d815f\") " pod="openstack/neutron-db-sync-qc9bp" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.082265 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/015cae83-dbd9-4d4b-84f6-e90aa405acf2-config-data\") pod \"horizon-68bc5769f5-kt85c\" (UID: \"015cae83-dbd9-4d4b-84f6-e90aa405acf2\") " pod="openstack/horizon-68bc5769f5-kt85c" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.082287 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/015cae83-dbd9-4d4b-84f6-e90aa405acf2-horizon-secret-key\") pod \"horizon-68bc5769f5-kt85c\" (UID: \"015cae83-dbd9-4d4b-84f6-e90aa405acf2\") " pod="openstack/horizon-68bc5769f5-kt85c" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.082314 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d7aff986-c99b-43a7-afc8-b9194ce17385-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d7aff986-c99b-43a7-afc8-b9194ce17385\") " pod="openstack/ceilometer-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.082350 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/015cae83-dbd9-4d4b-84f6-e90aa405acf2-scripts\") pod \"horizon-68bc5769f5-kt85c\" (UID: \"015cae83-dbd9-4d4b-84f6-e90aa405acf2\") " pod="openstack/horizon-68bc5769f5-kt85c" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.082370 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7aff986-c99b-43a7-afc8-b9194ce17385-scripts\") pod \"ceilometer-0\" (UID: \"d7aff986-c99b-43a7-afc8-b9194ce17385\") " pod="openstack/ceilometer-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.087124 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/015cae83-dbd9-4d4b-84f6-e90aa405acf2-config-data\") pod \"horizon-68bc5769f5-kt85c\" (UID: \"015cae83-dbd9-4d4b-84f6-e90aa405acf2\") " pod="openstack/horizon-68bc5769f5-kt85c" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.087372 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/015cae83-dbd9-4d4b-84f6-e90aa405acf2-logs\") pod \"horizon-68bc5769f5-kt85c\" (UID: \"015cae83-dbd9-4d4b-84f6-e90aa405acf2\") " pod="openstack/horizon-68bc5769f5-kt85c" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.087515 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7aff986-c99b-43a7-afc8-b9194ce17385-config-data\") pod \"ceilometer-0\" (UID: \"d7aff986-c99b-43a7-afc8-b9194ce17385\") " pod="openstack/ceilometer-0" Feb 
28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.087562 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9k579\" (UniqueName: \"kubernetes.io/projected/015cae83-dbd9-4d4b-84f6-e90aa405acf2-kube-api-access-9k579\") pod \"horizon-68bc5769f5-kt85c\" (UID: \"015cae83-dbd9-4d4b-84f6-e90aa405acf2\") " pod="openstack/horizon-68bc5769f5-kt85c" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.087611 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd9edcf1-516a-46a6-a77b-5061505a58d7-config-data\") pod \"cinder-db-sync-h59fj\" (UID: \"bd9edcf1-516a-46a6-a77b-5061505a58d7\") " pod="openstack/cinder-db-sync-h59fj" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.087633 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7aff986-c99b-43a7-afc8-b9194ce17385-run-httpd\") pod \"ceilometer-0\" (UID: \"d7aff986-c99b-43a7-afc8-b9194ce17385\") " pod="openstack/ceilometer-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.087772 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/015cae83-dbd9-4d4b-84f6-e90aa405acf2-scripts\") pod \"horizon-68bc5769f5-kt85c\" (UID: \"015cae83-dbd9-4d4b-84f6-e90aa405acf2\") " pod="openstack/horizon-68bc5769f5-kt85c" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.090566 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/015cae83-dbd9-4d4b-84f6-e90aa405acf2-horizon-secret-key\") pod \"horizon-68bc5769f5-kt85c\" (UID: \"015cae83-dbd9-4d4b-84f6-e90aa405acf2\") " pod="openstack/horizon-68bc5769f5-kt85c" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.092024 4897 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-4zl9w\" (UniqueName: \"kubernetes.io/projected/0878e878-db5e-472b-90b9-9d0e8ce035d3-kube-api-access-4zl9w\") pod \"dnsmasq-dns-bb4d5cf99-8rqmm\" (UID: \"0878e878-db5e-472b-90b9-9d0e8ce035d3\") " pod="openstack/dnsmasq-dns-bb4d5cf99-8rqmm" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.099755 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-h59fj"] Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.110386 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99d7bd5a-52d0-4a8f-bd1d-542a957d815f-combined-ca-bundle\") pod \"neutron-db-sync-qc9bp\" (UID: \"99d7bd5a-52d0-4a8f-bd1d-542a957d815f\") " pod="openstack/neutron-db-sync-qc9bp" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.117068 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/99d7bd5a-52d0-4a8f-bd1d-542a957d815f-config\") pod \"neutron-db-sync-qc9bp\" (UID: \"99d7bd5a-52d0-4a8f-bd1d-542a957d815f\") " pod="openstack/neutron-db-sync-qc9bp" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.138109 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k579\" (UniqueName: \"kubernetes.io/projected/015cae83-dbd9-4d4b-84f6-e90aa405acf2-kube-api-access-9k579\") pod \"horizon-68bc5769f5-kt85c\" (UID: \"015cae83-dbd9-4d4b-84f6-e90aa405acf2\") " pod="openstack/horizon-68bc5769f5-kt85c" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.139751 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnscv\" (UniqueName: \"kubernetes.io/projected/99d7bd5a-52d0-4a8f-bd1d-542a957d815f-kube-api-access-cnscv\") pod \"neutron-db-sync-qc9bp\" (UID: \"99d7bd5a-52d0-4a8f-bd1d-542a957d815f\") " pod="openstack/neutron-db-sync-qc9bp" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.149942 4897 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.192455 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bd9edcf1-516a-46a6-a77b-5061505a58d7-db-sync-config-data\") pod \"cinder-db-sync-h59fj\" (UID: \"bd9edcf1-516a-46a6-a77b-5061505a58d7\") " pod="openstack/cinder-db-sync-h59fj" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.192520 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkf44\" (UniqueName: \"kubernetes.io/projected/d7aff986-c99b-43a7-afc8-b9194ce17385-kube-api-access-dkf44\") pod \"ceilometer-0\" (UID: \"d7aff986-c99b-43a7-afc8-b9194ce17385\") " pod="openstack/ceilometer-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.192551 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd9edcf1-516a-46a6-a77b-5061505a58d7-combined-ca-bundle\") pod \"cinder-db-sync-h59fj\" (UID: \"bd9edcf1-516a-46a6-a77b-5061505a58d7\") " pod="openstack/cinder-db-sync-h59fj" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.192566 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd9edcf1-516a-46a6-a77b-5061505a58d7-scripts\") pod \"cinder-db-sync-h59fj\" (UID: \"bd9edcf1-516a-46a6-a77b-5061505a58d7\") " pod="openstack/cinder-db-sync-h59fj" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.192598 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7aff986-c99b-43a7-afc8-b9194ce17385-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d7aff986-c99b-43a7-afc8-b9194ce17385\") " pod="openstack/ceilometer-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.192636 4897 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7aff986-c99b-43a7-afc8-b9194ce17385-scripts\") pod \"ceilometer-0\" (UID: \"d7aff986-c99b-43a7-afc8-b9194ce17385\") " pod="openstack/ceilometer-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.192664 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7aff986-c99b-43a7-afc8-b9194ce17385-config-data\") pod \"ceilometer-0\" (UID: \"d7aff986-c99b-43a7-afc8-b9194ce17385\") " pod="openstack/ceilometer-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.192689 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd9edcf1-516a-46a6-a77b-5061505a58d7-config-data\") pod \"cinder-db-sync-h59fj\" (UID: \"bd9edcf1-516a-46a6-a77b-5061505a58d7\") " pod="openstack/cinder-db-sync-h59fj" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.192704 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7aff986-c99b-43a7-afc8-b9194ce17385-run-httpd\") pod \"ceilometer-0\" (UID: \"d7aff986-c99b-43a7-afc8-b9194ce17385\") " pod="openstack/ceilometer-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.192730 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5752j\" (UniqueName: \"kubernetes.io/projected/bd9edcf1-516a-46a6-a77b-5061505a58d7-kube-api-access-5752j\") pod \"cinder-db-sync-h59fj\" (UID: \"bd9edcf1-516a-46a6-a77b-5061505a58d7\") " pod="openstack/cinder-db-sync-h59fj" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.192746 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bd9edcf1-516a-46a6-a77b-5061505a58d7-etc-machine-id\") pod 
\"cinder-db-sync-h59fj\" (UID: \"bd9edcf1-516a-46a6-a77b-5061505a58d7\") " pod="openstack/cinder-db-sync-h59fj" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.192766 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7aff986-c99b-43a7-afc8-b9194ce17385-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d7aff986-c99b-43a7-afc8-b9194ce17385\") " pod="openstack/ceilometer-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.192781 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7aff986-c99b-43a7-afc8-b9194ce17385-log-httpd\") pod \"ceilometer-0\" (UID: \"d7aff986-c99b-43a7-afc8-b9194ce17385\") " pod="openstack/ceilometer-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.193211 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7aff986-c99b-43a7-afc8-b9194ce17385-log-httpd\") pod \"ceilometer-0\" (UID: \"d7aff986-c99b-43a7-afc8-b9194ce17385\") " pod="openstack/ceilometer-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.195217 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-68bc5769f5-kt85c" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.201338 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bd9edcf1-516a-46a6-a77b-5061505a58d7-etc-machine-id\") pod \"cinder-db-sync-h59fj\" (UID: \"bd9edcf1-516a-46a6-a77b-5061505a58d7\") " pod="openstack/cinder-db-sync-h59fj" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.201745 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7aff986-c99b-43a7-afc8-b9194ce17385-run-httpd\") pod \"ceilometer-0\" (UID: \"d7aff986-c99b-43a7-afc8-b9194ce17385\") " pod="openstack/ceilometer-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.212648 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qc9bp" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.235197 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7aff986-c99b-43a7-afc8-b9194ce17385-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d7aff986-c99b-43a7-afc8-b9194ce17385\") " pod="openstack/ceilometer-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.237238 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd9edcf1-516a-46a6-a77b-5061505a58d7-scripts\") pod \"cinder-db-sync-h59fj\" (UID: \"bd9edcf1-516a-46a6-a77b-5061505a58d7\") " pod="openstack/cinder-db-sync-h59fj" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.237604 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7aff986-c99b-43a7-afc8-b9194ce17385-config-data\") pod \"ceilometer-0\" (UID: \"d7aff986-c99b-43a7-afc8-b9194ce17385\") " pod="openstack/ceilometer-0" Feb 28 
13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.242806 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd9edcf1-516a-46a6-a77b-5061505a58d7-config-data\") pod \"cinder-db-sync-h59fj\" (UID: \"bd9edcf1-516a-46a6-a77b-5061505a58d7\") " pod="openstack/cinder-db-sync-h59fj" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.242812 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5752j\" (UniqueName: \"kubernetes.io/projected/bd9edcf1-516a-46a6-a77b-5061505a58d7-kube-api-access-5752j\") pod \"cinder-db-sync-h59fj\" (UID: \"bd9edcf1-516a-46a6-a77b-5061505a58d7\") " pod="openstack/cinder-db-sync-h59fj" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.243360 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7aff986-c99b-43a7-afc8-b9194ce17385-scripts\") pod \"ceilometer-0\" (UID: \"d7aff986-c99b-43a7-afc8-b9194ce17385\") " pod="openstack/ceilometer-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.248817 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bd9edcf1-516a-46a6-a77b-5061505a58d7-db-sync-config-data\") pod \"cinder-db-sync-h59fj\" (UID: \"bd9edcf1-516a-46a6-a77b-5061505a58d7\") " pod="openstack/cinder-db-sync-h59fj" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.252822 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd9edcf1-516a-46a6-a77b-5061505a58d7-combined-ca-bundle\") pod \"cinder-db-sync-h59fj\" (UID: \"bd9edcf1-516a-46a6-a77b-5061505a58d7\") " pod="openstack/cinder-db-sync-h59fj" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.256205 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/d7aff986-c99b-43a7-afc8-b9194ce17385-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d7aff986-c99b-43a7-afc8-b9194ce17385\") " pod="openstack/ceilometer-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.257713 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-h59fj" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.265284 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-55d64677cc-lw8j7"] Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.266974 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-55d64677cc-lw8j7" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.291792 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bb4d5cf99-8rqmm" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.292561 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkf44\" (UniqueName: \"kubernetes.io/projected/d7aff986-c99b-43a7-afc8-b9194ce17385-kube-api-access-dkf44\") pod \"ceilometer-0\" (UID: \"d7aff986-c99b-43a7-afc8-b9194ce17385\") " pod="openstack/ceilometer-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.292615 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-7vjm5"] Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.298665 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-7vjm5" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.299846 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5150cd00-34c3-40d8-bacd-0c9858fbab6b-logs\") pod \"horizon-55d64677cc-lw8j7\" (UID: \"5150cd00-34c3-40d8-bacd-0c9858fbab6b\") " pod="openstack/horizon-55d64677cc-lw8j7" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.299890 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5150cd00-34c3-40d8-bacd-0c9858fbab6b-config-data\") pod \"horizon-55d64677cc-lw8j7\" (UID: \"5150cd00-34c3-40d8-bacd-0c9858fbab6b\") " pod="openstack/horizon-55d64677cc-lw8j7" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.299984 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5150cd00-34c3-40d8-bacd-0c9858fbab6b-horizon-secret-key\") pod \"horizon-55d64677cc-lw8j7\" (UID: \"5150cd00-34c3-40d8-bacd-0c9858fbab6b\") " pod="openstack/horizon-55d64677cc-lw8j7" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.300057 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sclf\" (UniqueName: \"kubernetes.io/projected/5150cd00-34c3-40d8-bacd-0c9858fbab6b-kube-api-access-7sclf\") pod \"horizon-55d64677cc-lw8j7\" (UID: \"5150cd00-34c3-40d8-bacd-0c9858fbab6b\") " pod="openstack/horizon-55d64677cc-lw8j7" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.300110 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5150cd00-34c3-40d8-bacd-0c9858fbab6b-scripts\") pod \"horizon-55d64677cc-lw8j7\" (UID: 
\"5150cd00-34c3-40d8-bacd-0c9858fbab6b\") " pod="openstack/horizon-55d64677cc-lw8j7" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.300961 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.302576 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.302790 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-zqz4n" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.338412 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-55d64677cc-lw8j7"] Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.359928 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.369865 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-7vjm5"] Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.393141 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bb4d5cf99-8rqmm"] Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.402137 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fc315f1-a65d-4ba7-aa89-69ffe04b53a6-config-data\") pod \"placement-db-sync-7vjm5\" (UID: \"5fc315f1-a65d-4ba7-aa89-69ffe04b53a6\") " pod="openstack/placement-db-sync-7vjm5" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.402186 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5150cd00-34c3-40d8-bacd-0c9858fbab6b-scripts\") pod \"horizon-55d64677cc-lw8j7\" (UID: \"5150cd00-34c3-40d8-bacd-0c9858fbab6b\") " pod="openstack/horizon-55d64677cc-lw8j7" 
Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.402239 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl644\" (UniqueName: \"kubernetes.io/projected/5fc315f1-a65d-4ba7-aa89-69ffe04b53a6-kube-api-access-wl644\") pod \"placement-db-sync-7vjm5\" (UID: \"5fc315f1-a65d-4ba7-aa89-69ffe04b53a6\") " pod="openstack/placement-db-sync-7vjm5" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.402286 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5150cd00-34c3-40d8-bacd-0c9858fbab6b-logs\") pod \"horizon-55d64677cc-lw8j7\" (UID: \"5150cd00-34c3-40d8-bacd-0c9858fbab6b\") " pod="openstack/horizon-55d64677cc-lw8j7" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.402420 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5150cd00-34c3-40d8-bacd-0c9858fbab6b-config-data\") pod \"horizon-55d64677cc-lw8j7\" (UID: \"5150cd00-34c3-40d8-bacd-0c9858fbab6b\") " pod="openstack/horizon-55d64677cc-lw8j7" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.402441 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc315f1-a65d-4ba7-aa89-69ffe04b53a6-combined-ca-bundle\") pod \"placement-db-sync-7vjm5\" (UID: \"5fc315f1-a65d-4ba7-aa89-69ffe04b53a6\") " pod="openstack/placement-db-sync-7vjm5" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.402476 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fc315f1-a65d-4ba7-aa89-69ffe04b53a6-logs\") pod \"placement-db-sync-7vjm5\" (UID: \"5fc315f1-a65d-4ba7-aa89-69ffe04b53a6\") " pod="openstack/placement-db-sync-7vjm5" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.402496 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fc315f1-a65d-4ba7-aa89-69ffe04b53a6-scripts\") pod \"placement-db-sync-7vjm5\" (UID: \"5fc315f1-a65d-4ba7-aa89-69ffe04b53a6\") " pod="openstack/placement-db-sync-7vjm5" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.402527 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5150cd00-34c3-40d8-bacd-0c9858fbab6b-horizon-secret-key\") pod \"horizon-55d64677cc-lw8j7\" (UID: \"5150cd00-34c3-40d8-bacd-0c9858fbab6b\") " pod="openstack/horizon-55d64677cc-lw8j7" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.402575 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sclf\" (UniqueName: \"kubernetes.io/projected/5150cd00-34c3-40d8-bacd-0c9858fbab6b-kube-api-access-7sclf\") pod \"horizon-55d64677cc-lw8j7\" (UID: \"5150cd00-34c3-40d8-bacd-0c9858fbab6b\") " pod="openstack/horizon-55d64677cc-lw8j7" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.402980 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5150cd00-34c3-40d8-bacd-0c9858fbab6b-scripts\") pod \"horizon-55d64677cc-lw8j7\" (UID: \"5150cd00-34c3-40d8-bacd-0c9858fbab6b\") " pod="openstack/horizon-55d64677cc-lw8j7" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.403171 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5150cd00-34c3-40d8-bacd-0c9858fbab6b-logs\") pod \"horizon-55d64677cc-lw8j7\" (UID: \"5150cd00-34c3-40d8-bacd-0c9858fbab6b\") " pod="openstack/horizon-55d64677cc-lw8j7" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.404158 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/5150cd00-34c3-40d8-bacd-0c9858fbab6b-config-data\") pod \"horizon-55d64677cc-lw8j7\" (UID: \"5150cd00-34c3-40d8-bacd-0c9858fbab6b\") " pod="openstack/horizon-55d64677cc-lw8j7" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.412562 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5150cd00-34c3-40d8-bacd-0c9858fbab6b-horizon-secret-key\") pod \"horizon-55d64677cc-lw8j7\" (UID: \"5150cd00-34c3-40d8-bacd-0c9858fbab6b\") " pod="openstack/horizon-55d64677cc-lw8j7" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.412624 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-fgtj6"] Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.413733 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-fgtj6" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.419433 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-ntjzb" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.429407 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.429943 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sclf\" (UniqueName: \"kubernetes.io/projected/5150cd00-34c3-40d8-bacd-0c9858fbab6b-kube-api-access-7sclf\") pod \"horizon-55d64677cc-lw8j7\" (UID: \"5150cd00-34c3-40d8-bacd-0c9858fbab6b\") " pod="openstack/horizon-55d64677cc-lw8j7" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.445716 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-fgtj6"] Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.467757 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-589b5bf549-hvvfk"] Feb 28 13:37:43 crc 
kubenswrapper[4897]: I0228 13:37:43.469575 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-589b5bf549-hvvfk" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.478685 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-589b5bf549-hvvfk"] Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.497484 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.503143 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.505579 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc315f1-a65d-4ba7-aa89-69ffe04b53a6-combined-ca-bundle\") pod \"placement-db-sync-7vjm5\" (UID: \"5fc315f1-a65d-4ba7-aa89-69ffe04b53a6\") " pod="openstack/placement-db-sync-7vjm5" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.505612 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/661850a9-a877-476b-b3ae-a6c6f3b3676a-db-sync-config-data\") pod \"barbican-db-sync-fgtj6\" (UID: \"661850a9-a877-476b-b3ae-a6c6f3b3676a\") " pod="openstack/barbican-db-sync-fgtj6" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.506287 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fc315f1-a65d-4ba7-aa89-69ffe04b53a6-logs\") pod \"placement-db-sync-7vjm5\" (UID: \"5fc315f1-a65d-4ba7-aa89-69ffe04b53a6\") " pod="openstack/placement-db-sync-7vjm5" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.506343 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5fc315f1-a65d-4ba7-aa89-69ffe04b53a6-scripts\") pod \"placement-db-sync-7vjm5\" (UID: \"5fc315f1-a65d-4ba7-aa89-69ffe04b53a6\") " pod="openstack/placement-db-sync-7vjm5" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.506412 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gd7s\" (UniqueName: \"kubernetes.io/projected/23dda98f-2840-432f-876f-e180110c6c12-kube-api-access-4gd7s\") pod \"dnsmasq-dns-589b5bf549-hvvfk\" (UID: \"23dda98f-2840-432f-876f-e180110c6c12\") " pod="openstack/dnsmasq-dns-589b5bf549-hvvfk" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.506497 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23dda98f-2840-432f-876f-e180110c6c12-ovsdbserver-sb\") pod \"dnsmasq-dns-589b5bf549-hvvfk\" (UID: \"23dda98f-2840-432f-876f-e180110c6c12\") " pod="openstack/dnsmasq-dns-589b5bf549-hvvfk" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.506559 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23dda98f-2840-432f-876f-e180110c6c12-dns-svc\") pod \"dnsmasq-dns-589b5bf549-hvvfk\" (UID: \"23dda98f-2840-432f-876f-e180110c6c12\") " pod="openstack/dnsmasq-dns-589b5bf549-hvvfk" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.506583 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/661850a9-a877-476b-b3ae-a6c6f3b3676a-combined-ca-bundle\") pod \"barbican-db-sync-fgtj6\" (UID: \"661850a9-a877-476b-b3ae-a6c6f3b3676a\") " pod="openstack/barbican-db-sync-fgtj6" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.506641 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-dv9m8\" (UniqueName: \"kubernetes.io/projected/661850a9-a877-476b-b3ae-a6c6f3b3676a-kube-api-access-dv9m8\") pod \"barbican-db-sync-fgtj6\" (UID: \"661850a9-a877-476b-b3ae-a6c6f3b3676a\") " pod="openstack/barbican-db-sync-fgtj6" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.506663 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fc315f1-a65d-4ba7-aa89-69ffe04b53a6-config-data\") pod \"placement-db-sync-7vjm5\" (UID: \"5fc315f1-a65d-4ba7-aa89-69ffe04b53a6\") " pod="openstack/placement-db-sync-7vjm5" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.506685 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23dda98f-2840-432f-876f-e180110c6c12-dns-swift-storage-0\") pod \"dnsmasq-dns-589b5bf549-hvvfk\" (UID: \"23dda98f-2840-432f-876f-e180110c6c12\") " pod="openstack/dnsmasq-dns-589b5bf549-hvvfk" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.506751 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23dda98f-2840-432f-876f-e180110c6c12-config\") pod \"dnsmasq-dns-589b5bf549-hvvfk\" (UID: \"23dda98f-2840-432f-876f-e180110c6c12\") " pod="openstack/dnsmasq-dns-589b5bf549-hvvfk" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.506776 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl644\" (UniqueName: \"kubernetes.io/projected/5fc315f1-a65d-4ba7-aa89-69ffe04b53a6-kube-api-access-wl644\") pod \"placement-db-sync-7vjm5\" (UID: \"5fc315f1-a65d-4ba7-aa89-69ffe04b53a6\") " pod="openstack/placement-db-sync-7vjm5" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.506854 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23dda98f-2840-432f-876f-e180110c6c12-ovsdbserver-nb\") pod \"dnsmasq-dns-589b5bf549-hvvfk\" (UID: \"23dda98f-2840-432f-876f-e180110c6c12\") " pod="openstack/dnsmasq-dns-589b5bf549-hvvfk" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.508657 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-jzxcb" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.508877 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.508989 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.510386 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fc315f1-a65d-4ba7-aa89-69ffe04b53a6-logs\") pod \"placement-db-sync-7vjm5\" (UID: \"5fc315f1-a65d-4ba7-aa89-69ffe04b53a6\") " pod="openstack/placement-db-sync-7vjm5" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.512368 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc315f1-a65d-4ba7-aa89-69ffe04b53a6-combined-ca-bundle\") pod \"placement-db-sync-7vjm5\" (UID: \"5fc315f1-a65d-4ba7-aa89-69ffe04b53a6\") " pod="openstack/placement-db-sync-7vjm5" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.512717 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.513663 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fc315f1-a65d-4ba7-aa89-69ffe04b53a6-scripts\") pod \"placement-db-sync-7vjm5\" (UID: \"5fc315f1-a65d-4ba7-aa89-69ffe04b53a6\") " 
pod="openstack/placement-db-sync-7vjm5" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.517217 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fc315f1-a65d-4ba7-aa89-69ffe04b53a6-config-data\") pod \"placement-db-sync-7vjm5\" (UID: \"5fc315f1-a65d-4ba7-aa89-69ffe04b53a6\") " pod="openstack/placement-db-sync-7vjm5" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.528805 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.554757 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl644\" (UniqueName: \"kubernetes.io/projected/5fc315f1-a65d-4ba7-aa89-69ffe04b53a6-kube-api-access-wl644\") pod \"placement-db-sync-7vjm5\" (UID: \"5fc315f1-a65d-4ba7-aa89-69ffe04b53a6\") " pod="openstack/placement-db-sync-7vjm5" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.585341 4897 generic.go:334] "Generic (PLEG): container finished" podID="14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa" containerID="ada95bb3610ab69f074c87814444d49e93e837ed8e5fa9a2dfb572b31251deae" exitCode=0 Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.585381 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-674ddfcdf9-mbp22" event={"ID":"14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa","Type":"ContainerDied","Data":"ada95bb3610ab69f074c87814444d49e93e837ed8e5fa9a2dfb572b31251deae"} Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.610221 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv9m8\" (UniqueName: \"kubernetes.io/projected/661850a9-a877-476b-b3ae-a6c6f3b3676a-kube-api-access-dv9m8\") pod \"barbican-db-sync-fgtj6\" (UID: \"661850a9-a877-476b-b3ae-a6c6f3b3676a\") " pod="openstack/barbican-db-sync-fgtj6" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.613573 4897 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23dda98f-2840-432f-876f-e180110c6c12-dns-swift-storage-0\") pod \"dnsmasq-dns-589b5bf549-hvvfk\" (UID: \"23dda98f-2840-432f-876f-e180110c6c12\") " pod="openstack/dnsmasq-dns-589b5bf549-hvvfk" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.614337 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23dda98f-2840-432f-876f-e180110c6c12-config\") pod \"dnsmasq-dns-589b5bf549-hvvfk\" (UID: \"23dda98f-2840-432f-876f-e180110c6c12\") " pod="openstack/dnsmasq-dns-589b5bf549-hvvfk" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.614464 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23dda98f-2840-432f-876f-e180110c6c12-ovsdbserver-nb\") pod \"dnsmasq-dns-589b5bf549-hvvfk\" (UID: \"23dda98f-2840-432f-876f-e180110c6c12\") " pod="openstack/dnsmasq-dns-589b5bf549-hvvfk" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.614542 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/661850a9-a877-476b-b3ae-a6c6f3b3676a-db-sync-config-data\") pod \"barbican-db-sync-fgtj6\" (UID: \"661850a9-a877-476b-b3ae-a6c6f3b3676a\") " pod="openstack/barbican-db-sync-fgtj6" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.614681 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gd7s\" (UniqueName: \"kubernetes.io/projected/23dda98f-2840-432f-876f-e180110c6c12-kube-api-access-4gd7s\") pod \"dnsmasq-dns-589b5bf549-hvvfk\" (UID: \"23dda98f-2840-432f-876f-e180110c6c12\") " pod="openstack/dnsmasq-dns-589b5bf549-hvvfk" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.614754 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23dda98f-2840-432f-876f-e180110c6c12-ovsdbserver-sb\") pod \"dnsmasq-dns-589b5bf549-hvvfk\" (UID: \"23dda98f-2840-432f-876f-e180110c6c12\") " pod="openstack/dnsmasq-dns-589b5bf549-hvvfk" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.614826 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23dda98f-2840-432f-876f-e180110c6c12-dns-svc\") pod \"dnsmasq-dns-589b5bf549-hvvfk\" (UID: \"23dda98f-2840-432f-876f-e180110c6c12\") " pod="openstack/dnsmasq-dns-589b5bf549-hvvfk" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.614860 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/661850a9-a877-476b-b3ae-a6c6f3b3676a-combined-ca-bundle\") pod \"barbican-db-sync-fgtj6\" (UID: \"661850a9-a877-476b-b3ae-a6c6f3b3676a\") " pod="openstack/barbican-db-sync-fgtj6" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.615739 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23dda98f-2840-432f-876f-e180110c6c12-dns-swift-storage-0\") pod \"dnsmasq-dns-589b5bf549-hvvfk\" (UID: \"23dda98f-2840-432f-876f-e180110c6c12\") " pod="openstack/dnsmasq-dns-589b5bf549-hvvfk" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.618421 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23dda98f-2840-432f-876f-e180110c6c12-config\") pod \"dnsmasq-dns-589b5bf549-hvvfk\" (UID: \"23dda98f-2840-432f-876f-e180110c6c12\") " pod="openstack/dnsmasq-dns-589b5bf549-hvvfk" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.620032 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/661850a9-a877-476b-b3ae-a6c6f3b3676a-db-sync-config-data\") pod \"barbican-db-sync-fgtj6\" (UID: \"661850a9-a877-476b-b3ae-a6c6f3b3676a\") " pod="openstack/barbican-db-sync-fgtj6" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.622516 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23dda98f-2840-432f-876f-e180110c6c12-dns-svc\") pod \"dnsmasq-dns-589b5bf549-hvvfk\" (UID: \"23dda98f-2840-432f-876f-e180110c6c12\") " pod="openstack/dnsmasq-dns-589b5bf549-hvvfk" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.623201 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23dda98f-2840-432f-876f-e180110c6c12-ovsdbserver-sb\") pod \"dnsmasq-dns-589b5bf549-hvvfk\" (UID: \"23dda98f-2840-432f-876f-e180110c6c12\") " pod="openstack/dnsmasq-dns-589b5bf549-hvvfk" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.624194 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23dda98f-2840-432f-876f-e180110c6c12-ovsdbserver-nb\") pod \"dnsmasq-dns-589b5bf549-hvvfk\" (UID: \"23dda98f-2840-432f-876f-e180110c6c12\") " pod="openstack/dnsmasq-dns-589b5bf549-hvvfk" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.625609 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/661850a9-a877-476b-b3ae-a6c6f3b3676a-combined-ca-bundle\") pod \"barbican-db-sync-fgtj6\" (UID: \"661850a9-a877-476b-b3ae-a6c6f3b3676a\") " pod="openstack/barbican-db-sync-fgtj6" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.628567 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv9m8\" (UniqueName: \"kubernetes.io/projected/661850a9-a877-476b-b3ae-a6c6f3b3676a-kube-api-access-dv9m8\") pod \"barbican-db-sync-fgtj6\" 
(UID: \"661850a9-a877-476b-b3ae-a6c6f3b3676a\") " pod="openstack/barbican-db-sync-fgtj6" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.640577 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gd7s\" (UniqueName: \"kubernetes.io/projected/23dda98f-2840-432f-876f-e180110c6c12-kube-api-access-4gd7s\") pod \"dnsmasq-dns-589b5bf549-hvvfk\" (UID: \"23dda98f-2840-432f-876f-e180110c6c12\") " pod="openstack/dnsmasq-dns-589b5bf549-hvvfk" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.680051 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nlm9z"] Feb 28 13:37:43 crc kubenswrapper[4897]: W0228 13:37:43.682823 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b379adc_1a39_4972_80a3_74161c42728a.slice/crio-dcfff36f7654594dbf9a861cc7e377967e8f8cfb899c596987c944c244b989c5 WatchSource:0}: Error finding container dcfff36f7654594dbf9a861cc7e377967e8f8cfb899c596987c944c244b989c5: Status 404 returned error can't find the container with id dcfff36f7654594dbf9a861cc7e377967e8f8cfb899c596987c944c244b989c5 Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.694983 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-55d64677cc-lw8j7" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.716119 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f964b88-06a6-4ed0-9e1a-6a7338fba9be-config-data\") pod \"glance-default-external-api-0\" (UID: \"1f964b88-06a6-4ed0-9e1a-6a7338fba9be\") " pod="openstack/glance-default-external-api-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.716172 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqxc7\" (UniqueName: \"kubernetes.io/projected/1f964b88-06a6-4ed0-9e1a-6a7338fba9be-kube-api-access-zqxc7\") pod \"glance-default-external-api-0\" (UID: \"1f964b88-06a6-4ed0-9e1a-6a7338fba9be\") " pod="openstack/glance-default-external-api-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.716192 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f964b88-06a6-4ed0-9e1a-6a7338fba9be-scripts\") pod \"glance-default-external-api-0\" (UID: \"1f964b88-06a6-4ed0-9e1a-6a7338fba9be\") " pod="openstack/glance-default-external-api-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.716776 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f964b88-06a6-4ed0-9e1a-6a7338fba9be-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1f964b88-06a6-4ed0-9e1a-6a7338fba9be\") " pod="openstack/glance-default-external-api-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.716892 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f964b88-06a6-4ed0-9e1a-6a7338fba9be-combined-ca-bundle\") pod 
\"glance-default-external-api-0\" (UID: \"1f964b88-06a6-4ed0-9e1a-6a7338fba9be\") " pod="openstack/glance-default-external-api-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.716927 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"1f964b88-06a6-4ed0-9e1a-6a7338fba9be\") " pod="openstack/glance-default-external-api-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.716959 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f964b88-06a6-4ed0-9e1a-6a7338fba9be-logs\") pod \"glance-default-external-api-0\" (UID: \"1f964b88-06a6-4ed0-9e1a-6a7338fba9be\") " pod="openstack/glance-default-external-api-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.717009 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f964b88-06a6-4ed0-9e1a-6a7338fba9be-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1f964b88-06a6-4ed0-9e1a-6a7338fba9be\") " pod="openstack/glance-default-external-api-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.717371 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-7vjm5" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.757788 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-fgtj6" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.807730 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-589b5bf549-hvvfk" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.817743 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f964b88-06a6-4ed0-9e1a-6a7338fba9be-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1f964b88-06a6-4ed0-9e1a-6a7338fba9be\") " pod="openstack/glance-default-external-api-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.817827 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f964b88-06a6-4ed0-9e1a-6a7338fba9be-config-data\") pod \"glance-default-external-api-0\" (UID: \"1f964b88-06a6-4ed0-9e1a-6a7338fba9be\") " pod="openstack/glance-default-external-api-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.817864 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqxc7\" (UniqueName: \"kubernetes.io/projected/1f964b88-06a6-4ed0-9e1a-6a7338fba9be-kube-api-access-zqxc7\") pod \"glance-default-external-api-0\" (UID: \"1f964b88-06a6-4ed0-9e1a-6a7338fba9be\") " pod="openstack/glance-default-external-api-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.817885 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f964b88-06a6-4ed0-9e1a-6a7338fba9be-scripts\") pod \"glance-default-external-api-0\" (UID: \"1f964b88-06a6-4ed0-9e1a-6a7338fba9be\") " pod="openstack/glance-default-external-api-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.817907 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f964b88-06a6-4ed0-9e1a-6a7338fba9be-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1f964b88-06a6-4ed0-9e1a-6a7338fba9be\") " 
pod="openstack/glance-default-external-api-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.817956 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f964b88-06a6-4ed0-9e1a-6a7338fba9be-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1f964b88-06a6-4ed0-9e1a-6a7338fba9be\") " pod="openstack/glance-default-external-api-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.817974 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"1f964b88-06a6-4ed0-9e1a-6a7338fba9be\") " pod="openstack/glance-default-external-api-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.817991 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f964b88-06a6-4ed0-9e1a-6a7338fba9be-logs\") pod \"glance-default-external-api-0\" (UID: \"1f964b88-06a6-4ed0-9e1a-6a7338fba9be\") " pod="openstack/glance-default-external-api-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.818394 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f964b88-06a6-4ed0-9e1a-6a7338fba9be-logs\") pod \"glance-default-external-api-0\" (UID: \"1f964b88-06a6-4ed0-9e1a-6a7338fba9be\") " pod="openstack/glance-default-external-api-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.818614 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f964b88-06a6-4ed0-9e1a-6a7338fba9be-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1f964b88-06a6-4ed0-9e1a-6a7338fba9be\") " pod="openstack/glance-default-external-api-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.821237 4897 
operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"1f964b88-06a6-4ed0-9e1a-6a7338fba9be\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.825155 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f964b88-06a6-4ed0-9e1a-6a7338fba9be-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1f964b88-06a6-4ed0-9e1a-6a7338fba9be\") " pod="openstack/glance-default-external-api-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.825440 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f964b88-06a6-4ed0-9e1a-6a7338fba9be-config-data\") pod \"glance-default-external-api-0\" (UID: \"1f964b88-06a6-4ed0-9e1a-6a7338fba9be\") " pod="openstack/glance-default-external-api-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.840137 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f964b88-06a6-4ed0-9e1a-6a7338fba9be-scripts\") pod \"glance-default-external-api-0\" (UID: \"1f964b88-06a6-4ed0-9e1a-6a7338fba9be\") " pod="openstack/glance-default-external-api-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.851935 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"1f964b88-06a6-4ed0-9e1a-6a7338fba9be\") " pod="openstack/glance-default-external-api-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.852036 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqxc7\" (UniqueName: 
\"kubernetes.io/projected/1f964b88-06a6-4ed0-9e1a-6a7338fba9be-kube-api-access-zqxc7\") pod \"glance-default-external-api-0\" (UID: \"1f964b88-06a6-4ed0-9e1a-6a7338fba9be\") " pod="openstack/glance-default-external-api-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.858451 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f964b88-06a6-4ed0-9e1a-6a7338fba9be-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1f964b88-06a6-4ed0-9e1a-6a7338fba9be\") " pod="openstack/glance-default-external-api-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.964087 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.965606 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.972123 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.972818 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 28 13:37:43 crc kubenswrapper[4897]: I0228 13:37:43.987705 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.063966 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-qc9bp"] Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.112269 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-68bc5769f5-kt85c"] Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.154500 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.154580 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c199ad99-a479-4d3f-a78f-fce1c2889070-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c199ad99-a479-4d3f-a78f-fce1c2889070\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.155447 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c199ad99-a479-4d3f-a78f-fce1c2889070-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c199ad99-a479-4d3f-a78f-fce1c2889070\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.155484 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"c199ad99-a479-4d3f-a78f-fce1c2889070\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.155595 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqpz8\" (UniqueName: \"kubernetes.io/projected/c199ad99-a479-4d3f-a78f-fce1c2889070-kube-api-access-tqpz8\") pod \"glance-default-internal-api-0\" (UID: \"c199ad99-a479-4d3f-a78f-fce1c2889070\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.155759 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c199ad99-a479-4d3f-a78f-fce1c2889070-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"c199ad99-a479-4d3f-a78f-fce1c2889070\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.156013 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c199ad99-a479-4d3f-a78f-fce1c2889070-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c199ad99-a479-4d3f-a78f-fce1c2889070\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.156069 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c199ad99-a479-4d3f-a78f-fce1c2889070-logs\") pod \"glance-default-internal-api-0\" (UID: \"c199ad99-a479-4d3f-a78f-fce1c2889070\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.156127 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c199ad99-a479-4d3f-a78f-fce1c2889070-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c199ad99-a479-4d3f-a78f-fce1c2889070\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.258643 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c199ad99-a479-4d3f-a78f-fce1c2889070-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c199ad99-a479-4d3f-a78f-fce1c2889070\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.258932 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c199ad99-a479-4d3f-a78f-fce1c2889070-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"c199ad99-a479-4d3f-a78f-fce1c2889070\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.260079 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"c199ad99-a479-4d3f-a78f-fce1c2889070\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.260244 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqpz8\" (UniqueName: \"kubernetes.io/projected/c199ad99-a479-4d3f-a78f-fce1c2889070-kube-api-access-tqpz8\") pod \"glance-default-internal-api-0\" (UID: \"c199ad99-a479-4d3f-a78f-fce1c2889070\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.260416 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c199ad99-a479-4d3f-a78f-fce1c2889070-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c199ad99-a479-4d3f-a78f-fce1c2889070\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.260499 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c199ad99-a479-4d3f-a78f-fce1c2889070-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c199ad99-a479-4d3f-a78f-fce1c2889070\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.260615 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c199ad99-a479-4d3f-a78f-fce1c2889070-logs\") pod \"glance-default-internal-api-0\" (UID: \"c199ad99-a479-4d3f-a78f-fce1c2889070\") " 
pod="openstack/glance-default-internal-api-0" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.262020 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c199ad99-a479-4d3f-a78f-fce1c2889070-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c199ad99-a479-4d3f-a78f-fce1c2889070\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.261152 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"c199ad99-a479-4d3f-a78f-fce1c2889070\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.259896 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c199ad99-a479-4d3f-a78f-fce1c2889070-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c199ad99-a479-4d3f-a78f-fce1c2889070\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.262677 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c199ad99-a479-4d3f-a78f-fce1c2889070-logs\") pod \"glance-default-internal-api-0\" (UID: \"c199ad99-a479-4d3f-a78f-fce1c2889070\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.266728 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c199ad99-a479-4d3f-a78f-fce1c2889070-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c199ad99-a479-4d3f-a78f-fce1c2889070\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:37:44 crc kubenswrapper[4897]: 
I0228 13:37:44.267695 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c199ad99-a479-4d3f-a78f-fce1c2889070-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c199ad99-a479-4d3f-a78f-fce1c2889070\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.272706 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c199ad99-a479-4d3f-a78f-fce1c2889070-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c199ad99-a479-4d3f-a78f-fce1c2889070\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.275235 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c199ad99-a479-4d3f-a78f-fce1c2889070-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c199ad99-a479-4d3f-a78f-fce1c2889070\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.286176 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqpz8\" (UniqueName: \"kubernetes.io/projected/c199ad99-a479-4d3f-a78f-fce1c2889070-kube-api-access-tqpz8\") pod \"glance-default-internal-api-0\" (UID: \"c199ad99-a479-4d3f-a78f-fce1c2889070\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.312864 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"c199ad99-a479-4d3f-a78f-fce1c2889070\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.421938 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-674ddfcdf9-mbp22" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.432782 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bb4d5cf99-8rqmm"] Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.464906 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pl9zd\" (UniqueName: \"kubernetes.io/projected/14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa-kube-api-access-pl9zd\") pod \"14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa\" (UID: \"14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa\") " Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.464954 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa-config\") pod \"14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa\" (UID: \"14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa\") " Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.465054 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa-ovsdbserver-sb\") pod \"14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa\" (UID: \"14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa\") " Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.465088 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa-ovsdbserver-nb\") pod \"14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa\" (UID: \"14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa\") " Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.465184 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa-dns-svc\") pod \"14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa\" (UID: \"14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa\") " Feb 28 
13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.465236 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa-dns-swift-storage-0\") pod \"14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa\" (UID: \"14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa\") " Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.487965 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa-kube-api-access-pl9zd" (OuterVolumeSpecName: "kube-api-access-pl9zd") pod "14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa" (UID: "14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa"). InnerVolumeSpecName "kube-api-access-pl9zd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.513250 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-h59fj"] Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.519625 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.529116 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa" (UID: "14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.551133 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa" (UID: "14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.564575 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa" (UID: "14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.566615 4897 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.566791 4897 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.566807 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pl9zd\" (UniqueName: \"kubernetes.io/projected/14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa-kube-api-access-pl9zd\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.566817 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.578209 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa-config" (OuterVolumeSpecName: "config") pod "14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa" (UID: "14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.583586 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa" (UID: "14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.589021 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.597587 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68bc5769f5-kt85c" event={"ID":"015cae83-dbd9-4d4b-84f6-e90aa405acf2","Type":"ContainerStarted","Data":"41b0470c98c47a27ba6690861025ce0d0ba09118d48ccc3568a808d9acb60781"} Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.613470 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nlm9z" event={"ID":"8b379adc-1a39-4972-80a3-74161c42728a","Type":"ContainerStarted","Data":"eef4761385fed15a9a6e49dccbd224ac9626ad14ebd4ecdc6a0f42ee0b6d8e58"} Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.613521 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nlm9z" event={"ID":"8b379adc-1a39-4972-80a3-74161c42728a","Type":"ContainerStarted","Data":"dcfff36f7654594dbf9a861cc7e377967e8f8cfb899c596987c944c244b989c5"} Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.618884 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7aff986-c99b-43a7-afc8-b9194ce17385","Type":"ContainerStarted","Data":"064dca034838227707564d66d78142ca96bd2b8843684309705195a1b7fa45e6"} Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.645190 4897 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qc9bp" event={"ID":"99d7bd5a-52d0-4a8f-bd1d-542a957d815f","Type":"ContainerStarted","Data":"316f4ff8d86c10b2247ca63060e60068ea860f8805bf6b8a025f41581a628fb2"} Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.645236 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qc9bp" event={"ID":"99d7bd5a-52d0-4a8f-bd1d-542a957d815f","Type":"ContainerStarted","Data":"674673298ab64739a2e39e1fc27af8cd4ce8053084d302ff1d53d768d1621c58"} Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.646704 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb4d5cf99-8rqmm" event={"ID":"0878e878-db5e-472b-90b9-9d0e8ce035d3","Type":"ContainerStarted","Data":"e2497c045195dc21c39099f7a719431bd90fbf65f1b393c93bc77704b272cca7"} Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.656620 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-ckz4d" event={"ID":"bfd3841c-39bf-454c-88de-5156d769cf7e","Type":"ContainerStarted","Data":"acd013fc13fa55135de1a45d2aa5c536b91a97bb3bd8e14bb174f4c0bebf8c6e"} Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.664111 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-674ddfcdf9-mbp22" event={"ID":"14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa","Type":"ContainerDied","Data":"51de70554479392612ae46765acfc96f8e0eaa5c3fac6bac98296adb1bab1b0a"} Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.664145 4897 scope.go:117] "RemoveContainer" containerID="ada95bb3610ab69f074c87814444d49e93e837ed8e5fa9a2dfb572b31251deae" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.664246 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-674ddfcdf9-mbp22" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.669135 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-nlm9z" podStartSLOduration=2.6691171689999997 podStartE2EDuration="2.669117169s" podCreationTimestamp="2026-02-28 13:37:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:37:44.638753026 +0000 UTC m=+1278.881073683" watchObservedRunningTime="2026-02-28 13:37:44.669117169 +0000 UTC m=+1278.911437826" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.674217 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.674258 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.681804 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-h59fj" event={"ID":"bd9edcf1-516a-46a6-a77b-5061505a58d7","Type":"ContainerStarted","Data":"c5185e76c1fc66c3cf72be3b666a7476464ba1d7d181dc393c829e31549fcc67"} Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.696405 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-ckz4d" podStartSLOduration=5.197989903 podStartE2EDuration="15.696381343s" podCreationTimestamp="2026-02-28 13:37:29 +0000 UTC" firstStartedPulling="2026-02-28 13:37:33.538056857 +0000 UTC m=+1267.780377524" lastFinishedPulling="2026-02-28 13:37:44.036448307 +0000 UTC m=+1278.278768964" observedRunningTime="2026-02-28 13:37:44.693397259 +0000 UTC 
m=+1278.935717906" watchObservedRunningTime="2026-02-28 13:37:44.696381343 +0000 UTC m=+1278.938702000" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.715077 4897 scope.go:117] "RemoveContainer" containerID="a09f7ae0619f4efc6a64edde9c510bbce99210b68e1e366545060d64f1f2bbcb" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.719760 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-qc9bp" podStartSLOduration=2.719740297 podStartE2EDuration="2.719740297s" podCreationTimestamp="2026-02-28 13:37:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:37:44.670542589 +0000 UTC m=+1278.912863246" watchObservedRunningTime="2026-02-28 13:37:44.719740297 +0000 UTC m=+1278.962060954" Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.779218 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-589b5bf549-hvvfk"] Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.791291 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-fgtj6"] Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.827340 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-55d64677cc-lw8j7"] Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.858713 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-7vjm5"] Feb 28 13:37:44 crc kubenswrapper[4897]: W0228 13:37:44.869681 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5150cd00_34c3_40d8_bacd_0c9858fbab6b.slice/crio-fe2c29830940cebf630228cc32eb2334556062b7e9acf9f7bbea78834cbe8c24 WatchSource:0}: Error finding container fe2c29830940cebf630228cc32eb2334556062b7e9acf9f7bbea78834cbe8c24: Status 404 returned error can't find the container with id 
fe2c29830940cebf630228cc32eb2334556062b7e9acf9f7bbea78834cbe8c24 Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.895055 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-674ddfcdf9-mbp22"] Feb 28 13:37:44 crc kubenswrapper[4897]: I0228 13:37:44.914146 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-674ddfcdf9-mbp22"] Feb 28 13:37:45 crc kubenswrapper[4897]: I0228 13:37:45.714626 4897 generic.go:334] "Generic (PLEG): container finished" podID="23dda98f-2840-432f-876f-e180110c6c12" containerID="b585eeb1ace494c7d19f81d413135e43fbb58029cb99e32b6d542b606d451b3a" exitCode=0 Feb 28 13:37:45 crc kubenswrapper[4897]: I0228 13:37:45.714794 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-589b5bf549-hvvfk" event={"ID":"23dda98f-2840-432f-876f-e180110c6c12","Type":"ContainerDied","Data":"b585eeb1ace494c7d19f81d413135e43fbb58029cb99e32b6d542b606d451b3a"} Feb 28 13:37:45 crc kubenswrapper[4897]: I0228 13:37:45.715084 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-589b5bf549-hvvfk" event={"ID":"23dda98f-2840-432f-876f-e180110c6c12","Type":"ContainerStarted","Data":"e06eb1a9806e202ee28a707f3dc01b31b22ca8fb412700a835e241bbf3610df9"} Feb 28 13:37:45 crc kubenswrapper[4897]: I0228 13:37:45.730413 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55d64677cc-lw8j7" event={"ID":"5150cd00-34c3-40d8-bacd-0c9858fbab6b","Type":"ContainerStarted","Data":"fe2c29830940cebf630228cc32eb2334556062b7e9acf9f7bbea78834cbe8c24"} Feb 28 13:37:45 crc kubenswrapper[4897]: I0228 13:37:45.747179 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fgtj6" event={"ID":"661850a9-a877-476b-b3ae-a6c6f3b3676a","Type":"ContainerStarted","Data":"657c248d8e6829eadc45285f755df35ba872cb91106aa591907ec5ee289f81b9"} Feb 28 13:37:45 crc kubenswrapper[4897]: I0228 13:37:45.750995 4897 generic.go:334] "Generic (PLEG): 
container finished" podID="0878e878-db5e-472b-90b9-9d0e8ce035d3" containerID="a41e05353086796980dcdd57e1bcf797e0d70db2599c1bc6e8eab2240af8a3f1" exitCode=0 Feb 28 13:37:45 crc kubenswrapper[4897]: I0228 13:37:45.751092 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb4d5cf99-8rqmm" event={"ID":"0878e878-db5e-472b-90b9-9d0e8ce035d3","Type":"ContainerDied","Data":"a41e05353086796980dcdd57e1bcf797e0d70db2599c1bc6e8eab2240af8a3f1"} Feb 28 13:37:45 crc kubenswrapper[4897]: I0228 13:37:45.761789 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7vjm5" event={"ID":"5fc315f1-a65d-4ba7-aa89-69ffe04b53a6","Type":"ContainerStarted","Data":"b025c8d89f1355915452229df555a300e51cc55e81a101d52dc118a34d3a7562"} Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.083496 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.162419 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bb4d5cf99-8rqmm" Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.236119 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zl9w\" (UniqueName: \"kubernetes.io/projected/0878e878-db5e-472b-90b9-9d0e8ce035d3-kube-api-access-4zl9w\") pod \"0878e878-db5e-472b-90b9-9d0e8ce035d3\" (UID: \"0878e878-db5e-472b-90b9-9d0e8ce035d3\") " Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.236218 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0878e878-db5e-472b-90b9-9d0e8ce035d3-config\") pod \"0878e878-db5e-472b-90b9-9d0e8ce035d3\" (UID: \"0878e878-db5e-472b-90b9-9d0e8ce035d3\") " Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.236332 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0878e878-db5e-472b-90b9-9d0e8ce035d3-dns-swift-storage-0\") pod \"0878e878-db5e-472b-90b9-9d0e8ce035d3\" (UID: \"0878e878-db5e-472b-90b9-9d0e8ce035d3\") " Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.236375 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0878e878-db5e-472b-90b9-9d0e8ce035d3-dns-svc\") pod \"0878e878-db5e-472b-90b9-9d0e8ce035d3\" (UID: \"0878e878-db5e-472b-90b9-9d0e8ce035d3\") " Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.236413 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0878e878-db5e-472b-90b9-9d0e8ce035d3-ovsdbserver-nb\") pod \"0878e878-db5e-472b-90b9-9d0e8ce035d3\" (UID: \"0878e878-db5e-472b-90b9-9d0e8ce035d3\") " Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.236454 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/0878e878-db5e-472b-90b9-9d0e8ce035d3-ovsdbserver-sb\") pod \"0878e878-db5e-472b-90b9-9d0e8ce035d3\" (UID: \"0878e878-db5e-472b-90b9-9d0e8ce035d3\") " Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.255569 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0878e878-db5e-472b-90b9-9d0e8ce035d3-kube-api-access-4zl9w" (OuterVolumeSpecName: "kube-api-access-4zl9w") pod "0878e878-db5e-472b-90b9-9d0e8ce035d3" (UID: "0878e878-db5e-472b-90b9-9d0e8ce035d3"). InnerVolumeSpecName "kube-api-access-4zl9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.265532 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0878e878-db5e-472b-90b9-9d0e8ce035d3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0878e878-db5e-472b-90b9-9d0e8ce035d3" (UID: "0878e878-db5e-472b-90b9-9d0e8ce035d3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.266811 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0878e878-db5e-472b-90b9-9d0e8ce035d3-config" (OuterVolumeSpecName: "config") pod "0878e878-db5e-472b-90b9-9d0e8ce035d3" (UID: "0878e878-db5e-472b-90b9-9d0e8ce035d3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.278719 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0878e878-db5e-472b-90b9-9d0e8ce035d3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0878e878-db5e-472b-90b9-9d0e8ce035d3" (UID: "0878e878-db5e-472b-90b9-9d0e8ce035d3"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.292500 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0878e878-db5e-472b-90b9-9d0e8ce035d3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0878e878-db5e-472b-90b9-9d0e8ce035d3" (UID: "0878e878-db5e-472b-90b9-9d0e8ce035d3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.296374 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0878e878-db5e-472b-90b9-9d0e8ce035d3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0878e878-db5e-472b-90b9-9d0e8ce035d3" (UID: "0878e878-db5e-472b-90b9-9d0e8ce035d3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.344813 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0878e878-db5e-472b-90b9-9d0e8ce035d3-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.344855 4897 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0878e878-db5e-472b-90b9-9d0e8ce035d3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.344866 4897 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0878e878-db5e-472b-90b9-9d0e8ce035d3-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.344904 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0878e878-db5e-472b-90b9-9d0e8ce035d3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:46 crc 
kubenswrapper[4897]: I0228 13:37:46.344915 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0878e878-db5e-472b-90b9-9d0e8ce035d3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.344923 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zl9w\" (UniqueName: \"kubernetes.io/projected/0878e878-db5e-472b-90b9-9d0e8ce035d3-kube-api-access-4zl9w\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.502537 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa" path="/var/lib/kubelet/pods/14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa/volumes" Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.503431 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.553356 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-55d64677cc-lw8j7"] Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.613494 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.671371 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.687372 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-85bccd86cc-mcgvg"] Feb 28 13:37:46 crc kubenswrapper[4897]: E0228 13:37:46.687876 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa" containerName="dnsmasq-dns" Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.687890 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa" containerName="dnsmasq-dns" Feb 28 13:37:46 crc 
kubenswrapper[4897]: E0228 13:37:46.687904 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0878e878-db5e-472b-90b9-9d0e8ce035d3" containerName="init" Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.687909 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="0878e878-db5e-472b-90b9-9d0e8ce035d3" containerName="init" Feb 28 13:37:46 crc kubenswrapper[4897]: E0228 13:37:46.687940 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa" containerName="init" Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.687946 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa" containerName="init" Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.688125 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="0878e878-db5e-472b-90b9-9d0e8ce035d3" containerName="init" Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.688156 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="14ac33d1-ebdc-4644-8fee-b81bbfb0dbaa" containerName="dnsmasq-dns" Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.689109 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-85bccd86cc-mcgvg" Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.712417 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-85bccd86cc-mcgvg"] Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.758695 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bms8x\" (UniqueName: \"kubernetes.io/projected/cef93ead-4ac6-4a39-aa6f-17c85a2a9a81-kube-api-access-bms8x\") pod \"horizon-85bccd86cc-mcgvg\" (UID: \"cef93ead-4ac6-4a39-aa6f-17c85a2a9a81\") " pod="openstack/horizon-85bccd86cc-mcgvg" Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.758759 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cef93ead-4ac6-4a39-aa6f-17c85a2a9a81-scripts\") pod \"horizon-85bccd86cc-mcgvg\" (UID: \"cef93ead-4ac6-4a39-aa6f-17c85a2a9a81\") " pod="openstack/horizon-85bccd86cc-mcgvg" Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.758923 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cef93ead-4ac6-4a39-aa6f-17c85a2a9a81-config-data\") pod \"horizon-85bccd86cc-mcgvg\" (UID: \"cef93ead-4ac6-4a39-aa6f-17c85a2a9a81\") " pod="openstack/horizon-85bccd86cc-mcgvg" Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.758948 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cef93ead-4ac6-4a39-aa6f-17c85a2a9a81-horizon-secret-key\") pod \"horizon-85bccd86cc-mcgvg\" (UID: \"cef93ead-4ac6-4a39-aa6f-17c85a2a9a81\") " pod="openstack/horizon-85bccd86cc-mcgvg" Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.758994 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/cef93ead-4ac6-4a39-aa6f-17c85a2a9a81-logs\") pod \"horizon-85bccd86cc-mcgvg\" (UID: \"cef93ead-4ac6-4a39-aa6f-17c85a2a9a81\") " pod="openstack/horizon-85bccd86cc-mcgvg" Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.850969 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-589b5bf549-hvvfk" event={"ID":"23dda98f-2840-432f-876f-e180110c6c12","Type":"ContainerStarted","Data":"3b4b6fc4c3c84623f5a7937c1a34cf377b3c073e820dade941bd72fde3942816"} Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.851167 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-589b5bf549-hvvfk" Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.852652 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1f964b88-06a6-4ed0-9e1a-6a7338fba9be","Type":"ContainerStarted","Data":"2a12816c481c15948fdb85d5966c637c6b9280dec3df86624234b6f2854224e7"} Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.857578 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb4d5cf99-8rqmm" event={"ID":"0878e878-db5e-472b-90b9-9d0e8ce035d3","Type":"ContainerDied","Data":"e2497c045195dc21c39099f7a719431bd90fbf65f1b393c93bc77704b272cca7"} Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.857622 4897 scope.go:117] "RemoveContainer" containerID="a41e05353086796980dcdd57e1bcf797e0d70db2599c1bc6e8eab2240af8a3f1" Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.857771 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bb4d5cf99-8rqmm" Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.860605 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cef93ead-4ac6-4a39-aa6f-17c85a2a9a81-logs\") pod \"horizon-85bccd86cc-mcgvg\" (UID: \"cef93ead-4ac6-4a39-aa6f-17c85a2a9a81\") " pod="openstack/horizon-85bccd86cc-mcgvg" Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.860767 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bms8x\" (UniqueName: \"kubernetes.io/projected/cef93ead-4ac6-4a39-aa6f-17c85a2a9a81-kube-api-access-bms8x\") pod \"horizon-85bccd86cc-mcgvg\" (UID: \"cef93ead-4ac6-4a39-aa6f-17c85a2a9a81\") " pod="openstack/horizon-85bccd86cc-mcgvg" Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.860901 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cef93ead-4ac6-4a39-aa6f-17c85a2a9a81-scripts\") pod \"horizon-85bccd86cc-mcgvg\" (UID: \"cef93ead-4ac6-4a39-aa6f-17c85a2a9a81\") " pod="openstack/horizon-85bccd86cc-mcgvg" Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.861052 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cef93ead-4ac6-4a39-aa6f-17c85a2a9a81-config-data\") pod \"horizon-85bccd86cc-mcgvg\" (UID: \"cef93ead-4ac6-4a39-aa6f-17c85a2a9a81\") " pod="openstack/horizon-85bccd86cc-mcgvg" Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.861082 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cef93ead-4ac6-4a39-aa6f-17c85a2a9a81-horizon-secret-key\") pod \"horizon-85bccd86cc-mcgvg\" (UID: \"cef93ead-4ac6-4a39-aa6f-17c85a2a9a81\") " pod="openstack/horizon-85bccd86cc-mcgvg" Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 
13:37:46.861973 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cef93ead-4ac6-4a39-aa6f-17c85a2a9a81-logs\") pod \"horizon-85bccd86cc-mcgvg\" (UID: \"cef93ead-4ac6-4a39-aa6f-17c85a2a9a81\") " pod="openstack/horizon-85bccd86cc-mcgvg" Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.863152 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cef93ead-4ac6-4a39-aa6f-17c85a2a9a81-scripts\") pod \"horizon-85bccd86cc-mcgvg\" (UID: \"cef93ead-4ac6-4a39-aa6f-17c85a2a9a81\") " pod="openstack/horizon-85bccd86cc-mcgvg" Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.864582 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cef93ead-4ac6-4a39-aa6f-17c85a2a9a81-config-data\") pod \"horizon-85bccd86cc-mcgvg\" (UID: \"cef93ead-4ac6-4a39-aa6f-17c85a2a9a81\") " pod="openstack/horizon-85bccd86cc-mcgvg" Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.880885 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-589b5bf549-hvvfk" podStartSLOduration=3.880864776 podStartE2EDuration="3.880864776s" podCreationTimestamp="2026-02-28 13:37:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:37:46.874265329 +0000 UTC m=+1281.116585986" watchObservedRunningTime="2026-02-28 13:37:46.880864776 +0000 UTC m=+1281.123185453" Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.882659 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bms8x\" (UniqueName: \"kubernetes.io/projected/cef93ead-4ac6-4a39-aa6f-17c85a2a9a81-kube-api-access-bms8x\") pod \"horizon-85bccd86cc-mcgvg\" (UID: \"cef93ead-4ac6-4a39-aa6f-17c85a2a9a81\") " pod="openstack/horizon-85bccd86cc-mcgvg" Feb 28 
13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.885883 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cef93ead-4ac6-4a39-aa6f-17c85a2a9a81-horizon-secret-key\") pod \"horizon-85bccd86cc-mcgvg\" (UID: \"cef93ead-4ac6-4a39-aa6f-17c85a2a9a81\") " pod="openstack/horizon-85bccd86cc-mcgvg" Feb 28 13:37:46 crc kubenswrapper[4897]: I0228 13:37:46.980364 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bb4d5cf99-8rqmm"] Feb 28 13:37:47 crc kubenswrapper[4897]: I0228 13:37:47.005742 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bb4d5cf99-8rqmm"] Feb 28 13:37:47 crc kubenswrapper[4897]: I0228 13:37:47.077513 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-85bccd86cc-mcgvg" Feb 28 13:37:47 crc kubenswrapper[4897]: I0228 13:37:47.125547 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 28 13:37:47 crc kubenswrapper[4897]: I0228 13:37:47.762370 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-85bccd86cc-mcgvg"] Feb 28 13:37:47 crc kubenswrapper[4897]: I0228 13:37:47.869882 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-85bccd86cc-mcgvg" event={"ID":"cef93ead-4ac6-4a39-aa6f-17c85a2a9a81","Type":"ContainerStarted","Data":"a9fb18997cd0b01a40bc1484e2080e23d330ce4bc6052bf70654c97adab052d0"} Feb 28 13:37:47 crc kubenswrapper[4897]: I0228 13:37:47.874608 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1f964b88-06a6-4ed0-9e1a-6a7338fba9be","Type":"ContainerStarted","Data":"fbfe90ad209b1c03b72e5dcddcd6a857e0be775fdb611338217d0f9b18b36e0e"} Feb 28 13:37:47 crc kubenswrapper[4897]: I0228 13:37:47.877479 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"c199ad99-a479-4d3f-a78f-fce1c2889070","Type":"ContainerStarted","Data":"47a4e2dfd9934f60fb8f6789b424e99159af60cd73089f745d337c5c4d4f3d4c"} Feb 28 13:37:48 crc kubenswrapper[4897]: I0228 13:37:48.473289 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0878e878-db5e-472b-90b9-9d0e8ce035d3" path="/var/lib/kubelet/pods/0878e878-db5e-472b-90b9-9d0e8ce035d3/volumes" Feb 28 13:37:48 crc kubenswrapper[4897]: I0228 13:37:48.897788 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c199ad99-a479-4d3f-a78f-fce1c2889070","Type":"ContainerStarted","Data":"f21848a10ad018f1f2a68714c51863cc87a58caa808c5be74d9984f1ab7e3383"} Feb 28 13:37:49 crc kubenswrapper[4897]: I0228 13:37:49.915826 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1f964b88-06a6-4ed0-9e1a-6a7338fba9be","Type":"ContainerStarted","Data":"41b1d74b6f2ba97548e6bafb2c5bdc4dce7eb8a162527b84976c63eded1728b7"} Feb 28 13:37:49 crc kubenswrapper[4897]: I0228 13:37:49.915972 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1f964b88-06a6-4ed0-9e1a-6a7338fba9be" containerName="glance-log" containerID="cri-o://fbfe90ad209b1c03b72e5dcddcd6a857e0be775fdb611338217d0f9b18b36e0e" gracePeriod=30 Feb 28 13:37:49 crc kubenswrapper[4897]: I0228 13:37:49.916135 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1f964b88-06a6-4ed0-9e1a-6a7338fba9be" containerName="glance-httpd" containerID="cri-o://41b1d74b6f2ba97548e6bafb2c5bdc4dce7eb8a162527b84976c63eded1728b7" gracePeriod=30 Feb 28 13:37:49 crc kubenswrapper[4897]: I0228 13:37:49.939663 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.939648124 
podStartE2EDuration="6.939648124s" podCreationTimestamp="2026-02-28 13:37:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:37:49.93560767 +0000 UTC m=+1284.177928327" watchObservedRunningTime="2026-02-28 13:37:49.939648124 +0000 UTC m=+1284.181968781" Feb 28 13:37:50 crc kubenswrapper[4897]: I0228 13:37:50.933092 4897 generic.go:334] "Generic (PLEG): container finished" podID="1f964b88-06a6-4ed0-9e1a-6a7338fba9be" containerID="41b1d74b6f2ba97548e6bafb2c5bdc4dce7eb8a162527b84976c63eded1728b7" exitCode=0 Feb 28 13:37:50 crc kubenswrapper[4897]: I0228 13:37:50.933669 4897 generic.go:334] "Generic (PLEG): container finished" podID="1f964b88-06a6-4ed0-9e1a-6a7338fba9be" containerID="fbfe90ad209b1c03b72e5dcddcd6a857e0be775fdb611338217d0f9b18b36e0e" exitCode=143 Feb 28 13:37:50 crc kubenswrapper[4897]: I0228 13:37:50.933217 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1f964b88-06a6-4ed0-9e1a-6a7338fba9be","Type":"ContainerDied","Data":"41b1d74b6f2ba97548e6bafb2c5bdc4dce7eb8a162527b84976c63eded1728b7"} Feb 28 13:37:50 crc kubenswrapper[4897]: I0228 13:37:50.933772 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1f964b88-06a6-4ed0-9e1a-6a7338fba9be","Type":"ContainerDied","Data":"fbfe90ad209b1c03b72e5dcddcd6a857e0be775fdb611338217d0f9b18b36e0e"} Feb 28 13:37:50 crc kubenswrapper[4897]: I0228 13:37:50.935738 4897 generic.go:334] "Generic (PLEG): container finished" podID="8b379adc-1a39-4972-80a3-74161c42728a" containerID="eef4761385fed15a9a6e49dccbd224ac9626ad14ebd4ecdc6a0f42ee0b6d8e58" exitCode=0 Feb 28 13:37:50 crc kubenswrapper[4897]: I0228 13:37:50.935803 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nlm9z" 
event={"ID":"8b379adc-1a39-4972-80a3-74161c42728a","Type":"ContainerDied","Data":"eef4761385fed15a9a6e49dccbd224ac9626ad14ebd4ecdc6a0f42ee0b6d8e58"} Feb 28 13:37:50 crc kubenswrapper[4897]: I0228 13:37:50.939714 4897 generic.go:334] "Generic (PLEG): container finished" podID="bfd3841c-39bf-454c-88de-5156d769cf7e" containerID="acd013fc13fa55135de1a45d2aa5c536b91a97bb3bd8e14bb174f4c0bebf8c6e" exitCode=0 Feb 28 13:37:50 crc kubenswrapper[4897]: I0228 13:37:50.939759 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-ckz4d" event={"ID":"bfd3841c-39bf-454c-88de-5156d769cf7e","Type":"ContainerDied","Data":"acd013fc13fa55135de1a45d2aa5c536b91a97bb3bd8e14bb174f4c0bebf8c6e"} Feb 28 13:37:51 crc kubenswrapper[4897]: E0228 13:37:51.182880 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:37:51 crc kubenswrapper[4897]: I0228 13:37:51.743572 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-68bc5769f5-kt85c"] Feb 28 13:37:51 crc kubenswrapper[4897]: I0228 13:37:51.786617 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6fb67c45d-s75qr"] Feb 28 13:37:51 crc kubenswrapper[4897]: I0228 13:37:51.788170 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6fb67c45d-s75qr" Feb 28 13:37:51 crc kubenswrapper[4897]: I0228 13:37:51.793444 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 28 13:37:51 crc kubenswrapper[4897]: I0228 13:37:51.798509 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6fb67c45d-s75qr"] Feb 28 13:37:51 crc kubenswrapper[4897]: I0228 13:37:51.854752 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-85bccd86cc-mcgvg"] Feb 28 13:37:51 crc kubenswrapper[4897]: I0228 13:37:51.875103 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7df779db98-ljwk8"] Feb 28 13:37:51 crc kubenswrapper[4897]: I0228 13:37:51.877437 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7df779db98-ljwk8" Feb 28 13:37:51 crc kubenswrapper[4897]: I0228 13:37:51.891661 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6102738c-6c77-48c6-87e1-67853cf8ce43-horizon-secret-key\") pod \"horizon-6fb67c45d-s75qr\" (UID: \"6102738c-6c77-48c6-87e1-67853cf8ce43\") " pod="openstack/horizon-6fb67c45d-s75qr" Feb 28 13:37:51 crc kubenswrapper[4897]: I0228 13:37:51.891710 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e0db6a4f-19e4-488c-bc45-9619565bdf57-config-data\") pod \"horizon-7df779db98-ljwk8\" (UID: \"e0db6a4f-19e4-488c-bc45-9619565bdf57\") " pod="openstack/horizon-7df779db98-ljwk8" Feb 28 13:37:51 crc kubenswrapper[4897]: I0228 13:37:51.891735 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0db6a4f-19e4-488c-bc45-9619565bdf57-horizon-tls-certs\") pod \"horizon-7df779db98-ljwk8\" (UID: 
\"e0db6a4f-19e4-488c-bc45-9619565bdf57\") " pod="openstack/horizon-7df779db98-ljwk8" Feb 28 13:37:51 crc kubenswrapper[4897]: I0228 13:37:51.891761 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6102738c-6c77-48c6-87e1-67853cf8ce43-scripts\") pod \"horizon-6fb67c45d-s75qr\" (UID: \"6102738c-6c77-48c6-87e1-67853cf8ce43\") " pod="openstack/horizon-6fb67c45d-s75qr" Feb 28 13:37:51 crc kubenswrapper[4897]: I0228 13:37:51.891788 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6102738c-6c77-48c6-87e1-67853cf8ce43-config-data\") pod \"horizon-6fb67c45d-s75qr\" (UID: \"6102738c-6c77-48c6-87e1-67853cf8ce43\") " pod="openstack/horizon-6fb67c45d-s75qr" Feb 28 13:37:51 crc kubenswrapper[4897]: I0228 13:37:51.891811 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e0db6a4f-19e4-488c-bc45-9619565bdf57-horizon-secret-key\") pod \"horizon-7df779db98-ljwk8\" (UID: \"e0db6a4f-19e4-488c-bc45-9619565bdf57\") " pod="openstack/horizon-7df779db98-ljwk8" Feb 28 13:37:51 crc kubenswrapper[4897]: I0228 13:37:51.891859 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6102738c-6c77-48c6-87e1-67853cf8ce43-logs\") pod \"horizon-6fb67c45d-s75qr\" (UID: \"6102738c-6c77-48c6-87e1-67853cf8ce43\") " pod="openstack/horizon-6fb67c45d-s75qr" Feb 28 13:37:51 crc kubenswrapper[4897]: I0228 13:37:51.891877 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e0db6a4f-19e4-488c-bc45-9619565bdf57-scripts\") pod \"horizon-7df779db98-ljwk8\" (UID: \"e0db6a4f-19e4-488c-bc45-9619565bdf57\") " 
pod="openstack/horizon-7df779db98-ljwk8" Feb 28 13:37:51 crc kubenswrapper[4897]: I0228 13:37:51.891902 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4g7r\" (UniqueName: \"kubernetes.io/projected/e0db6a4f-19e4-488c-bc45-9619565bdf57-kube-api-access-f4g7r\") pod \"horizon-7df779db98-ljwk8\" (UID: \"e0db6a4f-19e4-488c-bc45-9619565bdf57\") " pod="openstack/horizon-7df779db98-ljwk8" Feb 28 13:37:51 crc kubenswrapper[4897]: I0228 13:37:51.891937 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq2dn\" (UniqueName: \"kubernetes.io/projected/6102738c-6c77-48c6-87e1-67853cf8ce43-kube-api-access-kq2dn\") pod \"horizon-6fb67c45d-s75qr\" (UID: \"6102738c-6c77-48c6-87e1-67853cf8ce43\") " pod="openstack/horizon-6fb67c45d-s75qr" Feb 28 13:37:51 crc kubenswrapper[4897]: I0228 13:37:51.891959 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6102738c-6c77-48c6-87e1-67853cf8ce43-combined-ca-bundle\") pod \"horizon-6fb67c45d-s75qr\" (UID: \"6102738c-6c77-48c6-87e1-67853cf8ce43\") " pod="openstack/horizon-6fb67c45d-s75qr" Feb 28 13:37:51 crc kubenswrapper[4897]: I0228 13:37:51.891976 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0db6a4f-19e4-488c-bc45-9619565bdf57-logs\") pod \"horizon-7df779db98-ljwk8\" (UID: \"e0db6a4f-19e4-488c-bc45-9619565bdf57\") " pod="openstack/horizon-7df779db98-ljwk8" Feb 28 13:37:51 crc kubenswrapper[4897]: I0228 13:37:51.891997 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0db6a4f-19e4-488c-bc45-9619565bdf57-combined-ca-bundle\") pod \"horizon-7df779db98-ljwk8\" (UID: 
\"e0db6a4f-19e4-488c-bc45-9619565bdf57\") " pod="openstack/horizon-7df779db98-ljwk8" Feb 28 13:37:51 crc kubenswrapper[4897]: I0228 13:37:51.892018 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6102738c-6c77-48c6-87e1-67853cf8ce43-horizon-tls-certs\") pod \"horizon-6fb67c45d-s75qr\" (UID: \"6102738c-6c77-48c6-87e1-67853cf8ce43\") " pod="openstack/horizon-6fb67c45d-s75qr" Feb 28 13:37:51 crc kubenswrapper[4897]: I0228 13:37:51.895760 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7df779db98-ljwk8"] Feb 28 13:37:51 crc kubenswrapper[4897]: I0228 13:37:51.996433 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq2dn\" (UniqueName: \"kubernetes.io/projected/6102738c-6c77-48c6-87e1-67853cf8ce43-kube-api-access-kq2dn\") pod \"horizon-6fb67c45d-s75qr\" (UID: \"6102738c-6c77-48c6-87e1-67853cf8ce43\") " pod="openstack/horizon-6fb67c45d-s75qr" Feb 28 13:37:51 crc kubenswrapper[4897]: I0228 13:37:51.996485 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6102738c-6c77-48c6-87e1-67853cf8ce43-combined-ca-bundle\") pod \"horizon-6fb67c45d-s75qr\" (UID: \"6102738c-6c77-48c6-87e1-67853cf8ce43\") " pod="openstack/horizon-6fb67c45d-s75qr" Feb 28 13:37:51 crc kubenswrapper[4897]: I0228 13:37:51.996505 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0db6a4f-19e4-488c-bc45-9619565bdf57-logs\") pod \"horizon-7df779db98-ljwk8\" (UID: \"e0db6a4f-19e4-488c-bc45-9619565bdf57\") " pod="openstack/horizon-7df779db98-ljwk8" Feb 28 13:37:51 crc kubenswrapper[4897]: I0228 13:37:51.996534 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e0db6a4f-19e4-488c-bc45-9619565bdf57-combined-ca-bundle\") pod \"horizon-7df779db98-ljwk8\" (UID: \"e0db6a4f-19e4-488c-bc45-9619565bdf57\") " pod="openstack/horizon-7df779db98-ljwk8" Feb 28 13:37:51 crc kubenswrapper[4897]: I0228 13:37:51.996556 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6102738c-6c77-48c6-87e1-67853cf8ce43-horizon-tls-certs\") pod \"horizon-6fb67c45d-s75qr\" (UID: \"6102738c-6c77-48c6-87e1-67853cf8ce43\") " pod="openstack/horizon-6fb67c45d-s75qr" Feb 28 13:37:51 crc kubenswrapper[4897]: I0228 13:37:51.996582 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6102738c-6c77-48c6-87e1-67853cf8ce43-horizon-secret-key\") pod \"horizon-6fb67c45d-s75qr\" (UID: \"6102738c-6c77-48c6-87e1-67853cf8ce43\") " pod="openstack/horizon-6fb67c45d-s75qr" Feb 28 13:37:51 crc kubenswrapper[4897]: I0228 13:37:51.996612 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e0db6a4f-19e4-488c-bc45-9619565bdf57-config-data\") pod \"horizon-7df779db98-ljwk8\" (UID: \"e0db6a4f-19e4-488c-bc45-9619565bdf57\") " pod="openstack/horizon-7df779db98-ljwk8" Feb 28 13:37:51 crc kubenswrapper[4897]: I0228 13:37:51.996638 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0db6a4f-19e4-488c-bc45-9619565bdf57-horizon-tls-certs\") pod \"horizon-7df779db98-ljwk8\" (UID: \"e0db6a4f-19e4-488c-bc45-9619565bdf57\") " pod="openstack/horizon-7df779db98-ljwk8" Feb 28 13:37:51 crc kubenswrapper[4897]: I0228 13:37:51.996663 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6102738c-6c77-48c6-87e1-67853cf8ce43-scripts\") pod 
\"horizon-6fb67c45d-s75qr\" (UID: \"6102738c-6c77-48c6-87e1-67853cf8ce43\") " pod="openstack/horizon-6fb67c45d-s75qr" Feb 28 13:37:51 crc kubenswrapper[4897]: I0228 13:37:51.996690 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6102738c-6c77-48c6-87e1-67853cf8ce43-config-data\") pod \"horizon-6fb67c45d-s75qr\" (UID: \"6102738c-6c77-48c6-87e1-67853cf8ce43\") " pod="openstack/horizon-6fb67c45d-s75qr" Feb 28 13:37:51 crc kubenswrapper[4897]: I0228 13:37:51.996714 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e0db6a4f-19e4-488c-bc45-9619565bdf57-horizon-secret-key\") pod \"horizon-7df779db98-ljwk8\" (UID: \"e0db6a4f-19e4-488c-bc45-9619565bdf57\") " pod="openstack/horizon-7df779db98-ljwk8" Feb 28 13:37:51 crc kubenswrapper[4897]: I0228 13:37:51.996759 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6102738c-6c77-48c6-87e1-67853cf8ce43-logs\") pod \"horizon-6fb67c45d-s75qr\" (UID: \"6102738c-6c77-48c6-87e1-67853cf8ce43\") " pod="openstack/horizon-6fb67c45d-s75qr" Feb 28 13:37:51 crc kubenswrapper[4897]: I0228 13:37:51.996775 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e0db6a4f-19e4-488c-bc45-9619565bdf57-scripts\") pod \"horizon-7df779db98-ljwk8\" (UID: \"e0db6a4f-19e4-488c-bc45-9619565bdf57\") " pod="openstack/horizon-7df779db98-ljwk8" Feb 28 13:37:51 crc kubenswrapper[4897]: I0228 13:37:51.996804 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4g7r\" (UniqueName: \"kubernetes.io/projected/e0db6a4f-19e4-488c-bc45-9619565bdf57-kube-api-access-f4g7r\") pod \"horizon-7df779db98-ljwk8\" (UID: \"e0db6a4f-19e4-488c-bc45-9619565bdf57\") " pod="openstack/horizon-7df779db98-ljwk8" 
Feb 28 13:37:51 crc kubenswrapper[4897]: I0228 13:37:51.998255 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e0db6a4f-19e4-488c-bc45-9619565bdf57-config-data\") pod \"horizon-7df779db98-ljwk8\" (UID: \"e0db6a4f-19e4-488c-bc45-9619565bdf57\") " pod="openstack/horizon-7df779db98-ljwk8" Feb 28 13:37:52 crc kubenswrapper[4897]: I0228 13:37:51.998672 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6102738c-6c77-48c6-87e1-67853cf8ce43-logs\") pod \"horizon-6fb67c45d-s75qr\" (UID: \"6102738c-6c77-48c6-87e1-67853cf8ce43\") " pod="openstack/horizon-6fb67c45d-s75qr" Feb 28 13:37:52 crc kubenswrapper[4897]: I0228 13:37:51.999217 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e0db6a4f-19e4-488c-bc45-9619565bdf57-scripts\") pod \"horizon-7df779db98-ljwk8\" (UID: \"e0db6a4f-19e4-488c-bc45-9619565bdf57\") " pod="openstack/horizon-7df779db98-ljwk8" Feb 28 13:37:52 crc kubenswrapper[4897]: I0228 13:37:52.003445 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0db6a4f-19e4-488c-bc45-9619565bdf57-horizon-tls-certs\") pod \"horizon-7df779db98-ljwk8\" (UID: \"e0db6a4f-19e4-488c-bc45-9619565bdf57\") " pod="openstack/horizon-7df779db98-ljwk8" Feb 28 13:37:52 crc kubenswrapper[4897]: I0228 13:37:52.003724 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0db6a4f-19e4-488c-bc45-9619565bdf57-logs\") pod \"horizon-7df779db98-ljwk8\" (UID: \"e0db6a4f-19e4-488c-bc45-9619565bdf57\") " pod="openstack/horizon-7df779db98-ljwk8" Feb 28 13:37:52 crc kubenswrapper[4897]: I0228 13:37:52.003728 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/e0db6a4f-19e4-488c-bc45-9619565bdf57-horizon-secret-key\") pod \"horizon-7df779db98-ljwk8\" (UID: \"e0db6a4f-19e4-488c-bc45-9619565bdf57\") " pod="openstack/horizon-7df779db98-ljwk8" Feb 28 13:37:52 crc kubenswrapper[4897]: I0228 13:37:52.003904 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0db6a4f-19e4-488c-bc45-9619565bdf57-combined-ca-bundle\") pod \"horizon-7df779db98-ljwk8\" (UID: \"e0db6a4f-19e4-488c-bc45-9619565bdf57\") " pod="openstack/horizon-7df779db98-ljwk8" Feb 28 13:37:52 crc kubenswrapper[4897]: I0228 13:37:52.004165 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6102738c-6c77-48c6-87e1-67853cf8ce43-scripts\") pod \"horizon-6fb67c45d-s75qr\" (UID: \"6102738c-6c77-48c6-87e1-67853cf8ce43\") " pod="openstack/horizon-6fb67c45d-s75qr" Feb 28 13:37:52 crc kubenswrapper[4897]: I0228 13:37:52.005155 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6102738c-6c77-48c6-87e1-67853cf8ce43-config-data\") pod \"horizon-6fb67c45d-s75qr\" (UID: \"6102738c-6c77-48c6-87e1-67853cf8ce43\") " pod="openstack/horizon-6fb67c45d-s75qr" Feb 28 13:37:52 crc kubenswrapper[4897]: I0228 13:37:52.013834 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6102738c-6c77-48c6-87e1-67853cf8ce43-horizon-tls-certs\") pod \"horizon-6fb67c45d-s75qr\" (UID: \"6102738c-6c77-48c6-87e1-67853cf8ce43\") " pod="openstack/horizon-6fb67c45d-s75qr" Feb 28 13:37:52 crc kubenswrapper[4897]: I0228 13:37:52.014277 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6102738c-6c77-48c6-87e1-67853cf8ce43-combined-ca-bundle\") pod \"horizon-6fb67c45d-s75qr\" (UID: 
\"6102738c-6c77-48c6-87e1-67853cf8ce43\") " pod="openstack/horizon-6fb67c45d-s75qr" Feb 28 13:37:52 crc kubenswrapper[4897]: I0228 13:37:52.071820 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6102738c-6c77-48c6-87e1-67853cf8ce43-horizon-secret-key\") pod \"horizon-6fb67c45d-s75qr\" (UID: \"6102738c-6c77-48c6-87e1-67853cf8ce43\") " pod="openstack/horizon-6fb67c45d-s75qr" Feb 28 13:37:52 crc kubenswrapper[4897]: I0228 13:37:52.076087 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq2dn\" (UniqueName: \"kubernetes.io/projected/6102738c-6c77-48c6-87e1-67853cf8ce43-kube-api-access-kq2dn\") pod \"horizon-6fb67c45d-s75qr\" (UID: \"6102738c-6c77-48c6-87e1-67853cf8ce43\") " pod="openstack/horizon-6fb67c45d-s75qr" Feb 28 13:37:52 crc kubenswrapper[4897]: I0228 13:37:52.076969 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4g7r\" (UniqueName: \"kubernetes.io/projected/e0db6a4f-19e4-488c-bc45-9619565bdf57-kube-api-access-f4g7r\") pod \"horizon-7df779db98-ljwk8\" (UID: \"e0db6a4f-19e4-488c-bc45-9619565bdf57\") " pod="openstack/horizon-7df779db98-ljwk8" Feb 28 13:37:52 crc kubenswrapper[4897]: I0228 13:37:52.115875 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6fb67c45d-s75qr" Feb 28 13:37:52 crc kubenswrapper[4897]: I0228 13:37:52.214460 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7df779db98-ljwk8" Feb 28 13:37:53 crc kubenswrapper[4897]: I0228 13:37:53.809164 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-589b5bf549-hvvfk" Feb 28 13:37:53 crc kubenswrapper[4897]: I0228 13:37:53.878736 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57979d558f-jkqtc"] Feb 28 13:37:53 crc kubenswrapper[4897]: I0228 13:37:53.879055 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57979d558f-jkqtc" podUID="4fdf7502-e691-4668-86f9-256befb8cb69" containerName="dnsmasq-dns" containerID="cri-o://d349c1a341a6d70e2d26d824328e437b41aa4a3630dbca9b47f87c1b38868b4a" gracePeriod=10 Feb 28 13:37:55 crc kubenswrapper[4897]: I0228 13:37:55.006487 4897 generic.go:334] "Generic (PLEG): container finished" podID="4fdf7502-e691-4668-86f9-256befb8cb69" containerID="d349c1a341a6d70e2d26d824328e437b41aa4a3630dbca9b47f87c1b38868b4a" exitCode=0 Feb 28 13:37:55 crc kubenswrapper[4897]: I0228 13:37:55.006546 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57979d558f-jkqtc" event={"ID":"4fdf7502-e691-4668-86f9-256befb8cb69","Type":"ContainerDied","Data":"d349c1a341a6d70e2d26d824328e437b41aa4a3630dbca9b47f87c1b38868b4a"} Feb 28 13:37:57 crc kubenswrapper[4897]: I0228 13:37:57.777348 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57979d558f-jkqtc" podUID="4fdf7502-e691-4668-86f9-256befb8cb69" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.136:5353: connect: connection refused" Feb 28 13:37:57 crc kubenswrapper[4897]: I0228 13:37:57.900843 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-ckz4d" Feb 28 13:37:57 crc kubenswrapper[4897]: I0228 13:37:57.925448 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfd3841c-39bf-454c-88de-5156d769cf7e-config-data\") pod \"bfd3841c-39bf-454c-88de-5156d769cf7e\" (UID: \"bfd3841c-39bf-454c-88de-5156d769cf7e\") " Feb 28 13:37:57 crc kubenswrapper[4897]: I0228 13:37:57.925517 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfd3841c-39bf-454c-88de-5156d769cf7e-combined-ca-bundle\") pod \"bfd3841c-39bf-454c-88de-5156d769cf7e\" (UID: \"bfd3841c-39bf-454c-88de-5156d769cf7e\") " Feb 28 13:37:57 crc kubenswrapper[4897]: I0228 13:37:57.925705 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbxsf\" (UniqueName: \"kubernetes.io/projected/bfd3841c-39bf-454c-88de-5156d769cf7e-kube-api-access-cbxsf\") pod \"bfd3841c-39bf-454c-88de-5156d769cf7e\" (UID: \"bfd3841c-39bf-454c-88de-5156d769cf7e\") " Feb 28 13:37:57 crc kubenswrapper[4897]: I0228 13:37:57.925729 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bfd3841c-39bf-454c-88de-5156d769cf7e-db-sync-config-data\") pod \"bfd3841c-39bf-454c-88de-5156d769cf7e\" (UID: \"bfd3841c-39bf-454c-88de-5156d769cf7e\") " Feb 28 13:37:57 crc kubenswrapper[4897]: I0228 13:37:57.935633 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfd3841c-39bf-454c-88de-5156d769cf7e-kube-api-access-cbxsf" (OuterVolumeSpecName: "kube-api-access-cbxsf") pod "bfd3841c-39bf-454c-88de-5156d769cf7e" (UID: "bfd3841c-39bf-454c-88de-5156d769cf7e"). InnerVolumeSpecName "kube-api-access-cbxsf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:37:57 crc kubenswrapper[4897]: I0228 13:37:57.937543 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfd3841c-39bf-454c-88de-5156d769cf7e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "bfd3841c-39bf-454c-88de-5156d769cf7e" (UID: "bfd3841c-39bf-454c-88de-5156d769cf7e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:37:57 crc kubenswrapper[4897]: I0228 13:37:57.987227 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfd3841c-39bf-454c-88de-5156d769cf7e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bfd3841c-39bf-454c-88de-5156d769cf7e" (UID: "bfd3841c-39bf-454c-88de-5156d769cf7e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:37:58 crc kubenswrapper[4897]: I0228 13:37:58.023019 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfd3841c-39bf-454c-88de-5156d769cf7e-config-data" (OuterVolumeSpecName: "config-data") pod "bfd3841c-39bf-454c-88de-5156d769cf7e" (UID: "bfd3841c-39bf-454c-88de-5156d769cf7e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:37:58 crc kubenswrapper[4897]: I0228 13:37:58.028139 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfd3841c-39bf-454c-88de-5156d769cf7e-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:58 crc kubenswrapper[4897]: I0228 13:37:58.028159 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfd3841c-39bf-454c-88de-5156d769cf7e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:58 crc kubenswrapper[4897]: I0228 13:37:58.028172 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbxsf\" (UniqueName: \"kubernetes.io/projected/bfd3841c-39bf-454c-88de-5156d769cf7e-kube-api-access-cbxsf\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:58 crc kubenswrapper[4897]: I0228 13:37:58.028181 4897 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bfd3841c-39bf-454c-88de-5156d769cf7e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:37:58 crc kubenswrapper[4897]: I0228 13:37:58.040951 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-ckz4d" event={"ID":"bfd3841c-39bf-454c-88de-5156d769cf7e","Type":"ContainerDied","Data":"ec80490a461b4cfa1d86631cc7662e088becc6c1f9d594d236a6c40382362ed2"} Feb 28 13:37:58 crc kubenswrapper[4897]: I0228 13:37:58.040981 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec80490a461b4cfa1d86631cc7662e088becc6c1f9d594d236a6c40382362ed2" Feb 28 13:37:58 crc kubenswrapper[4897]: I0228 13:37:58.041041 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-ckz4d" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.189564 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 28 13:37:59 crc kubenswrapper[4897]: E0228 13:37:59.190279 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfd3841c-39bf-454c-88de-5156d769cf7e" containerName="watcher-db-sync" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.190292 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfd3841c-39bf-454c-88de-5156d769cf7e" containerName="watcher-db-sync" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.190523 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfd3841c-39bf-454c-88de-5156d769cf7e" containerName="watcher-db-sync" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.191187 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.195354 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-6vtlw" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.195547 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.216852 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.304234 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.306548 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.312465 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.336981 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.339577 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.344646 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.349425 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b88f822-8f2a-473a-b388-b144a37ba4f0-logs\") pod \"watcher-decision-engine-0\" (UID: \"2b88f822-8f2a-473a-b388-b144a37ba4f0\") " pod="openstack/watcher-decision-engine-0" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.349572 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkp4h\" (UniqueName: \"kubernetes.io/projected/2b88f822-8f2a-473a-b388-b144a37ba4f0-kube-api-access-jkp4h\") pod \"watcher-decision-engine-0\" (UID: \"2b88f822-8f2a-473a-b388-b144a37ba4f0\") " pod="openstack/watcher-decision-engine-0" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.349612 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b88f822-8f2a-473a-b388-b144a37ba4f0-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"2b88f822-8f2a-473a-b388-b144a37ba4f0\") " pod="openstack/watcher-decision-engine-0" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.349705 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b88f822-8f2a-473a-b388-b144a37ba4f0-config-data\") pod \"watcher-decision-engine-0\" (UID: \"2b88f822-8f2a-473a-b388-b144a37ba4f0\") " pod="openstack/watcher-decision-engine-0" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.366747 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.398077 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.451152 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7skz\" (UniqueName: \"kubernetes.io/projected/ea8b2284-fafa-4fca-b367-9cffa5f5a201-kube-api-access-s7skz\") pod \"watcher-api-0\" (UID: \"ea8b2284-fafa-4fca-b367-9cffa5f5a201\") " pod="openstack/watcher-api-0" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.451201 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea8b2284-fafa-4fca-b367-9cffa5f5a201-config-data\") pod \"watcher-api-0\" (UID: \"ea8b2284-fafa-4fca-b367-9cffa5f5a201\") " pod="openstack/watcher-api-0" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.451236 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkp4h\" (UniqueName: \"kubernetes.io/projected/2b88f822-8f2a-473a-b388-b144a37ba4f0-kube-api-access-jkp4h\") pod \"watcher-decision-engine-0\" (UID: \"2b88f822-8f2a-473a-b388-b144a37ba4f0\") " pod="openstack/watcher-decision-engine-0" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.451255 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trtv2\" (UniqueName: 
\"kubernetes.io/projected/2ff09d8c-69de-4c11-8e94-90fce8f42387-kube-api-access-trtv2\") pod \"watcher-applier-0\" (UID: \"2ff09d8c-69de-4c11-8e94-90fce8f42387\") " pod="openstack/watcher-applier-0" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.451456 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b88f822-8f2a-473a-b388-b144a37ba4f0-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"2b88f822-8f2a-473a-b388-b144a37ba4f0\") " pod="openstack/watcher-decision-engine-0" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.451495 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ff09d8c-69de-4c11-8e94-90fce8f42387-config-data\") pod \"watcher-applier-0\" (UID: \"2ff09d8c-69de-4c11-8e94-90fce8f42387\") " pod="openstack/watcher-applier-0" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.451631 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ff09d8c-69de-4c11-8e94-90fce8f42387-logs\") pod \"watcher-applier-0\" (UID: \"2ff09d8c-69de-4c11-8e94-90fce8f42387\") " pod="openstack/watcher-applier-0" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.451721 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea8b2284-fafa-4fca-b367-9cffa5f5a201-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"ea8b2284-fafa-4fca-b367-9cffa5f5a201\") " pod="openstack/watcher-api-0" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.451819 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b88f822-8f2a-473a-b388-b144a37ba4f0-config-data\") pod \"watcher-decision-engine-0\" (UID: 
\"2b88f822-8f2a-473a-b388-b144a37ba4f0\") " pod="openstack/watcher-decision-engine-0" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.451967 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea8b2284-fafa-4fca-b367-9cffa5f5a201-logs\") pod \"watcher-api-0\" (UID: \"ea8b2284-fafa-4fca-b367-9cffa5f5a201\") " pod="openstack/watcher-api-0" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.451989 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b88f822-8f2a-473a-b388-b144a37ba4f0-logs\") pod \"watcher-decision-engine-0\" (UID: \"2b88f822-8f2a-473a-b388-b144a37ba4f0\") " pod="openstack/watcher-decision-engine-0" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.452009 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ff09d8c-69de-4c11-8e94-90fce8f42387-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"2ff09d8c-69de-4c11-8e94-90fce8f42387\") " pod="openstack/watcher-applier-0" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.452530 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b88f822-8f2a-473a-b388-b144a37ba4f0-logs\") pod \"watcher-decision-engine-0\" (UID: \"2b88f822-8f2a-473a-b388-b144a37ba4f0\") " pod="openstack/watcher-decision-engine-0" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.457716 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b88f822-8f2a-473a-b388-b144a37ba4f0-config-data\") pod \"watcher-decision-engine-0\" (UID: \"2b88f822-8f2a-473a-b388-b144a37ba4f0\") " pod="openstack/watcher-decision-engine-0" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.458437 4897 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b88f822-8f2a-473a-b388-b144a37ba4f0-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"2b88f822-8f2a-473a-b388-b144a37ba4f0\") " pod="openstack/watcher-decision-engine-0" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.478222 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkp4h\" (UniqueName: \"kubernetes.io/projected/2b88f822-8f2a-473a-b388-b144a37ba4f0-kube-api-access-jkp4h\") pod \"watcher-decision-engine-0\" (UID: \"2b88f822-8f2a-473a-b388-b144a37ba4f0\") " pod="openstack/watcher-decision-engine-0" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.507465 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.553928 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea8b2284-fafa-4fca-b367-9cffa5f5a201-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"ea8b2284-fafa-4fca-b367-9cffa5f5a201\") " pod="openstack/watcher-api-0" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.554033 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea8b2284-fafa-4fca-b367-9cffa5f5a201-logs\") pod \"watcher-api-0\" (UID: \"ea8b2284-fafa-4fca-b367-9cffa5f5a201\") " pod="openstack/watcher-api-0" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.554055 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ff09d8c-69de-4c11-8e94-90fce8f42387-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"2ff09d8c-69de-4c11-8e94-90fce8f42387\") " pod="openstack/watcher-applier-0" Feb 28 13:37:59 crc kubenswrapper[4897]: 
I0228 13:37:59.554127 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7skz\" (UniqueName: \"kubernetes.io/projected/ea8b2284-fafa-4fca-b367-9cffa5f5a201-kube-api-access-s7skz\") pod \"watcher-api-0\" (UID: \"ea8b2284-fafa-4fca-b367-9cffa5f5a201\") " pod="openstack/watcher-api-0" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.554142 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea8b2284-fafa-4fca-b367-9cffa5f5a201-config-data\") pod \"watcher-api-0\" (UID: \"ea8b2284-fafa-4fca-b367-9cffa5f5a201\") " pod="openstack/watcher-api-0" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.554168 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trtv2\" (UniqueName: \"kubernetes.io/projected/2ff09d8c-69de-4c11-8e94-90fce8f42387-kube-api-access-trtv2\") pod \"watcher-applier-0\" (UID: \"2ff09d8c-69de-4c11-8e94-90fce8f42387\") " pod="openstack/watcher-applier-0" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.554195 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ff09d8c-69de-4c11-8e94-90fce8f42387-config-data\") pod \"watcher-applier-0\" (UID: \"2ff09d8c-69de-4c11-8e94-90fce8f42387\") " pod="openstack/watcher-applier-0" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.554211 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ff09d8c-69de-4c11-8e94-90fce8f42387-logs\") pod \"watcher-applier-0\" (UID: \"2ff09d8c-69de-4c11-8e94-90fce8f42387\") " pod="openstack/watcher-applier-0" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.554551 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea8b2284-fafa-4fca-b367-9cffa5f5a201-logs\") 
pod \"watcher-api-0\" (UID: \"ea8b2284-fafa-4fca-b367-9cffa5f5a201\") " pod="openstack/watcher-api-0" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.554607 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ff09d8c-69de-4c11-8e94-90fce8f42387-logs\") pod \"watcher-applier-0\" (UID: \"2ff09d8c-69de-4c11-8e94-90fce8f42387\") " pod="openstack/watcher-applier-0" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.558047 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ff09d8c-69de-4c11-8e94-90fce8f42387-config-data\") pod \"watcher-applier-0\" (UID: \"2ff09d8c-69de-4c11-8e94-90fce8f42387\") " pod="openstack/watcher-applier-0" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.559849 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ff09d8c-69de-4c11-8e94-90fce8f42387-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"2ff09d8c-69de-4c11-8e94-90fce8f42387\") " pod="openstack/watcher-applier-0" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.561140 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea8b2284-fafa-4fca-b367-9cffa5f5a201-config-data\") pod \"watcher-api-0\" (UID: \"ea8b2284-fafa-4fca-b367-9cffa5f5a201\") " pod="openstack/watcher-api-0" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.561381 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea8b2284-fafa-4fca-b367-9cffa5f5a201-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"ea8b2284-fafa-4fca-b367-9cffa5f5a201\") " pod="openstack/watcher-api-0" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.583069 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-s7skz\" (UniqueName: \"kubernetes.io/projected/ea8b2284-fafa-4fca-b367-9cffa5f5a201-kube-api-access-s7skz\") pod \"watcher-api-0\" (UID: \"ea8b2284-fafa-4fca-b367-9cffa5f5a201\") " pod="openstack/watcher-api-0" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.586273 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trtv2\" (UniqueName: \"kubernetes.io/projected/2ff09d8c-69de-4c11-8e94-90fce8f42387-kube-api-access-trtv2\") pod \"watcher-applier-0\" (UID: \"2ff09d8c-69de-4c11-8e94-90fce8f42387\") " pod="openstack/watcher-applier-0" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.632381 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 28 13:37:59 crc kubenswrapper[4897]: I0228 13:37:59.667657 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 28 13:38:00 crc kubenswrapper[4897]: E0228 13:38:00.053927 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-placement-api:watcher_latest" Feb 28 13:38:00 crc kubenswrapper[4897]: E0228 13:38:00.053982 4897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-placement-api:watcher_latest" Feb 28 13:38:00 crc kubenswrapper[4897]: E0228 13:38:00.054102 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:38.102.83.80:5001/podified-master-centos10/openstack-placement-api:watcher_latest,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wl644,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-db-sync-7vjm5_openstack(5fc315f1-a65d-4ba7-aa89-69ffe04b53a6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 13:38:00 crc kubenswrapper[4897]: E0228 13:38:00.056219 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-7vjm5" podUID="5fc315f1-a65d-4ba7-aa89-69ffe04b53a6" Feb 28 13:38:00 crc kubenswrapper[4897]: E0228 13:38:00.084746 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-horizon:watcher_latest" Feb 28 13:38:00 crc kubenswrapper[4897]: E0228 13:38:00.084804 4897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-horizon:watcher_latest" Feb 28 13:38:00 crc kubenswrapper[4897]: E0228 13:38:00.084962 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.80:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n87h68ch57ch5dch586hf8h5d5h56fhb5h678h687hb5h5d7h78hfh9fh5ddh5bch658hffh655h96h579h5bh96h59bh595h66dh5b8h57bh545h68bq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7sclf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-55d64677cc-lw8j7_openstack(5150cd00-34c3-40d8-bacd-0c9858fbab6b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 13:38:00 crc kubenswrapper[4897]: E0228 
13:38:00.087569 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.80:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-55d64677cc-lw8j7" podUID="5150cd00-34c3-40d8-bacd-0c9858fbab6b" Feb 28 13:38:00 crc kubenswrapper[4897]: I0228 13:38:00.142383 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538098-9rtv9"] Feb 28 13:38:00 crc kubenswrapper[4897]: I0228 13:38:00.144343 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538098-9rtv9" Feb 28 13:38:00 crc kubenswrapper[4897]: I0228 13:38:00.152679 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538098-9rtv9"] Feb 28 13:38:00 crc kubenswrapper[4897]: I0228 13:38:00.161333 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 28 13:38:00 crc kubenswrapper[4897]: I0228 13:38:00.183791 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f964b88-06a6-4ed0-9e1a-6a7338fba9be-combined-ca-bundle\") pod \"1f964b88-06a6-4ed0-9e1a-6a7338fba9be\" (UID: \"1f964b88-06a6-4ed0-9e1a-6a7338fba9be\") " Feb 28 13:38:00 crc kubenswrapper[4897]: I0228 13:38:00.183882 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f964b88-06a6-4ed0-9e1a-6a7338fba9be-scripts\") pod \"1f964b88-06a6-4ed0-9e1a-6a7338fba9be\" (UID: \"1f964b88-06a6-4ed0-9e1a-6a7338fba9be\") " Feb 28 13:38:00 crc kubenswrapper[4897]: I0228 13:38:00.183925 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqxc7\" (UniqueName: \"kubernetes.io/projected/1f964b88-06a6-4ed0-9e1a-6a7338fba9be-kube-api-access-zqxc7\") pod \"1f964b88-06a6-4ed0-9e1a-6a7338fba9be\" (UID: \"1f964b88-06a6-4ed0-9e1a-6a7338fba9be\") " Feb 28 13:38:00 crc kubenswrapper[4897]: I0228 13:38:00.183962 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"1f964b88-06a6-4ed0-9e1a-6a7338fba9be\" (UID: \"1f964b88-06a6-4ed0-9e1a-6a7338fba9be\") " Feb 28 13:38:00 crc kubenswrapper[4897]: I0228 13:38:00.183990 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f964b88-06a6-4ed0-9e1a-6a7338fba9be-logs\") pod \"1f964b88-06a6-4ed0-9e1a-6a7338fba9be\" (UID: \"1f964b88-06a6-4ed0-9e1a-6a7338fba9be\") " Feb 28 13:38:00 crc kubenswrapper[4897]: I0228 13:38:00.184031 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1f964b88-06a6-4ed0-9e1a-6a7338fba9be-config-data\") pod \"1f964b88-06a6-4ed0-9e1a-6a7338fba9be\" (UID: \"1f964b88-06a6-4ed0-9e1a-6a7338fba9be\") " Feb 28 13:38:00 crc kubenswrapper[4897]: I0228 13:38:00.184061 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f964b88-06a6-4ed0-9e1a-6a7338fba9be-httpd-run\") pod \"1f964b88-06a6-4ed0-9e1a-6a7338fba9be\" (UID: \"1f964b88-06a6-4ed0-9e1a-6a7338fba9be\") " Feb 28 13:38:00 crc kubenswrapper[4897]: I0228 13:38:00.184110 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f964b88-06a6-4ed0-9e1a-6a7338fba9be-public-tls-certs\") pod \"1f964b88-06a6-4ed0-9e1a-6a7338fba9be\" (UID: \"1f964b88-06a6-4ed0-9e1a-6a7338fba9be\") " Feb 28 13:38:00 crc kubenswrapper[4897]: I0228 13:38:00.184485 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6mzm\" (UniqueName: \"kubernetes.io/projected/bbeef463-3901-42c4-81ed-d97e793fb8b5-kube-api-access-v6mzm\") pod \"auto-csr-approver-29538098-9rtv9\" (UID: \"bbeef463-3901-42c4-81ed-d97e793fb8b5\") " pod="openshift-infra/auto-csr-approver-29538098-9rtv9" Feb 28 13:38:00 crc kubenswrapper[4897]: I0228 13:38:00.188665 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f964b88-06a6-4ed0-9e1a-6a7338fba9be-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1f964b88-06a6-4ed0-9e1a-6a7338fba9be" (UID: "1f964b88-06a6-4ed0-9e1a-6a7338fba9be"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:38:00 crc kubenswrapper[4897]: I0228 13:38:00.188944 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f964b88-06a6-4ed0-9e1a-6a7338fba9be-logs" (OuterVolumeSpecName: "logs") pod "1f964b88-06a6-4ed0-9e1a-6a7338fba9be" (UID: "1f964b88-06a6-4ed0-9e1a-6a7338fba9be"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:38:00 crc kubenswrapper[4897]: I0228 13:38:00.191968 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f964b88-06a6-4ed0-9e1a-6a7338fba9be-scripts" (OuterVolumeSpecName: "scripts") pod "1f964b88-06a6-4ed0-9e1a-6a7338fba9be" (UID: "1f964b88-06a6-4ed0-9e1a-6a7338fba9be"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:00 crc kubenswrapper[4897]: I0228 13:38:00.209868 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f964b88-06a6-4ed0-9e1a-6a7338fba9be-kube-api-access-zqxc7" (OuterVolumeSpecName: "kube-api-access-zqxc7") pod "1f964b88-06a6-4ed0-9e1a-6a7338fba9be" (UID: "1f964b88-06a6-4ed0-9e1a-6a7338fba9be"). InnerVolumeSpecName "kube-api-access-zqxc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:38:00 crc kubenswrapper[4897]: I0228 13:38:00.209943 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "1f964b88-06a6-4ed0-9e1a-6a7338fba9be" (UID: "1f964b88-06a6-4ed0-9e1a-6a7338fba9be"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 28 13:38:00 crc kubenswrapper[4897]: I0228 13:38:00.254021 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f964b88-06a6-4ed0-9e1a-6a7338fba9be-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1f964b88-06a6-4ed0-9e1a-6a7338fba9be" (UID: "1f964b88-06a6-4ed0-9e1a-6a7338fba9be"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:00 crc kubenswrapper[4897]: I0228 13:38:00.255206 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f964b88-06a6-4ed0-9e1a-6a7338fba9be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f964b88-06a6-4ed0-9e1a-6a7338fba9be" (UID: "1f964b88-06a6-4ed0-9e1a-6a7338fba9be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:00 crc kubenswrapper[4897]: I0228 13:38:00.286094 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6mzm\" (UniqueName: \"kubernetes.io/projected/bbeef463-3901-42c4-81ed-d97e793fb8b5-kube-api-access-v6mzm\") pod \"auto-csr-approver-29538098-9rtv9\" (UID: \"bbeef463-3901-42c4-81ed-d97e793fb8b5\") " pod="openshift-infra/auto-csr-approver-29538098-9rtv9" Feb 28 13:38:00 crc kubenswrapper[4897]: I0228 13:38:00.286194 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f964b88-06a6-4ed0-9e1a-6a7338fba9be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:00 crc kubenswrapper[4897]: I0228 13:38:00.286208 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f964b88-06a6-4ed0-9e1a-6a7338fba9be-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:00 crc kubenswrapper[4897]: I0228 13:38:00.286218 4897 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-zqxc7\" (UniqueName: \"kubernetes.io/projected/1f964b88-06a6-4ed0-9e1a-6a7338fba9be-kube-api-access-zqxc7\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:00 crc kubenswrapper[4897]: I0228 13:38:00.286237 4897 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Feb 28 13:38:00 crc kubenswrapper[4897]: I0228 13:38:00.286246 4897 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f964b88-06a6-4ed0-9e1a-6a7338fba9be-logs\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:00 crc kubenswrapper[4897]: I0228 13:38:00.286254 4897 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f964b88-06a6-4ed0-9e1a-6a7338fba9be-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:00 crc kubenswrapper[4897]: I0228 13:38:00.286262 4897 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f964b88-06a6-4ed0-9e1a-6a7338fba9be-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:00 crc kubenswrapper[4897]: I0228 13:38:00.298972 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f964b88-06a6-4ed0-9e1a-6a7338fba9be-config-data" (OuterVolumeSpecName: "config-data") pod "1f964b88-06a6-4ed0-9e1a-6a7338fba9be" (UID: "1f964b88-06a6-4ed0-9e1a-6a7338fba9be"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:00 crc kubenswrapper[4897]: I0228 13:38:00.303628 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6mzm\" (UniqueName: \"kubernetes.io/projected/bbeef463-3901-42c4-81ed-d97e793fb8b5-kube-api-access-v6mzm\") pod \"auto-csr-approver-29538098-9rtv9\" (UID: \"bbeef463-3901-42c4-81ed-d97e793fb8b5\") " pod="openshift-infra/auto-csr-approver-29538098-9rtv9" Feb 28 13:38:00 crc kubenswrapper[4897]: I0228 13:38:00.307176 4897 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Feb 28 13:38:00 crc kubenswrapper[4897]: I0228 13:38:00.388013 4897 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:00 crc kubenswrapper[4897]: I0228 13:38:00.388057 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f964b88-06a6-4ed0-9e1a-6a7338fba9be-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:00 crc kubenswrapper[4897]: I0228 13:38:00.496471 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538098-9rtv9" Feb 28 13:38:01 crc kubenswrapper[4897]: I0228 13:38:01.071612 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1f964b88-06a6-4ed0-9e1a-6a7338fba9be","Type":"ContainerDied","Data":"2a12816c481c15948fdb85d5966c637c6b9280dec3df86624234b6f2854224e7"} Feb 28 13:38:01 crc kubenswrapper[4897]: I0228 13:38:01.071931 4897 scope.go:117] "RemoveContainer" containerID="41b1d74b6f2ba97548e6bafb2c5bdc4dce7eb8a162527b84976c63eded1728b7" Feb 28 13:38:01 crc kubenswrapper[4897]: I0228 13:38:01.071773 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 28 13:38:01 crc kubenswrapper[4897]: E0228 13:38:01.072805 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.80:5001/podified-master-centos10/openstack-placement-api:watcher_latest\\\"\"" pod="openstack/placement-db-sync-7vjm5" podUID="5fc315f1-a65d-4ba7-aa89-69ffe04b53a6" Feb 28 13:38:01 crc kubenswrapper[4897]: I0228 13:38:01.147764 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 13:38:01 crc kubenswrapper[4897]: I0228 13:38:01.157791 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 13:38:01 crc kubenswrapper[4897]: I0228 13:38:01.169525 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 13:38:01 crc kubenswrapper[4897]: E0228 13:38:01.169885 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f964b88-06a6-4ed0-9e1a-6a7338fba9be" containerName="glance-httpd" Feb 28 13:38:01 crc kubenswrapper[4897]: I0228 13:38:01.169901 4897 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1f964b88-06a6-4ed0-9e1a-6a7338fba9be" containerName="glance-httpd" Feb 28 13:38:01 crc kubenswrapper[4897]: E0228 13:38:01.169913 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f964b88-06a6-4ed0-9e1a-6a7338fba9be" containerName="glance-log" Feb 28 13:38:01 crc kubenswrapper[4897]: I0228 13:38:01.169919 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f964b88-06a6-4ed0-9e1a-6a7338fba9be" containerName="glance-log" Feb 28 13:38:01 crc kubenswrapper[4897]: I0228 13:38:01.170087 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f964b88-06a6-4ed0-9e1a-6a7338fba9be" containerName="glance-log" Feb 28 13:38:01 crc kubenswrapper[4897]: I0228 13:38:01.170109 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f964b88-06a6-4ed0-9e1a-6a7338fba9be" containerName="glance-httpd" Feb 28 13:38:01 crc kubenswrapper[4897]: I0228 13:38:01.171056 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 28 13:38:01 crc kubenswrapper[4897]: I0228 13:38:01.173568 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 28 13:38:01 crc kubenswrapper[4897]: I0228 13:38:01.173676 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 28 13:38:01 crc kubenswrapper[4897]: I0228 13:38:01.193423 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 13:38:01 crc kubenswrapper[4897]: I0228 13:38:01.302944 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsjr2\" (UniqueName: \"kubernetes.io/projected/9910b644-86b9-44e7-856e-4fbaf1d1a740-kube-api-access-xsjr2\") pod \"glance-default-external-api-0\" (UID: \"9910b644-86b9-44e7-856e-4fbaf1d1a740\") " pod="openstack/glance-default-external-api-0" Feb 28 
13:38:01 crc kubenswrapper[4897]: I0228 13:38:01.303207 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9910b644-86b9-44e7-856e-4fbaf1d1a740-scripts\") pod \"glance-default-external-api-0\" (UID: \"9910b644-86b9-44e7-856e-4fbaf1d1a740\") " pod="openstack/glance-default-external-api-0" Feb 28 13:38:01 crc kubenswrapper[4897]: I0228 13:38:01.303430 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9910b644-86b9-44e7-856e-4fbaf1d1a740-config-data\") pod \"glance-default-external-api-0\" (UID: \"9910b644-86b9-44e7-856e-4fbaf1d1a740\") " pod="openstack/glance-default-external-api-0" Feb 28 13:38:01 crc kubenswrapper[4897]: I0228 13:38:01.303522 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9910b644-86b9-44e7-856e-4fbaf1d1a740-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9910b644-86b9-44e7-856e-4fbaf1d1a740\") " pod="openstack/glance-default-external-api-0" Feb 28 13:38:01 crc kubenswrapper[4897]: I0228 13:38:01.303590 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9910b644-86b9-44e7-856e-4fbaf1d1a740-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9910b644-86b9-44e7-856e-4fbaf1d1a740\") " pod="openstack/glance-default-external-api-0" Feb 28 13:38:01 crc kubenswrapper[4897]: I0228 13:38:01.303719 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9910b644-86b9-44e7-856e-4fbaf1d1a740-logs\") pod \"glance-default-external-api-0\" (UID: \"9910b644-86b9-44e7-856e-4fbaf1d1a740\") " pod="openstack/glance-default-external-api-0" Feb 28 
13:38:01 crc kubenswrapper[4897]: I0228 13:38:01.303871 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9910b644-86b9-44e7-856e-4fbaf1d1a740-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9910b644-86b9-44e7-856e-4fbaf1d1a740\") " pod="openstack/glance-default-external-api-0" Feb 28 13:38:01 crc kubenswrapper[4897]: I0228 13:38:01.303978 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"9910b644-86b9-44e7-856e-4fbaf1d1a740\") " pod="openstack/glance-default-external-api-0" Feb 28 13:38:01 crc kubenswrapper[4897]: I0228 13:38:01.406923 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9910b644-86b9-44e7-856e-4fbaf1d1a740-scripts\") pod \"glance-default-external-api-0\" (UID: \"9910b644-86b9-44e7-856e-4fbaf1d1a740\") " pod="openstack/glance-default-external-api-0" Feb 28 13:38:01 crc kubenswrapper[4897]: I0228 13:38:01.406989 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9910b644-86b9-44e7-856e-4fbaf1d1a740-config-data\") pod \"glance-default-external-api-0\" (UID: \"9910b644-86b9-44e7-856e-4fbaf1d1a740\") " pod="openstack/glance-default-external-api-0" Feb 28 13:38:01 crc kubenswrapper[4897]: I0228 13:38:01.407012 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9910b644-86b9-44e7-856e-4fbaf1d1a740-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9910b644-86b9-44e7-856e-4fbaf1d1a740\") " pod="openstack/glance-default-external-api-0" Feb 28 13:38:01 crc kubenswrapper[4897]: I0228 
13:38:01.407041 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9910b644-86b9-44e7-856e-4fbaf1d1a740-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9910b644-86b9-44e7-856e-4fbaf1d1a740\") " pod="openstack/glance-default-external-api-0" Feb 28 13:38:01 crc kubenswrapper[4897]: I0228 13:38:01.407081 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9910b644-86b9-44e7-856e-4fbaf1d1a740-logs\") pod \"glance-default-external-api-0\" (UID: \"9910b644-86b9-44e7-856e-4fbaf1d1a740\") " pod="openstack/glance-default-external-api-0" Feb 28 13:38:01 crc kubenswrapper[4897]: I0228 13:38:01.407598 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9910b644-86b9-44e7-856e-4fbaf1d1a740-logs\") pod \"glance-default-external-api-0\" (UID: \"9910b644-86b9-44e7-856e-4fbaf1d1a740\") " pod="openstack/glance-default-external-api-0" Feb 28 13:38:01 crc kubenswrapper[4897]: I0228 13:38:01.407687 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"9910b644-86b9-44e7-856e-4fbaf1d1a740\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Feb 28 13:38:01 crc kubenswrapper[4897]: I0228 13:38:01.407757 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9910b644-86b9-44e7-856e-4fbaf1d1a740-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9910b644-86b9-44e7-856e-4fbaf1d1a740\") " pod="openstack/glance-default-external-api-0" Feb 28 13:38:01 crc kubenswrapper[4897]: I0228 13:38:01.411669 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"9910b644-86b9-44e7-856e-4fbaf1d1a740\") " pod="openstack/glance-default-external-api-0" Feb 28 13:38:01 crc kubenswrapper[4897]: I0228 13:38:01.411734 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9910b644-86b9-44e7-856e-4fbaf1d1a740-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9910b644-86b9-44e7-856e-4fbaf1d1a740\") " pod="openstack/glance-default-external-api-0" Feb 28 13:38:01 crc kubenswrapper[4897]: I0228 13:38:01.411835 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsjr2\" (UniqueName: \"kubernetes.io/projected/9910b644-86b9-44e7-856e-4fbaf1d1a740-kube-api-access-xsjr2\") pod \"glance-default-external-api-0\" (UID: \"9910b644-86b9-44e7-856e-4fbaf1d1a740\") " pod="openstack/glance-default-external-api-0" Feb 28 13:38:01 crc kubenswrapper[4897]: I0228 13:38:01.413858 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9910b644-86b9-44e7-856e-4fbaf1d1a740-scripts\") pod \"glance-default-external-api-0\" (UID: \"9910b644-86b9-44e7-856e-4fbaf1d1a740\") " pod="openstack/glance-default-external-api-0" Feb 28 13:38:01 crc kubenswrapper[4897]: I0228 13:38:01.414025 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9910b644-86b9-44e7-856e-4fbaf1d1a740-config-data\") pod \"glance-default-external-api-0\" (UID: \"9910b644-86b9-44e7-856e-4fbaf1d1a740\") " pod="openstack/glance-default-external-api-0" Feb 28 13:38:01 crc kubenswrapper[4897]: I0228 13:38:01.414140 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9910b644-86b9-44e7-856e-4fbaf1d1a740-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9910b644-86b9-44e7-856e-4fbaf1d1a740\") " pod="openstack/glance-default-external-api-0" Feb 28 13:38:01 crc kubenswrapper[4897]: I0228 13:38:01.419936 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9910b644-86b9-44e7-856e-4fbaf1d1a740-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9910b644-86b9-44e7-856e-4fbaf1d1a740\") " pod="openstack/glance-default-external-api-0" Feb 28 13:38:01 crc kubenswrapper[4897]: I0228 13:38:01.430254 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsjr2\" (UniqueName: \"kubernetes.io/projected/9910b644-86b9-44e7-856e-4fbaf1d1a740-kube-api-access-xsjr2\") pod \"glance-default-external-api-0\" (UID: \"9910b644-86b9-44e7-856e-4fbaf1d1a740\") " pod="openstack/glance-default-external-api-0" Feb 28 13:38:01 crc kubenswrapper[4897]: I0228 13:38:01.446274 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"9910b644-86b9-44e7-856e-4fbaf1d1a740\") " pod="openstack/glance-default-external-api-0" Feb 28 13:38:01 crc kubenswrapper[4897]: I0228 13:38:01.501753 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 28 13:38:02 crc kubenswrapper[4897]: I0228 13:38:02.470099 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f964b88-06a6-4ed0-9e1a-6a7338fba9be" path="/var/lib/kubelet/pods/1f964b88-06a6-4ed0-9e1a-6a7338fba9be/volumes" Feb 28 13:38:07 crc kubenswrapper[4897]: I0228 13:38:07.778005 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57979d558f-jkqtc" podUID="4fdf7502-e691-4668-86f9-256befb8cb69" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.136:5353: i/o timeout" Feb 28 13:38:09 crc kubenswrapper[4897]: E0228 13:38:09.296699 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:38:10 crc kubenswrapper[4897]: I0228 13:38:10.885653 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nlm9z" Feb 28 13:38:10 crc kubenswrapper[4897]: I0228 13:38:10.893875 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57979d558f-jkqtc" Feb 28 13:38:10 crc kubenswrapper[4897]: I0228 13:38:10.916975 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4fdf7502-e691-4668-86f9-256befb8cb69-ovsdbserver-nb\") pod \"4fdf7502-e691-4668-86f9-256befb8cb69\" (UID: \"4fdf7502-e691-4668-86f9-256befb8cb69\") " Feb 28 13:38:10 crc kubenswrapper[4897]: I0228 13:38:10.917061 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b379adc-1a39-4972-80a3-74161c42728a-config-data\") pod \"8b379adc-1a39-4972-80a3-74161c42728a\" (UID: \"8b379adc-1a39-4972-80a3-74161c42728a\") " Feb 28 13:38:10 crc kubenswrapper[4897]: I0228 13:38:10.917111 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8b379adc-1a39-4972-80a3-74161c42728a-fernet-keys\") pod \"8b379adc-1a39-4972-80a3-74161c42728a\" (UID: \"8b379adc-1a39-4972-80a3-74161c42728a\") " Feb 28 13:38:10 crc kubenswrapper[4897]: I0228 13:38:10.917160 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4fdf7502-e691-4668-86f9-256befb8cb69-dns-svc\") pod \"4fdf7502-e691-4668-86f9-256befb8cb69\" (UID: \"4fdf7502-e691-4668-86f9-256befb8cb69\") " Feb 28 13:38:10 crc kubenswrapper[4897]: I0228 13:38:10.917526 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cnxz\" (UniqueName: \"kubernetes.io/projected/4fdf7502-e691-4668-86f9-256befb8cb69-kube-api-access-6cnxz\") pod \"4fdf7502-e691-4668-86f9-256befb8cb69\" (UID: \"4fdf7502-e691-4668-86f9-256befb8cb69\") " Feb 28 13:38:10 crc kubenswrapper[4897]: I0228 13:38:10.917612 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" 
(UniqueName: \"kubernetes.io/secret/8b379adc-1a39-4972-80a3-74161c42728a-credential-keys\") pod \"8b379adc-1a39-4972-80a3-74161c42728a\" (UID: \"8b379adc-1a39-4972-80a3-74161c42728a\") " Feb 28 13:38:10 crc kubenswrapper[4897]: I0228 13:38:10.917659 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4fdf7502-e691-4668-86f9-256befb8cb69-dns-swift-storage-0\") pod \"4fdf7502-e691-4668-86f9-256befb8cb69\" (UID: \"4fdf7502-e691-4668-86f9-256befb8cb69\") " Feb 28 13:38:10 crc kubenswrapper[4897]: I0228 13:38:10.917721 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b379adc-1a39-4972-80a3-74161c42728a-combined-ca-bundle\") pod \"8b379adc-1a39-4972-80a3-74161c42728a\" (UID: \"8b379adc-1a39-4972-80a3-74161c42728a\") " Feb 28 13:38:10 crc kubenswrapper[4897]: I0228 13:38:10.917767 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b379adc-1a39-4972-80a3-74161c42728a-scripts\") pod \"8b379adc-1a39-4972-80a3-74161c42728a\" (UID: \"8b379adc-1a39-4972-80a3-74161c42728a\") " Feb 28 13:38:10 crc kubenswrapper[4897]: I0228 13:38:10.917869 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fdf7502-e691-4668-86f9-256befb8cb69-config\") pod \"4fdf7502-e691-4668-86f9-256befb8cb69\" (UID: \"4fdf7502-e691-4668-86f9-256befb8cb69\") " Feb 28 13:38:10 crc kubenswrapper[4897]: I0228 13:38:10.917903 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4fdf7502-e691-4668-86f9-256befb8cb69-ovsdbserver-sb\") pod \"4fdf7502-e691-4668-86f9-256befb8cb69\" (UID: \"4fdf7502-e691-4668-86f9-256befb8cb69\") " Feb 28 13:38:10 crc kubenswrapper[4897]: I0228 
13:38:10.918026 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nckrl\" (UniqueName: \"kubernetes.io/projected/8b379adc-1a39-4972-80a3-74161c42728a-kube-api-access-nckrl\") pod \"8b379adc-1a39-4972-80a3-74161c42728a\" (UID: \"8b379adc-1a39-4972-80a3-74161c42728a\") " Feb 28 13:38:10 crc kubenswrapper[4897]: I0228 13:38:10.925259 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b379adc-1a39-4972-80a3-74161c42728a-kube-api-access-nckrl" (OuterVolumeSpecName: "kube-api-access-nckrl") pod "8b379adc-1a39-4972-80a3-74161c42728a" (UID: "8b379adc-1a39-4972-80a3-74161c42728a"). InnerVolumeSpecName "kube-api-access-nckrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:38:10 crc kubenswrapper[4897]: I0228 13:38:10.955389 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b379adc-1a39-4972-80a3-74161c42728a-scripts" (OuterVolumeSpecName: "scripts") pod "8b379adc-1a39-4972-80a3-74161c42728a" (UID: "8b379adc-1a39-4972-80a3-74161c42728a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:10 crc kubenswrapper[4897]: I0228 13:38:10.960908 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b379adc-1a39-4972-80a3-74161c42728a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8b379adc-1a39-4972-80a3-74161c42728a" (UID: "8b379adc-1a39-4972-80a3-74161c42728a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:10 crc kubenswrapper[4897]: I0228 13:38:10.960980 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fdf7502-e691-4668-86f9-256befb8cb69-kube-api-access-6cnxz" (OuterVolumeSpecName: "kube-api-access-6cnxz") pod "4fdf7502-e691-4668-86f9-256befb8cb69" (UID: "4fdf7502-e691-4668-86f9-256befb8cb69"). 
InnerVolumeSpecName "kube-api-access-6cnxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:38:10 crc kubenswrapper[4897]: I0228 13:38:10.965641 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b379adc-1a39-4972-80a3-74161c42728a-config-data" (OuterVolumeSpecName: "config-data") pod "8b379adc-1a39-4972-80a3-74161c42728a" (UID: "8b379adc-1a39-4972-80a3-74161c42728a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:10 crc kubenswrapper[4897]: I0228 13:38:10.966698 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b379adc-1a39-4972-80a3-74161c42728a-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "8b379adc-1a39-4972-80a3-74161c42728a" (UID: "8b379adc-1a39-4972-80a3-74161c42728a"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:10 crc kubenswrapper[4897]: I0228 13:38:10.979683 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b379adc-1a39-4972-80a3-74161c42728a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b379adc-1a39-4972-80a3-74161c42728a" (UID: "8b379adc-1a39-4972-80a3-74161c42728a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:10 crc kubenswrapper[4897]: I0228 13:38:10.990677 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fdf7502-e691-4668-86f9-256befb8cb69-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4fdf7502-e691-4668-86f9-256befb8cb69" (UID: "4fdf7502-e691-4668-86f9-256befb8cb69"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:38:11 crc kubenswrapper[4897]: I0228 13:38:11.004149 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fdf7502-e691-4668-86f9-256befb8cb69-config" (OuterVolumeSpecName: "config") pod "4fdf7502-e691-4668-86f9-256befb8cb69" (UID: "4fdf7502-e691-4668-86f9-256befb8cb69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:38:11 crc kubenswrapper[4897]: I0228 13:38:11.016629 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fdf7502-e691-4668-86f9-256befb8cb69-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4fdf7502-e691-4668-86f9-256befb8cb69" (UID: "4fdf7502-e691-4668-86f9-256befb8cb69"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:38:11 crc kubenswrapper[4897]: I0228 13:38:11.021386 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fdf7502-e691-4668-86f9-256befb8cb69-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:11 crc kubenswrapper[4897]: I0228 13:38:11.021423 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nckrl\" (UniqueName: \"kubernetes.io/projected/8b379adc-1a39-4972-80a3-74161c42728a-kube-api-access-nckrl\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:11 crc kubenswrapper[4897]: I0228 13:38:11.021436 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4fdf7502-e691-4668-86f9-256befb8cb69-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:11 crc kubenswrapper[4897]: I0228 13:38:11.021450 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b379adc-1a39-4972-80a3-74161c42728a-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 
13:38:11 crc kubenswrapper[4897]: I0228 13:38:11.021460 4897 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8b379adc-1a39-4972-80a3-74161c42728a-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:11 crc kubenswrapper[4897]: I0228 13:38:11.021470 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cnxz\" (UniqueName: \"kubernetes.io/projected/4fdf7502-e691-4668-86f9-256befb8cb69-kube-api-access-6cnxz\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:11 crc kubenswrapper[4897]: I0228 13:38:11.021507 4897 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8b379adc-1a39-4972-80a3-74161c42728a-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:11 crc kubenswrapper[4897]: I0228 13:38:11.021522 4897 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4fdf7502-e691-4668-86f9-256befb8cb69-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:11 crc kubenswrapper[4897]: I0228 13:38:11.021532 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b379adc-1a39-4972-80a3-74161c42728a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:11 crc kubenswrapper[4897]: I0228 13:38:11.021541 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b379adc-1a39-4972-80a3-74161c42728a-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:11 crc kubenswrapper[4897]: I0228 13:38:11.023913 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fdf7502-e691-4668-86f9-256befb8cb69-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4fdf7502-e691-4668-86f9-256befb8cb69" (UID: "4fdf7502-e691-4668-86f9-256befb8cb69"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:38:11 crc kubenswrapper[4897]: I0228 13:38:11.031877 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fdf7502-e691-4668-86f9-256befb8cb69-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4fdf7502-e691-4668-86f9-256befb8cb69" (UID: "4fdf7502-e691-4668-86f9-256befb8cb69"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:38:11 crc kubenswrapper[4897]: I0228 13:38:11.123376 4897 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4fdf7502-e691-4668-86f9-256befb8cb69-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:11 crc kubenswrapper[4897]: I0228 13:38:11.123417 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4fdf7502-e691-4668-86f9-256befb8cb69-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:11 crc kubenswrapper[4897]: I0228 13:38:11.197480 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-nlm9z" Feb 28 13:38:11 crc kubenswrapper[4897]: I0228 13:38:11.197491 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nlm9z" event={"ID":"8b379adc-1a39-4972-80a3-74161c42728a","Type":"ContainerDied","Data":"dcfff36f7654594dbf9a861cc7e377967e8f8cfb899c596987c944c244b989c5"} Feb 28 13:38:11 crc kubenswrapper[4897]: I0228 13:38:11.197667 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcfff36f7654594dbf9a861cc7e377967e8f8cfb899c596987c944c244b989c5" Feb 28 13:38:11 crc kubenswrapper[4897]: I0228 13:38:11.199770 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57979d558f-jkqtc" event={"ID":"4fdf7502-e691-4668-86f9-256befb8cb69","Type":"ContainerDied","Data":"d83b2808281c5d5dc6dc1459ca9ec92511a575b12931bdfbb7a2f8146da3d0d1"} Feb 28 13:38:11 crc kubenswrapper[4897]: I0228 13:38:11.199852 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57979d558f-jkqtc" Feb 28 13:38:11 crc kubenswrapper[4897]: I0228 13:38:11.236987 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57979d558f-jkqtc"] Feb 28 13:38:11 crc kubenswrapper[4897]: I0228 13:38:11.247938 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57979d558f-jkqtc"] Feb 28 13:38:11 crc kubenswrapper[4897]: I0228 13:38:11.971451 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-nlm9z"] Feb 28 13:38:11 crc kubenswrapper[4897]: I0228 13:38:11.984105 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-nlm9z"] Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.076244 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-9lkmb"] Feb 28 13:38:12 crc kubenswrapper[4897]: E0228 13:38:12.076684 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fdf7502-e691-4668-86f9-256befb8cb69" containerName="init" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.076706 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fdf7502-e691-4668-86f9-256befb8cb69" containerName="init" Feb 28 13:38:12 crc kubenswrapper[4897]: E0228 13:38:12.076737 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fdf7502-e691-4668-86f9-256befb8cb69" containerName="dnsmasq-dns" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.076745 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fdf7502-e691-4668-86f9-256befb8cb69" containerName="dnsmasq-dns" Feb 28 13:38:12 crc kubenswrapper[4897]: E0228 13:38:12.076771 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b379adc-1a39-4972-80a3-74161c42728a" containerName="keystone-bootstrap" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.076780 4897 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8b379adc-1a39-4972-80a3-74161c42728a" containerName="keystone-bootstrap" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.077022 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b379adc-1a39-4972-80a3-74161c42728a" containerName="keystone-bootstrap" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.077051 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fdf7502-e691-4668-86f9-256befb8cb69" containerName="dnsmasq-dns" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.077779 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-9lkmb" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.080776 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.080814 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.080838 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-qxw9x" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.081115 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.080837 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.089663 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-9lkmb"] Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.149512 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dg9v\" (UniqueName: \"kubernetes.io/projected/8eb2bcb4-6f6f-4a44-813d-d5e2e2597265-kube-api-access-7dg9v\") pod \"keystone-bootstrap-9lkmb\" (UID: 
\"8eb2bcb4-6f6f-4a44-813d-d5e2e2597265\") " pod="openstack/keystone-bootstrap-9lkmb" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.149642 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eb2bcb4-6f6f-4a44-813d-d5e2e2597265-combined-ca-bundle\") pod \"keystone-bootstrap-9lkmb\" (UID: \"8eb2bcb4-6f6f-4a44-813d-d5e2e2597265\") " pod="openstack/keystone-bootstrap-9lkmb" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.149714 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8eb2bcb4-6f6f-4a44-813d-d5e2e2597265-credential-keys\") pod \"keystone-bootstrap-9lkmb\" (UID: \"8eb2bcb4-6f6f-4a44-813d-d5e2e2597265\") " pod="openstack/keystone-bootstrap-9lkmb" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.149775 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eb2bcb4-6f6f-4a44-813d-d5e2e2597265-config-data\") pod \"keystone-bootstrap-9lkmb\" (UID: \"8eb2bcb4-6f6f-4a44-813d-d5e2e2597265\") " pod="openstack/keystone-bootstrap-9lkmb" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.149802 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8eb2bcb4-6f6f-4a44-813d-d5e2e2597265-scripts\") pod \"keystone-bootstrap-9lkmb\" (UID: \"8eb2bcb4-6f6f-4a44-813d-d5e2e2597265\") " pod="openstack/keystone-bootstrap-9lkmb" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.149867 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8eb2bcb4-6f6f-4a44-813d-d5e2e2597265-fernet-keys\") pod \"keystone-bootstrap-9lkmb\" (UID: 
\"8eb2bcb4-6f6f-4a44-813d-d5e2e2597265\") " pod="openstack/keystone-bootstrap-9lkmb" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.196307 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-55d64677cc-lw8j7" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.209052 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55d64677cc-lw8j7" event={"ID":"5150cd00-34c3-40d8-bacd-0c9858fbab6b","Type":"ContainerDied","Data":"fe2c29830940cebf630228cc32eb2334556062b7e9acf9f7bbea78834cbe8c24"} Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.209169 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-55d64677cc-lw8j7" Feb 28 13:38:12 crc kubenswrapper[4897]: E0228 13:38:12.224852 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Feb 28 13:38:12 crc kubenswrapper[4897]: E0228 13:38:12.224906 4897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Feb 28 13:38:12 crc kubenswrapper[4897]: E0228 13:38:12.225051 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:38.102.83.80:5001/podified-master-centos10/openstack-cinder-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5752j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-h59fj_openstack(bd9edcf1-516a-46a6-a77b-5061505a58d7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 13:38:12 crc kubenswrapper[4897]: E0228 13:38:12.226168 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-h59fj" podUID="bd9edcf1-516a-46a6-a77b-5061505a58d7" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.251050 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sclf\" (UniqueName: \"kubernetes.io/projected/5150cd00-34c3-40d8-bacd-0c9858fbab6b-kube-api-access-7sclf\") pod \"5150cd00-34c3-40d8-bacd-0c9858fbab6b\" (UID: \"5150cd00-34c3-40d8-bacd-0c9858fbab6b\") " Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.251123 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5150cd00-34c3-40d8-bacd-0c9858fbab6b-scripts\") pod \"5150cd00-34c3-40d8-bacd-0c9858fbab6b\" (UID: \"5150cd00-34c3-40d8-bacd-0c9858fbab6b\") " Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.251162 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5150cd00-34c3-40d8-bacd-0c9858fbab6b-config-data\") pod \"5150cd00-34c3-40d8-bacd-0c9858fbab6b\" (UID: \"5150cd00-34c3-40d8-bacd-0c9858fbab6b\") " Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.251193 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/5150cd00-34c3-40d8-bacd-0c9858fbab6b-logs\") pod \"5150cd00-34c3-40d8-bacd-0c9858fbab6b\" (UID: \"5150cd00-34c3-40d8-bacd-0c9858fbab6b\") " Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.251270 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5150cd00-34c3-40d8-bacd-0c9858fbab6b-horizon-secret-key\") pod \"5150cd00-34c3-40d8-bacd-0c9858fbab6b\" (UID: \"5150cd00-34c3-40d8-bacd-0c9858fbab6b\") " Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.251557 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dg9v\" (UniqueName: \"kubernetes.io/projected/8eb2bcb4-6f6f-4a44-813d-d5e2e2597265-kube-api-access-7dg9v\") pod \"keystone-bootstrap-9lkmb\" (UID: \"8eb2bcb4-6f6f-4a44-813d-d5e2e2597265\") " pod="openstack/keystone-bootstrap-9lkmb" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.251624 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eb2bcb4-6f6f-4a44-813d-d5e2e2597265-combined-ca-bundle\") pod \"keystone-bootstrap-9lkmb\" (UID: \"8eb2bcb4-6f6f-4a44-813d-d5e2e2597265\") " pod="openstack/keystone-bootstrap-9lkmb" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.251672 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8eb2bcb4-6f6f-4a44-813d-d5e2e2597265-credential-keys\") pod \"keystone-bootstrap-9lkmb\" (UID: \"8eb2bcb4-6f6f-4a44-813d-d5e2e2597265\") " pod="openstack/keystone-bootstrap-9lkmb" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.251712 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eb2bcb4-6f6f-4a44-813d-d5e2e2597265-config-data\") pod \"keystone-bootstrap-9lkmb\" (UID: 
\"8eb2bcb4-6f6f-4a44-813d-d5e2e2597265\") " pod="openstack/keystone-bootstrap-9lkmb" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.251735 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8eb2bcb4-6f6f-4a44-813d-d5e2e2597265-scripts\") pod \"keystone-bootstrap-9lkmb\" (UID: \"8eb2bcb4-6f6f-4a44-813d-d5e2e2597265\") " pod="openstack/keystone-bootstrap-9lkmb" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.251758 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8eb2bcb4-6f6f-4a44-813d-d5e2e2597265-fernet-keys\") pod \"keystone-bootstrap-9lkmb\" (UID: \"8eb2bcb4-6f6f-4a44-813d-d5e2e2597265\") " pod="openstack/keystone-bootstrap-9lkmb" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.252807 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5150cd00-34c3-40d8-bacd-0c9858fbab6b-logs" (OuterVolumeSpecName: "logs") pod "5150cd00-34c3-40d8-bacd-0c9858fbab6b" (UID: "5150cd00-34c3-40d8-bacd-0c9858fbab6b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.253215 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5150cd00-34c3-40d8-bacd-0c9858fbab6b-scripts" (OuterVolumeSpecName: "scripts") pod "5150cd00-34c3-40d8-bacd-0c9858fbab6b" (UID: "5150cd00-34c3-40d8-bacd-0c9858fbab6b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.253811 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5150cd00-34c3-40d8-bacd-0c9858fbab6b-config-data" (OuterVolumeSpecName: "config-data") pod "5150cd00-34c3-40d8-bacd-0c9858fbab6b" (UID: "5150cd00-34c3-40d8-bacd-0c9858fbab6b"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.258095 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5150cd00-34c3-40d8-bacd-0c9858fbab6b-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "5150cd00-34c3-40d8-bacd-0c9858fbab6b" (UID: "5150cd00-34c3-40d8-bacd-0c9858fbab6b"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.258398 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eb2bcb4-6f6f-4a44-813d-d5e2e2597265-combined-ca-bundle\") pod \"keystone-bootstrap-9lkmb\" (UID: \"8eb2bcb4-6f6f-4a44-813d-d5e2e2597265\") " pod="openstack/keystone-bootstrap-9lkmb" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.259155 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5150cd00-34c3-40d8-bacd-0c9858fbab6b-kube-api-access-7sclf" (OuterVolumeSpecName: "kube-api-access-7sclf") pod "5150cd00-34c3-40d8-bacd-0c9858fbab6b" (UID: "5150cd00-34c3-40d8-bacd-0c9858fbab6b"). InnerVolumeSpecName "kube-api-access-7sclf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.259230 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8eb2bcb4-6f6f-4a44-813d-d5e2e2597265-credential-keys\") pod \"keystone-bootstrap-9lkmb\" (UID: \"8eb2bcb4-6f6f-4a44-813d-d5e2e2597265\") " pod="openstack/keystone-bootstrap-9lkmb" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.259285 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eb2bcb4-6f6f-4a44-813d-d5e2e2597265-config-data\") pod \"keystone-bootstrap-9lkmb\" (UID: \"8eb2bcb4-6f6f-4a44-813d-d5e2e2597265\") " pod="openstack/keystone-bootstrap-9lkmb" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.259858 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8eb2bcb4-6f6f-4a44-813d-d5e2e2597265-scripts\") pod \"keystone-bootstrap-9lkmb\" (UID: \"8eb2bcb4-6f6f-4a44-813d-d5e2e2597265\") " pod="openstack/keystone-bootstrap-9lkmb" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.273682 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dg9v\" (UniqueName: \"kubernetes.io/projected/8eb2bcb4-6f6f-4a44-813d-d5e2e2597265-kube-api-access-7dg9v\") pod \"keystone-bootstrap-9lkmb\" (UID: \"8eb2bcb4-6f6f-4a44-813d-d5e2e2597265\") " pod="openstack/keystone-bootstrap-9lkmb" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.276267 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8eb2bcb4-6f6f-4a44-813d-d5e2e2597265-fernet-keys\") pod \"keystone-bootstrap-9lkmb\" (UID: \"8eb2bcb4-6f6f-4a44-813d-d5e2e2597265\") " pod="openstack/keystone-bootstrap-9lkmb" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.353775 4897 reconciler_common.go:293] 
"Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5150cd00-34c3-40d8-bacd-0c9858fbab6b-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.354084 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5150cd00-34c3-40d8-bacd-0c9858fbab6b-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.354100 4897 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5150cd00-34c3-40d8-bacd-0c9858fbab6b-logs\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.354114 4897 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5150cd00-34c3-40d8-bacd-0c9858fbab6b-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.354128 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sclf\" (UniqueName: \"kubernetes.io/projected/5150cd00-34c3-40d8-bacd-0c9858fbab6b-kube-api-access-7sclf\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.468553 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fdf7502-e691-4668-86f9-256befb8cb69" path="/var/lib/kubelet/pods/4fdf7502-e691-4668-86f9-256befb8cb69/volumes" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.469249 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b379adc-1a39-4972-80a3-74161c42728a" path="/var/lib/kubelet/pods/8b379adc-1a39-4972-80a3-74161c42728a/volumes" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.493897 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-9lkmb" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.591263 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-55d64677cc-lw8j7"] Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.599009 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-55d64677cc-lw8j7"] Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.779389 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57979d558f-jkqtc" podUID="4fdf7502-e691-4668-86f9-256befb8cb69" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.136:5353: i/o timeout" Feb 28 13:38:12 crc kubenswrapper[4897]: E0228 13:38:12.958882 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34" Feb 28 13:38:12 crc kubenswrapper[4897]: E0228 13:38:12.959015 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:thanos-sidecar,Image:registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34,Command:[],Args:[sidecar --prometheus.url=http://localhost:9090/ --grpc-address=:10901 --http-address=:10902 --log.level=info 
--prometheus.http-client-file=/etc/thanos/config/prometheus.http-client-file.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:10902,Protocol:TCP,HostIP:,},ContainerPort{Name:grpc,HostPort:0,ContainerPort:10901,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:thanos-prometheus-http-client-file,ReadOnly:false,MountPath:/etc/thanos/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fr856,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 28 13:38:12 crc kubenswrapper[4897]: E0228 13:38:12.960283 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"prometheus\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/cluster-observability-operator/prometheus-rhel9@sha256=b3c4bd9e6b46c2065b376c6143facb68f7d37997214f5cad5762b2f5e4eca201/signature-4: status 500 (Internal Server Error)\", failed to \"StartContainer\" for \"thanos-sidecar\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/prometheus-metric-storage-0" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" Feb 28 13:38:12 crc kubenswrapper[4897]: I0228 13:38:12.977453 4897 scope.go:117] "RemoveContainer" containerID="fbfe90ad209b1c03b72e5dcddcd6a857e0be775fdb611338217d0f9b18b36e0e" Feb 28 13:38:13 crc kubenswrapper[4897]: E0228 13:38:13.221704 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.80:5001/podified-master-centos10/openstack-cinder-api:watcher_latest\\\"\"" pod="openstack/cinder-db-sync-h59fj" podUID="bd9edcf1-516a-46a6-a77b-5061505a58d7" Feb 28 13:38:13 crc kubenswrapper[4897]: E0228 13:38:13.461188 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Feb 28 13:38:13 crc kubenswrapper[4897]: E0228 13:38:13.461396 4897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Feb 28 13:38:13 crc kubenswrapper[4897]: E0228 13:38:13.461495 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:38.102.83.80:5001/podified-master-centos10/openstack-barbican-api:watcher_latest,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dv9m8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-fgtj6_openstack(661850a9-a877-476b-b3ae-a6c6f3b3676a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 13:38:13 crc kubenswrapper[4897]: E0228 13:38:13.463371 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-fgtj6" 
podUID="661850a9-a877-476b-b3ae-a6c6f3b3676a" Feb 28 13:38:13 crc kubenswrapper[4897]: I0228 13:38:13.586121 4897 scope.go:117] "RemoveContainer" containerID="d349c1a341a6d70e2d26d824328e437b41aa4a3630dbca9b47f87c1b38868b4a" Feb 28 13:38:13 crc kubenswrapper[4897]: I0228 13:38:13.717024 4897 scope.go:117] "RemoveContainer" containerID="093bd17ca18770d2b42652028edd6527220331195fb0e08323a644604060b549" Feb 28 13:38:13 crc kubenswrapper[4897]: I0228 13:38:13.970538 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6fb67c45d-s75qr"] Feb 28 13:38:14 crc kubenswrapper[4897]: E0228 13:38:14.044552 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/prometheus-rhel9@sha256=b3c4bd9e6b46c2065b376c6143facb68f7d37997214f5cad5762b2f5e4eca201/signature-4: status 500 (Internal Server Error)" image="registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741" Feb 28 13:38:14 crc kubenswrapper[4897]: E0228 13:38:14.044942 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus,Image:registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741,Command:[],Args:[--config.file=/etc/prometheus/config_out/prometheus.env.yaml --web.enable-lifecycle --web.enable-remote-write-receiver --web.route-prefix=/ --storage.tsdb.retention.time=24h --storage.tsdb.path=/prometheus --web.config.file=/etc/prometheus/web_config/web-config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:web,HostPort:0,ContainerPort:9090,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-out,ReadOnly:true,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-assets,ReadOnly:true,MountPath:/etc/prometheus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-db,ReadOnly:false,MountPath:/prometheus,SubPath:prometheus-db,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-0,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-1,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-2,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:web-config,ReadOnly:true,MountPath:/etc/prometheus/web_config/web-config.yaml,SubPath:web-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fr856,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/healthy,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 
web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:15,SuccessThreshold:1,FailureThreshold:60,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/prometheus-rhel9@sha256=b3c4bd9e6b46c2065b376c6143facb68f7d37997214f5cad5762b2f5e4eca201/signature-4: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 13:38:14 crc kubenswrapper[4897]: I0228 13:38:14.046379 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7df779db98-ljwk8"] Feb 28 13:38:14 crc kubenswrapper[4897]: E0228 13:38:14.050014 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"prometheus\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/cluster-observability-operator/prometheus-rhel9@sha256=b3c4bd9e6b46c2065b376c6143facb68f7d37997214f5cad5762b2f5e4eca201/signature-4: status 500 (Internal Server Error)\", failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"]" pod="openstack/prometheus-metric-storage-0" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" Feb 28 13:38:14 crc kubenswrapper[4897]: W0228 13:38:14.056588 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0db6a4f_19e4_488c_bc45_9619565bdf57.slice/crio-7e52af5b460b6efa2697db2e7c2857a8b4c61ca3895aab5fa63c185a03f4379b WatchSource:0}: Error finding container 7e52af5b460b6efa2697db2e7c2857a8b4c61ca3895aab5fa63c185a03f4379b: Status 404 returned error can't find the container with id 7e52af5b460b6efa2697db2e7c2857a8b4c61ca3895aab5fa63c185a03f4379b Feb 28 13:38:14 crc kubenswrapper[4897]: I0228 13:38:14.152188 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 28 13:38:14 crc kubenswrapper[4897]: I0228 13:38:14.164746 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 28 13:38:14 crc kubenswrapper[4897]: I0228 13:38:14.253661 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"2b88f822-8f2a-473a-b388-b144a37ba4f0","Type":"ContainerStarted","Data":"3d6214c4385cf6bd8770f8f0a8389c3a3516af10f5656ab02edaaa99dba58c77"} Feb 28 13:38:14 crc kubenswrapper[4897]: I0228 13:38:14.255711 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-85bccd86cc-mcgvg" 
event={"ID":"cef93ead-4ac6-4a39-aa6f-17c85a2a9a81","Type":"ContainerStarted","Data":"9721bc594ef84e9aee28c0e39b2571bc5828856084c9ee6a6900df961e383587"} Feb 28 13:38:14 crc kubenswrapper[4897]: I0228 13:38:14.257834 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538096-ws9qt" event={"ID":"6e94c0b2-21a6-496c-8188-dfcaf0d66b2b","Type":"ContainerStarted","Data":"f64c40bb8bb0bead2f13fc078c8aa3c8558c96702c83bab99d1dcf0f10f6f277"} Feb 28 13:38:14 crc kubenswrapper[4897]: I0228 13:38:14.276436 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7aff986-c99b-43a7-afc8-b9194ce17385","Type":"ContainerStarted","Data":"b8c42906a6f7073722e82fc7f395ccaac5c92a998f9d6922a6d37271f20323d1"} Feb 28 13:38:14 crc kubenswrapper[4897]: I0228 13:38:14.282296 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6fb67c45d-s75qr" event={"ID":"6102738c-6c77-48c6-87e1-67853cf8ce43","Type":"ContainerStarted","Data":"e9366943af5777da68ff93613c13d1749e4fbc9d264075295e943ba0f739600f"} Feb 28 13:38:14 crc kubenswrapper[4897]: I0228 13:38:14.283731 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"2ff09d8c-69de-4c11-8e94-90fce8f42387","Type":"ContainerStarted","Data":"81a1812b7f7a5859516c7569aa842a1e401880fece7cec8b5a01cacb47701c80"} Feb 28 13:38:14 crc kubenswrapper[4897]: I0228 13:38:14.287776 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7df779db98-ljwk8" event={"ID":"e0db6a4f-19e4-488c-bc45-9619565bdf57","Type":"ContainerStarted","Data":"7e52af5b460b6efa2697db2e7c2857a8b4c61ca3895aab5fa63c185a03f4379b"} Feb 28 13:38:14 crc kubenswrapper[4897]: I0228 13:38:14.299897 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68bc5769f5-kt85c" 
event={"ID":"015cae83-dbd9-4d4b-84f6-e90aa405acf2","Type":"ContainerStarted","Data":"c99617d67e5f0d9c56affd27bad0e543044ee0bfb1d0e4b8f66e6706e8ea3ea1"} Feb 28 13:38:14 crc kubenswrapper[4897]: E0228 13:38:14.300574 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.80:5001/podified-master-centos10/openstack-barbican-api:watcher_latest\\\"\"" pod="openstack/barbican-db-sync-fgtj6" podUID="661850a9-a877-476b-b3ae-a6c6f3b3676a" Feb 28 13:38:14 crc kubenswrapper[4897]: I0228 13:38:14.308639 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29538096-ws9qt" podStartSLOduration=15.295927807 podStartE2EDuration="2m14.308625566s" podCreationTimestamp="2026-02-28 13:36:00 +0000 UTC" firstStartedPulling="2026-02-28 13:36:14.58441718 +0000 UTC m=+1188.826737837" lastFinishedPulling="2026-02-28 13:38:13.597114939 +0000 UTC m=+1307.839435596" observedRunningTime="2026-02-28 13:38:14.296735781 +0000 UTC m=+1308.539056438" watchObservedRunningTime="2026-02-28 13:38:14.308625566 +0000 UTC m=+1308.550946223" Feb 28 13:38:14 crc kubenswrapper[4897]: I0228 13:38:14.332354 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538098-9rtv9"] Feb 28 13:38:14 crc kubenswrapper[4897]: I0228 13:38:14.347431 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-9lkmb"] Feb 28 13:38:14 crc kubenswrapper[4897]: I0228 13:38:14.358737 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 28 13:38:14 crc kubenswrapper[4897]: W0228 13:38:14.383881 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea8b2284_fafa_4fca_b367_9cffa5f5a201.slice/crio-5939462f6ecca18c2b701afa47d112f5ae5630f4889b31ef9076ebedcc429c19 
WatchSource:0}: Error finding container 5939462f6ecca18c2b701afa47d112f5ae5630f4889b31ef9076ebedcc429c19: Status 404 returned error can't find the container with id 5939462f6ecca18c2b701afa47d112f5ae5630f4889b31ef9076ebedcc429c19 Feb 28 13:38:14 crc kubenswrapper[4897]: W0228 13:38:14.390503 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbeef463_3901_42c4_81ed_d97e793fb8b5.slice/crio-3a3f3f17049f1047a96375f5a5a92a47a942ea51f6b0772ebc77fe8b598cefd2 WatchSource:0}: Error finding container 3a3f3f17049f1047a96375f5a5a92a47a942ea51f6b0772ebc77fe8b598cefd2: Status 404 returned error can't find the container with id 3a3f3f17049f1047a96375f5a5a92a47a942ea51f6b0772ebc77fe8b598cefd2 Feb 28 13:38:14 crc kubenswrapper[4897]: I0228 13:38:14.436122 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 13:38:14 crc kubenswrapper[4897]: W0228 13:38:14.444448 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9910b644_86b9_44e7_856e_4fbaf1d1a740.slice/crio-76f4b02a367da1f9a2019e7ae27423da042b04a1733e069007549ccc2c3b6001 WatchSource:0}: Error finding container 76f4b02a367da1f9a2019e7ae27423da042b04a1733e069007549ccc2c3b6001: Status 404 returned error can't find the container with id 76f4b02a367da1f9a2019e7ae27423da042b04a1733e069007549ccc2c3b6001 Feb 28 13:38:14 crc kubenswrapper[4897]: I0228 13:38:14.472016 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5150cd00-34c3-40d8-bacd-0c9858fbab6b" path="/var/lib/kubelet/pods/5150cd00-34c3-40d8-bacd-0c9858fbab6b/volumes" Feb 28 13:38:15 crc kubenswrapper[4897]: I0228 13:38:15.315934 4897 generic.go:334] "Generic (PLEG): container finished" podID="99d7bd5a-52d0-4a8f-bd1d-542a957d815f" containerID="316f4ff8d86c10b2247ca63060e60068ea860f8805bf6b8a025f41581a628fb2" exitCode=0 Feb 28 13:38:15 
crc kubenswrapper[4897]: I0228 13:38:15.317669 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qc9bp" event={"ID":"99d7bd5a-52d0-4a8f-bd1d-542a957d815f","Type":"ContainerDied","Data":"316f4ff8d86c10b2247ca63060e60068ea860f8805bf6b8a025f41581a628fb2"} Feb 28 13:38:15 crc kubenswrapper[4897]: I0228 13:38:15.322113 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-85bccd86cc-mcgvg" event={"ID":"cef93ead-4ac6-4a39-aa6f-17c85a2a9a81","Type":"ContainerStarted","Data":"1a5891b0efb61498c194c201cf1403fc4c3055b8cd8e3b3452b8a2cc45cd0d86"} Feb 28 13:38:15 crc kubenswrapper[4897]: I0228 13:38:15.322294 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-85bccd86cc-mcgvg" podUID="cef93ead-4ac6-4a39-aa6f-17c85a2a9a81" containerName="horizon-log" containerID="cri-o://9721bc594ef84e9aee28c0e39b2571bc5828856084c9ee6a6900df961e383587" gracePeriod=30 Feb 28 13:38:15 crc kubenswrapper[4897]: I0228 13:38:15.322353 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-85bccd86cc-mcgvg" podUID="cef93ead-4ac6-4a39-aa6f-17c85a2a9a81" containerName="horizon" containerID="cri-o://1a5891b0efb61498c194c201cf1403fc4c3055b8cd8e3b3452b8a2cc45cd0d86" gracePeriod=30 Feb 28 13:38:15 crc kubenswrapper[4897]: I0228 13:38:15.324284 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9910b644-86b9-44e7-856e-4fbaf1d1a740","Type":"ContainerStarted","Data":"d9ea3b944a1498d6de5cbdbd7af3ded190197b2fc9767cd936f4808d440900ed"} Feb 28 13:38:15 crc kubenswrapper[4897]: I0228 13:38:15.324383 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9910b644-86b9-44e7-856e-4fbaf1d1a740","Type":"ContainerStarted","Data":"76f4b02a367da1f9a2019e7ae27423da042b04a1733e069007549ccc2c3b6001"} Feb 28 13:38:15 crc kubenswrapper[4897]: I0228 
13:38:15.325923 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"ea8b2284-fafa-4fca-b367-9cffa5f5a201","Type":"ContainerStarted","Data":"2424a9ab6a1e9b317522faf4cc0d53b4167eb8549d760e1763e2c7b49b7c5d84"} Feb 28 13:38:15 crc kubenswrapper[4897]: I0228 13:38:15.325988 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"ea8b2284-fafa-4fca-b367-9cffa5f5a201","Type":"ContainerStarted","Data":"5939462f6ecca18c2b701afa47d112f5ae5630f4889b31ef9076ebedcc429c19"} Feb 28 13:38:15 crc kubenswrapper[4897]: I0228 13:38:15.332048 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c199ad99-a479-4d3f-a78f-fce1c2889070","Type":"ContainerStarted","Data":"83f114246c751228ccae4606eacd654c641aace53762f92c046ad5fc99c26010"} Feb 28 13:38:15 crc kubenswrapper[4897]: I0228 13:38:15.332120 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c199ad99-a479-4d3f-a78f-fce1c2889070" containerName="glance-log" containerID="cri-o://f21848a10ad018f1f2a68714c51863cc87a58caa808c5be74d9984f1ab7e3383" gracePeriod=30 Feb 28 13:38:15 crc kubenswrapper[4897]: I0228 13:38:15.332148 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c199ad99-a479-4d3f-a78f-fce1c2889070" containerName="glance-httpd" containerID="cri-o://83f114246c751228ccae4606eacd654c641aace53762f92c046ad5fc99c26010" gracePeriod=30 Feb 28 13:38:15 crc kubenswrapper[4897]: I0228 13:38:15.334405 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7df779db98-ljwk8" event={"ID":"e0db6a4f-19e4-488c-bc45-9619565bdf57","Type":"ContainerStarted","Data":"e0d05620112a60935ef03285757f963106620643877522082c992181db994a8b"} Feb 28 13:38:15 crc kubenswrapper[4897]: I0228 13:38:15.345411 4897 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538098-9rtv9" event={"ID":"bbeef463-3901-42c4-81ed-d97e793fb8b5","Type":"ContainerStarted","Data":"3a3f3f17049f1047a96375f5a5a92a47a942ea51f6b0772ebc77fe8b598cefd2"} Feb 28 13:38:15 crc kubenswrapper[4897]: I0228 13:38:15.357181 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-85bccd86cc-mcgvg" podStartSLOduration=3.454030457 podStartE2EDuration="29.3571601s" podCreationTimestamp="2026-02-28 13:37:46 +0000 UTC" firstStartedPulling="2026-02-28 13:37:47.790459484 +0000 UTC m=+1282.032780141" lastFinishedPulling="2026-02-28 13:38:13.693589127 +0000 UTC m=+1307.935909784" observedRunningTime="2026-02-28 13:38:15.355563089 +0000 UTC m=+1309.597883746" watchObservedRunningTime="2026-02-28 13:38:15.3571601 +0000 UTC m=+1309.599480757" Feb 28 13:38:15 crc kubenswrapper[4897]: I0228 13:38:15.363794 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68bc5769f5-kt85c" event={"ID":"015cae83-dbd9-4d4b-84f6-e90aa405acf2","Type":"ContainerStarted","Data":"6dcf84bd9d647e4d07abd6c273a6af57add8f9bad79dad135081efbd28793f8e"} Feb 28 13:38:15 crc kubenswrapper[4897]: I0228 13:38:15.364008 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-68bc5769f5-kt85c" podUID="015cae83-dbd9-4d4b-84f6-e90aa405acf2" containerName="horizon" containerID="cri-o://6dcf84bd9d647e4d07abd6c273a6af57add8f9bad79dad135081efbd28793f8e" gracePeriod=30 Feb 28 13:38:15 crc kubenswrapper[4897]: I0228 13:38:15.364019 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-68bc5769f5-kt85c" podUID="015cae83-dbd9-4d4b-84f6-e90aa405acf2" containerName="horizon-log" containerID="cri-o://c99617d67e5f0d9c56affd27bad0e543044ee0bfb1d0e4b8f66e6706e8ea3ea1" gracePeriod=30 Feb 28 13:38:15 crc kubenswrapper[4897]: I0228 13:38:15.367114 4897 generic.go:334] "Generic (PLEG): container finished" 
podID="6e94c0b2-21a6-496c-8188-dfcaf0d66b2b" containerID="f64c40bb8bb0bead2f13fc078c8aa3c8558c96702c83bab99d1dcf0f10f6f277" exitCode=0 Feb 28 13:38:15 crc kubenswrapper[4897]: I0228 13:38:15.367264 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538096-ws9qt" event={"ID":"6e94c0b2-21a6-496c-8188-dfcaf0d66b2b","Type":"ContainerDied","Data":"f64c40bb8bb0bead2f13fc078c8aa3c8558c96702c83bab99d1dcf0f10f6f277"} Feb 28 13:38:15 crc kubenswrapper[4897]: I0228 13:38:15.370853 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9lkmb" event={"ID":"8eb2bcb4-6f6f-4a44-813d-d5e2e2597265","Type":"ContainerStarted","Data":"1f349f8a5bfb5ee92815ca4f6ca7875636abfa533848da899aafd240971f601a"} Feb 28 13:38:15 crc kubenswrapper[4897]: I0228 13:38:15.370891 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9lkmb" event={"ID":"8eb2bcb4-6f6f-4a44-813d-d5e2e2597265","Type":"ContainerStarted","Data":"700601762ef78c9a67e22ab661e5c345971e43b67a67b2bcc920d4d85487ffc8"} Feb 28 13:38:15 crc kubenswrapper[4897]: I0228 13:38:15.382752 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6fb67c45d-s75qr" event={"ID":"6102738c-6c77-48c6-87e1-67853cf8ce43","Type":"ContainerStarted","Data":"ac909c528d6a37a6661858d1af31fcfccc18f72e2110aae3aef663642f29b3a7"} Feb 28 13:38:15 crc kubenswrapper[4897]: I0228 13:38:15.385537 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=33.385520449 podStartE2EDuration="33.385520449s" podCreationTimestamp="2026-02-28 13:37:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:38:15.379057193 +0000 UTC m=+1309.621377870" watchObservedRunningTime="2026-02-28 13:38:15.385520449 +0000 UTC m=+1309.627841106" Feb 28 13:38:15 crc 
kubenswrapper[4897]: I0228 13:38:15.422924 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-9lkmb" podStartSLOduration=3.422901729 podStartE2EDuration="3.422901729s" podCreationTimestamp="2026-02-28 13:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:38:15.416430793 +0000 UTC m=+1309.658751440" watchObservedRunningTime="2026-02-28 13:38:15.422901729 +0000 UTC m=+1309.665222386" Feb 28 13:38:15 crc kubenswrapper[4897]: I0228 13:38:15.432871 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-68bc5769f5-kt85c" podStartSLOduration=4.521571584 podStartE2EDuration="33.432847314s" podCreationTimestamp="2026-02-28 13:37:42 +0000 UTC" firstStartedPulling="2026-02-28 13:37:44.047565043 +0000 UTC m=+1278.289885700" lastFinishedPulling="2026-02-28 13:38:12.958840773 +0000 UTC m=+1307.201161430" observedRunningTime="2026-02-28 13:38:15.400021391 +0000 UTC m=+1309.642342068" watchObservedRunningTime="2026-02-28 13:38:15.432847314 +0000 UTC m=+1309.675167971" Feb 28 13:38:16 crc kubenswrapper[4897]: I0228 13:38:16.400618 4897 generic.go:334] "Generic (PLEG): container finished" podID="c199ad99-a479-4d3f-a78f-fce1c2889070" containerID="83f114246c751228ccae4606eacd654c641aace53762f92c046ad5fc99c26010" exitCode=0 Feb 28 13:38:16 crc kubenswrapper[4897]: I0228 13:38:16.401908 4897 generic.go:334] "Generic (PLEG): container finished" podID="c199ad99-a479-4d3f-a78f-fce1c2889070" containerID="f21848a10ad018f1f2a68714c51863cc87a58caa808c5be74d9984f1ab7e3383" exitCode=143 Feb 28 13:38:16 crc kubenswrapper[4897]: I0228 13:38:16.400675 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c199ad99-a479-4d3f-a78f-fce1c2889070","Type":"ContainerDied","Data":"83f114246c751228ccae4606eacd654c641aace53762f92c046ad5fc99c26010"} Feb 
28 13:38:16 crc kubenswrapper[4897]: I0228 13:38:16.402157 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c199ad99-a479-4d3f-a78f-fce1c2889070","Type":"ContainerDied","Data":"f21848a10ad018f1f2a68714c51863cc87a58caa808c5be74d9984f1ab7e3383"} Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.031163 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qc9bp" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.069177 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538096-ws9qt" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.079413 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-85bccd86cc-mcgvg" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.122920 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99d7bd5a-52d0-4a8f-bd1d-542a957d815f-combined-ca-bundle\") pod \"99d7bd5a-52d0-4a8f-bd1d-542a957d815f\" (UID: \"99d7bd5a-52d0-4a8f-bd1d-542a957d815f\") " Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.123032 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqr7p\" (UniqueName: \"kubernetes.io/projected/6e94c0b2-21a6-496c-8188-dfcaf0d66b2b-kube-api-access-rqr7p\") pod \"6e94c0b2-21a6-496c-8188-dfcaf0d66b2b\" (UID: \"6e94c0b2-21a6-496c-8188-dfcaf0d66b2b\") " Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.123120 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnscv\" (UniqueName: \"kubernetes.io/projected/99d7bd5a-52d0-4a8f-bd1d-542a957d815f-kube-api-access-cnscv\") pod \"99d7bd5a-52d0-4a8f-bd1d-542a957d815f\" (UID: \"99d7bd5a-52d0-4a8f-bd1d-542a957d815f\") " Feb 28 13:38:17 crc 
kubenswrapper[4897]: I0228 13:38:17.123177 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/99d7bd5a-52d0-4a8f-bd1d-542a957d815f-config\") pod \"99d7bd5a-52d0-4a8f-bd1d-542a957d815f\" (UID: \"99d7bd5a-52d0-4a8f-bd1d-542a957d815f\") " Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.134163 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e94c0b2-21a6-496c-8188-dfcaf0d66b2b-kube-api-access-rqr7p" (OuterVolumeSpecName: "kube-api-access-rqr7p") pod "6e94c0b2-21a6-496c-8188-dfcaf0d66b2b" (UID: "6e94c0b2-21a6-496c-8188-dfcaf0d66b2b"). InnerVolumeSpecName "kube-api-access-rqr7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.139445 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99d7bd5a-52d0-4a8f-bd1d-542a957d815f-kube-api-access-cnscv" (OuterVolumeSpecName: "kube-api-access-cnscv") pod "99d7bd5a-52d0-4a8f-bd1d-542a957d815f" (UID: "99d7bd5a-52d0-4a8f-bd1d-542a957d815f"). InnerVolumeSpecName "kube-api-access-cnscv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.173958 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99d7bd5a-52d0-4a8f-bd1d-542a957d815f-config" (OuterVolumeSpecName: "config") pod "99d7bd5a-52d0-4a8f-bd1d-542a957d815f" (UID: "99d7bd5a-52d0-4a8f-bd1d-542a957d815f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.177084 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99d7bd5a-52d0-4a8f-bd1d-542a957d815f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "99d7bd5a-52d0-4a8f-bd1d-542a957d815f" (UID: "99d7bd5a-52d0-4a8f-bd1d-542a957d815f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.226109 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqr7p\" (UniqueName: \"kubernetes.io/projected/6e94c0b2-21a6-496c-8188-dfcaf0d66b2b-kube-api-access-rqr7p\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.226141 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnscv\" (UniqueName: \"kubernetes.io/projected/99d7bd5a-52d0-4a8f-bd1d-542a957d815f-kube-api-access-cnscv\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.226154 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/99d7bd5a-52d0-4a8f-bd1d-542a957d815f-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.226165 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99d7bd5a-52d0-4a8f-bd1d-542a957d815f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.229948 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.326701 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqpz8\" (UniqueName: \"kubernetes.io/projected/c199ad99-a479-4d3f-a78f-fce1c2889070-kube-api-access-tqpz8\") pod \"c199ad99-a479-4d3f-a78f-fce1c2889070\" (UID: \"c199ad99-a479-4d3f-a78f-fce1c2889070\") " Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.326741 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c199ad99-a479-4d3f-a78f-fce1c2889070-logs\") pod \"c199ad99-a479-4d3f-a78f-fce1c2889070\" (UID: \"c199ad99-a479-4d3f-a78f-fce1c2889070\") " Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.326775 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c199ad99-a479-4d3f-a78f-fce1c2889070-config-data\") pod \"c199ad99-a479-4d3f-a78f-fce1c2889070\" (UID: \"c199ad99-a479-4d3f-a78f-fce1c2889070\") " Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.326852 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c199ad99-a479-4d3f-a78f-fce1c2889070-scripts\") pod \"c199ad99-a479-4d3f-a78f-fce1c2889070\" (UID: \"c199ad99-a479-4d3f-a78f-fce1c2889070\") " Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.326911 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c199ad99-a479-4d3f-a78f-fce1c2889070-internal-tls-certs\") pod \"c199ad99-a479-4d3f-a78f-fce1c2889070\" (UID: \"c199ad99-a479-4d3f-a78f-fce1c2889070\") " Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.326930 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") pod \"c199ad99-a479-4d3f-a78f-fce1c2889070\" (UID: \"c199ad99-a479-4d3f-a78f-fce1c2889070\") " Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.326977 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c199ad99-a479-4d3f-a78f-fce1c2889070-httpd-run\") pod \"c199ad99-a479-4d3f-a78f-fce1c2889070\" (UID: \"c199ad99-a479-4d3f-a78f-fce1c2889070\") " Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.326998 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c199ad99-a479-4d3f-a78f-fce1c2889070-combined-ca-bundle\") pod \"c199ad99-a479-4d3f-a78f-fce1c2889070\" (UID: \"c199ad99-a479-4d3f-a78f-fce1c2889070\") " Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.327933 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c199ad99-a479-4d3f-a78f-fce1c2889070-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c199ad99-a479-4d3f-a78f-fce1c2889070" (UID: "c199ad99-a479-4d3f-a78f-fce1c2889070"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.331955 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c199ad99-a479-4d3f-a78f-fce1c2889070-logs" (OuterVolumeSpecName: "logs") pod "c199ad99-a479-4d3f-a78f-fce1c2889070" (UID: "c199ad99-a479-4d3f-a78f-fce1c2889070"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.338070 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c199ad99-a479-4d3f-a78f-fce1c2889070-kube-api-access-tqpz8" (OuterVolumeSpecName: "kube-api-access-tqpz8") pod "c199ad99-a479-4d3f-a78f-fce1c2889070" (UID: "c199ad99-a479-4d3f-a78f-fce1c2889070"). InnerVolumeSpecName "kube-api-access-tqpz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.344454 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c199ad99-a479-4d3f-a78f-fce1c2889070-scripts" (OuterVolumeSpecName: "scripts") pod "c199ad99-a479-4d3f-a78f-fce1c2889070" (UID: "c199ad99-a479-4d3f-a78f-fce1c2889070"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.347582 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "c199ad99-a479-4d3f-a78f-fce1c2889070" (UID: "c199ad99-a479-4d3f-a78f-fce1c2889070"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.371139 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538090-677fq"] Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.401670 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c199ad99-a479-4d3f-a78f-fce1c2889070-config-data" (OuterVolumeSpecName: "config-data") pod "c199ad99-a479-4d3f-a78f-fce1c2889070" (UID: "c199ad99-a479-4d3f-a78f-fce1c2889070"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.425825 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538090-677fq"] Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.429111 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqpz8\" (UniqueName: \"kubernetes.io/projected/c199ad99-a479-4d3f-a78f-fce1c2889070-kube-api-access-tqpz8\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.429131 4897 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c199ad99-a479-4d3f-a78f-fce1c2889070-logs\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.429140 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c199ad99-a479-4d3f-a78f-fce1c2889070-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.429149 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c199ad99-a479-4d3f-a78f-fce1c2889070-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.429176 4897 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.429185 4897 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c199ad99-a479-4d3f-a78f-fce1c2889070-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.437139 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"c199ad99-a479-4d3f-a78f-fce1c2889070","Type":"ContainerDied","Data":"47a4e2dfd9934f60fb8f6789b424e99159af60cd73089f745d337c5c4d4f3d4c"} Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.437195 4897 scope.go:117] "RemoveContainer" containerID="83f114246c751228ccae4606eacd654c641aace53762f92c046ad5fc99c26010" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.437361 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.448285 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7df779db98-ljwk8" event={"ID":"e0db6a4f-19e4-488c-bc45-9619565bdf57","Type":"ContainerStarted","Data":"6e255553f3ed1e4232ae63cf4d5aeb1055d0dea89f6310f549d732d58d643738"} Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.449877 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c199ad99-a479-4d3f-a78f-fce1c2889070-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c199ad99-a479-4d3f-a78f-fce1c2889070" (UID: "c199ad99-a479-4d3f-a78f-fce1c2889070"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.461440 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c199ad99-a479-4d3f-a78f-fce1c2889070-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c199ad99-a479-4d3f-a78f-fce1c2889070" (UID: "c199ad99-a479-4d3f-a78f-fce1c2889070"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.464927 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"ea8b2284-fafa-4fca-b367-9cffa5f5a201","Type":"ContainerStarted","Data":"b5d29fd8ec7fe249a3ec203e4cef0d21dacdd74c9da0ed6aba9a314f424d8328"} Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.466297 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.467096 4897 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.471372 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="ea8b2284-fafa-4fca-b367-9cffa5f5a201" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.167:9322/\": dial tcp 10.217.0.167:9322: connect: connection refused" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.473937 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538096-ws9qt" event={"ID":"6e94c0b2-21a6-496c-8188-dfcaf0d66b2b","Type":"ContainerDied","Data":"488a60779576cd01ac6884baae1de674651f0f5bf2089ac1b496442c30cb875d"} Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.473958 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538096-ws9qt" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.473968 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="488a60779576cd01ac6884baae1de674651f0f5bf2089ac1b496442c30cb875d" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.499079 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6fb67c45d-s75qr" event={"ID":"6102738c-6c77-48c6-87e1-67853cf8ce43","Type":"ContainerStarted","Data":"2fc3fb7a660268704953fa4bd24b93db1256492df8f5818ec8132b76f2ceb191"} Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.499981 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7df779db98-ljwk8" podStartSLOduration=26.499962855 podStartE2EDuration="26.499962855s" podCreationTimestamp="2026-02-28 13:37:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:38:17.477824776 +0000 UTC m=+1311.720145433" watchObservedRunningTime="2026-02-28 13:38:17.499962855 +0000 UTC m=+1311.742283512" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.529597 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-qc9bp" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.529791 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qc9bp" event={"ID":"99d7bd5a-52d0-4a8f-bd1d-542a957d815f","Type":"ContainerDied","Data":"674673298ab64739a2e39e1fc27af8cd4ce8053084d302ff1d53d768d1621c58"} Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.529821 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="674673298ab64739a2e39e1fc27af8cd4ce8053084d302ff1d53d768d1621c58" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.532664 4897 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c199ad99-a479-4d3f-a78f-fce1c2889070-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.532687 4897 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.532698 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c199ad99-a479-4d3f-a78f-fce1c2889070-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.564964 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=18.564943234 podStartE2EDuration="18.564943234s" podCreationTimestamp="2026-02-28 13:37:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:38:17.513837311 +0000 UTC m=+1311.756157968" watchObservedRunningTime="2026-02-28 13:38:17.564943234 +0000 UTC m=+1311.807263891" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.581678 
4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6fb67c45d-s75qr" podStartSLOduration=26.581662143 podStartE2EDuration="26.581662143s" podCreationTimestamp="2026-02-28 13:37:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:38:17.55468823 +0000 UTC m=+1311.797008887" watchObservedRunningTime="2026-02-28 13:38:17.581662143 +0000 UTC m=+1311.823982800" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.609131 4897 scope.go:117] "RemoveContainer" containerID="f21848a10ad018f1f2a68714c51863cc87a58caa808c5be74d9984f1ab7e3383" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.656369 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b6ff458d7-pwrsk"] Feb 28 13:38:17 crc kubenswrapper[4897]: E0228 13:38:17.656765 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99d7bd5a-52d0-4a8f-bd1d-542a957d815f" containerName="neutron-db-sync" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.656777 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="99d7bd5a-52d0-4a8f-bd1d-542a957d815f" containerName="neutron-db-sync" Feb 28 13:38:17 crc kubenswrapper[4897]: E0228 13:38:17.656796 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c199ad99-a479-4d3f-a78f-fce1c2889070" containerName="glance-log" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.656802 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="c199ad99-a479-4d3f-a78f-fce1c2889070" containerName="glance-log" Feb 28 13:38:17 crc kubenswrapper[4897]: E0228 13:38:17.656812 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c199ad99-a479-4d3f-a78f-fce1c2889070" containerName="glance-httpd" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.656818 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="c199ad99-a479-4d3f-a78f-fce1c2889070" 
containerName="glance-httpd" Feb 28 13:38:17 crc kubenswrapper[4897]: E0228 13:38:17.656830 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e94c0b2-21a6-496c-8188-dfcaf0d66b2b" containerName="oc" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.656836 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e94c0b2-21a6-496c-8188-dfcaf0d66b2b" containerName="oc" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.657016 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e94c0b2-21a6-496c-8188-dfcaf0d66b2b" containerName="oc" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.657036 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="c199ad99-a479-4d3f-a78f-fce1c2889070" containerName="glance-log" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.657048 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="99d7bd5a-52d0-4a8f-bd1d-542a957d815f" containerName="neutron-db-sync" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.657059 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="c199ad99-a479-4d3f-a78f-fce1c2889070" containerName="glance-httpd" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.658017 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b6ff458d7-pwrsk" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.667502 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6ff458d7-pwrsk"] Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.741627 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5b191fe6-5a13-4d4b-a98e-77e8cc5f5229-dns-swift-storage-0\") pod \"dnsmasq-dns-5b6ff458d7-pwrsk\" (UID: \"5b191fe6-5a13-4d4b-a98e-77e8cc5f5229\") " pod="openstack/dnsmasq-dns-5b6ff458d7-pwrsk" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.741688 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b191fe6-5a13-4d4b-a98e-77e8cc5f5229-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6ff458d7-pwrsk\" (UID: \"5b191fe6-5a13-4d4b-a98e-77e8cc5f5229\") " pod="openstack/dnsmasq-dns-5b6ff458d7-pwrsk" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.741762 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b191fe6-5a13-4d4b-a98e-77e8cc5f5229-config\") pod \"dnsmasq-dns-5b6ff458d7-pwrsk\" (UID: \"5b191fe6-5a13-4d4b-a98e-77e8cc5f5229\") " pod="openstack/dnsmasq-dns-5b6ff458d7-pwrsk" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.741804 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b191fe6-5a13-4d4b-a98e-77e8cc5f5229-dns-svc\") pod \"dnsmasq-dns-5b6ff458d7-pwrsk\" (UID: \"5b191fe6-5a13-4d4b-a98e-77e8cc5f5229\") " pod="openstack/dnsmasq-dns-5b6ff458d7-pwrsk" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.741842 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-dk6tw\" (UniqueName: \"kubernetes.io/projected/5b191fe6-5a13-4d4b-a98e-77e8cc5f5229-kube-api-access-dk6tw\") pod \"dnsmasq-dns-5b6ff458d7-pwrsk\" (UID: \"5b191fe6-5a13-4d4b-a98e-77e8cc5f5229\") " pod="openstack/dnsmasq-dns-5b6ff458d7-pwrsk" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.741893 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5b191fe6-5a13-4d4b-a98e-77e8cc5f5229-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6ff458d7-pwrsk\" (UID: \"5b191fe6-5a13-4d4b-a98e-77e8cc5f5229\") " pod="openstack/dnsmasq-dns-5b6ff458d7-pwrsk" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.803784 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-844df98d6-6ncv9"] Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.844700 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5b191fe6-5a13-4d4b-a98e-77e8cc5f5229-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6ff458d7-pwrsk\" (UID: \"5b191fe6-5a13-4d4b-a98e-77e8cc5f5229\") " pod="openstack/dnsmasq-dns-5b6ff458d7-pwrsk" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.844829 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5b191fe6-5a13-4d4b-a98e-77e8cc5f5229-dns-swift-storage-0\") pod \"dnsmasq-dns-5b6ff458d7-pwrsk\" (UID: \"5b191fe6-5a13-4d4b-a98e-77e8cc5f5229\") " pod="openstack/dnsmasq-dns-5b6ff458d7-pwrsk" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.844875 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b191fe6-5a13-4d4b-a98e-77e8cc5f5229-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6ff458d7-pwrsk\" (UID: \"5b191fe6-5a13-4d4b-a98e-77e8cc5f5229\") " 
pod="openstack/dnsmasq-dns-5b6ff458d7-pwrsk" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.844959 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b191fe6-5a13-4d4b-a98e-77e8cc5f5229-config\") pod \"dnsmasq-dns-5b6ff458d7-pwrsk\" (UID: \"5b191fe6-5a13-4d4b-a98e-77e8cc5f5229\") " pod="openstack/dnsmasq-dns-5b6ff458d7-pwrsk" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.844994 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b191fe6-5a13-4d4b-a98e-77e8cc5f5229-dns-svc\") pod \"dnsmasq-dns-5b6ff458d7-pwrsk\" (UID: \"5b191fe6-5a13-4d4b-a98e-77e8cc5f5229\") " pod="openstack/dnsmasq-dns-5b6ff458d7-pwrsk" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.845044 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk6tw\" (UniqueName: \"kubernetes.io/projected/5b191fe6-5a13-4d4b-a98e-77e8cc5f5229-kube-api-access-dk6tw\") pod \"dnsmasq-dns-5b6ff458d7-pwrsk\" (UID: \"5b191fe6-5a13-4d4b-a98e-77e8cc5f5229\") " pod="openstack/dnsmasq-dns-5b6ff458d7-pwrsk" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.846302 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5b191fe6-5a13-4d4b-a98e-77e8cc5f5229-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6ff458d7-pwrsk\" (UID: \"5b191fe6-5a13-4d4b-a98e-77e8cc5f5229\") " pod="openstack/dnsmasq-dns-5b6ff458d7-pwrsk" Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.849590 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5b191fe6-5a13-4d4b-a98e-77e8cc5f5229-dns-swift-storage-0\") pod \"dnsmasq-dns-5b6ff458d7-pwrsk\" (UID: \"5b191fe6-5a13-4d4b-a98e-77e8cc5f5229\") " pod="openstack/dnsmasq-dns-5b6ff458d7-pwrsk" Feb 28 13:38:17 crc 
kubenswrapper[4897]: I0228 13:38:17.851930 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-844df98d6-6ncv9"]
Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.852073 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-844df98d6-6ncv9"
Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.877732 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b191fe6-5a13-4d4b-a98e-77e8cc5f5229-config\") pod \"dnsmasq-dns-5b6ff458d7-pwrsk\" (UID: \"5b191fe6-5a13-4d4b-a98e-77e8cc5f5229\") " pod="openstack/dnsmasq-dns-5b6ff458d7-pwrsk"
Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.879990 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b191fe6-5a13-4d4b-a98e-77e8cc5f5229-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6ff458d7-pwrsk\" (UID: \"5b191fe6-5a13-4d4b-a98e-77e8cc5f5229\") " pod="openstack/dnsmasq-dns-5b6ff458d7-pwrsk"
Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.885006 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.885274 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-m6pcq"
Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.885491 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.887869 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.918360 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk6tw\" (UniqueName: \"kubernetes.io/projected/5b191fe6-5a13-4d4b-a98e-77e8cc5f5229-kube-api-access-dk6tw\") pod \"dnsmasq-dns-5b6ff458d7-pwrsk\" (UID: \"5b191fe6-5a13-4d4b-a98e-77e8cc5f5229\") " pod="openstack/dnsmasq-dns-5b6ff458d7-pwrsk"
Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.919719 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b191fe6-5a13-4d4b-a98e-77e8cc5f5229-dns-svc\") pod \"dnsmasq-dns-5b6ff458d7-pwrsk\" (UID: \"5b191fe6-5a13-4d4b-a98e-77e8cc5f5229\") " pod="openstack/dnsmasq-dns-5b6ff458d7-pwrsk"
Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.947479 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18-config\") pod \"neutron-844df98d6-6ncv9\" (UID: \"c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18\") " pod="openstack/neutron-844df98d6-6ncv9"
Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.947835 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tpsg\" (UniqueName: \"kubernetes.io/projected/c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18-kube-api-access-8tpsg\") pod \"neutron-844df98d6-6ncv9\" (UID: \"c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18\") " pod="openstack/neutron-844df98d6-6ncv9"
Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.947919 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18-combined-ca-bundle\") pod \"neutron-844df98d6-6ncv9\" (UID: \"c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18\") " pod="openstack/neutron-844df98d6-6ncv9"
Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.947948 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18-ovndb-tls-certs\") pod \"neutron-844df98d6-6ncv9\" (UID: \"c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18\") " pod="openstack/neutron-844df98d6-6ncv9"
Feb 28 13:38:17 crc kubenswrapper[4897]: I0228 13:38:17.947975 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18-httpd-config\") pod \"neutron-844df98d6-6ncv9\" (UID: \"c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18\") " pod="openstack/neutron-844df98d6-6ncv9"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.015191 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6ff458d7-pwrsk"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.035944 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.056470 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tpsg\" (UniqueName: \"kubernetes.io/projected/c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18-kube-api-access-8tpsg\") pod \"neutron-844df98d6-6ncv9\" (UID: \"c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18\") " pod="openstack/neutron-844df98d6-6ncv9"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.056574 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18-combined-ca-bundle\") pod \"neutron-844df98d6-6ncv9\" (UID: \"c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18\") " pod="openstack/neutron-844df98d6-6ncv9"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.056616 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18-ovndb-tls-certs\") pod \"neutron-844df98d6-6ncv9\" (UID: \"c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18\") " pod="openstack/neutron-844df98d6-6ncv9"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.056637 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18-httpd-config\") pod \"neutron-844df98d6-6ncv9\" (UID: \"c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18\") " pod="openstack/neutron-844df98d6-6ncv9"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.056696 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18-config\") pod \"neutron-844df98d6-6ncv9\" (UID: \"c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18\") " pod="openstack/neutron-844df98d6-6ncv9"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.075159 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18-config\") pod \"neutron-844df98d6-6ncv9\" (UID: \"c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18\") " pod="openstack/neutron-844df98d6-6ncv9"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.075636 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18-ovndb-tls-certs\") pod \"neutron-844df98d6-6ncv9\" (UID: \"c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18\") " pod="openstack/neutron-844df98d6-6ncv9"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.106137 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18-combined-ca-bundle\") pod \"neutron-844df98d6-6ncv9\" (UID: \"c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18\") " pod="openstack/neutron-844df98d6-6ncv9"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.117527 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tpsg\" (UniqueName: \"kubernetes.io/projected/c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18-kube-api-access-8tpsg\") pod \"neutron-844df98d6-6ncv9\" (UID: \"c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18\") " pod="openstack/neutron-844df98d6-6ncv9"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.131545 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18-httpd-config\") pod \"neutron-844df98d6-6ncv9\" (UID: \"c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18\") " pod="openstack/neutron-844df98d6-6ncv9"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.163927 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.183371 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.185016 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.192235 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.192805 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.192985 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.227248 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-844df98d6-6ncv9"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.261869 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/86b347d9-5a82-4e31-9ba3-1e5c82decb50-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"86b347d9-5a82-4e31-9ba3-1e5c82decb50\") " pod="openstack/glance-default-internal-api-0"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.261943 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b347d9-5a82-4e31-9ba3-1e5c82decb50-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"86b347d9-5a82-4e31-9ba3-1e5c82decb50\") " pod="openstack/glance-default-internal-api-0"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.262010 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86b347d9-5a82-4e31-9ba3-1e5c82decb50-config-data\") pod \"glance-default-internal-api-0\" (UID: \"86b347d9-5a82-4e31-9ba3-1e5c82decb50\") " pod="openstack/glance-default-internal-api-0"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.262053 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86b347d9-5a82-4e31-9ba3-1e5c82decb50-logs\") pod \"glance-default-internal-api-0\" (UID: \"86b347d9-5a82-4e31-9ba3-1e5c82decb50\") " pod="openstack/glance-default-internal-api-0"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.262088 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjmm8\" (UniqueName: \"kubernetes.io/projected/86b347d9-5a82-4e31-9ba3-1e5c82decb50-kube-api-access-kjmm8\") pod \"glance-default-internal-api-0\" (UID: \"86b347d9-5a82-4e31-9ba3-1e5c82decb50\") " pod="openstack/glance-default-internal-api-0"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.262112 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"86b347d9-5a82-4e31-9ba3-1e5c82decb50\") " pod="openstack/glance-default-internal-api-0"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.262138 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86b347d9-5a82-4e31-9ba3-1e5c82decb50-scripts\") pod \"glance-default-internal-api-0\" (UID: \"86b347d9-5a82-4e31-9ba3-1e5c82decb50\") " pod="openstack/glance-default-internal-api-0"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.262180 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86b347d9-5a82-4e31-9ba3-1e5c82decb50-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"86b347d9-5a82-4e31-9ba3-1e5c82decb50\") " pod="openstack/glance-default-internal-api-0"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.366569 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"86b347d9-5a82-4e31-9ba3-1e5c82decb50\") " pod="openstack/glance-default-internal-api-0"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.366643 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86b347d9-5a82-4e31-9ba3-1e5c82decb50-scripts\") pod \"glance-default-internal-api-0\" (UID: \"86b347d9-5a82-4e31-9ba3-1e5c82decb50\") " pod="openstack/glance-default-internal-api-0"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.366700 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86b347d9-5a82-4e31-9ba3-1e5c82decb50-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"86b347d9-5a82-4e31-9ba3-1e5c82decb50\") " pod="openstack/glance-default-internal-api-0"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.366764 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/86b347d9-5a82-4e31-9ba3-1e5c82decb50-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"86b347d9-5a82-4e31-9ba3-1e5c82decb50\") " pod="openstack/glance-default-internal-api-0"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.366811 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b347d9-5a82-4e31-9ba3-1e5c82decb50-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"86b347d9-5a82-4e31-9ba3-1e5c82decb50\") " pod="openstack/glance-default-internal-api-0"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.366852 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86b347d9-5a82-4e31-9ba3-1e5c82decb50-config-data\") pod \"glance-default-internal-api-0\" (UID: \"86b347d9-5a82-4e31-9ba3-1e5c82decb50\") " pod="openstack/glance-default-internal-api-0"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.366916 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86b347d9-5a82-4e31-9ba3-1e5c82decb50-logs\") pod \"glance-default-internal-api-0\" (UID: \"86b347d9-5a82-4e31-9ba3-1e5c82decb50\") " pod="openstack/glance-default-internal-api-0"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.366960 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjmm8\" (UniqueName: \"kubernetes.io/projected/86b347d9-5a82-4e31-9ba3-1e5c82decb50-kube-api-access-kjmm8\") pod \"glance-default-internal-api-0\" (UID: \"86b347d9-5a82-4e31-9ba3-1e5c82decb50\") " pod="openstack/glance-default-internal-api-0"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.367680 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/86b347d9-5a82-4e31-9ba3-1e5c82decb50-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"86b347d9-5a82-4e31-9ba3-1e5c82decb50\") " pod="openstack/glance-default-internal-api-0"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.367715 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"86b347d9-5a82-4e31-9ba3-1e5c82decb50\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.372547 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86b347d9-5a82-4e31-9ba3-1e5c82decb50-logs\") pod \"glance-default-internal-api-0\" (UID: \"86b347d9-5a82-4e31-9ba3-1e5c82decb50\") " pod="openstack/glance-default-internal-api-0"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.376068 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86b347d9-5a82-4e31-9ba3-1e5c82decb50-config-data\") pod \"glance-default-internal-api-0\" (UID: \"86b347d9-5a82-4e31-9ba3-1e5c82decb50\") " pod="openstack/glance-default-internal-api-0"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.376909 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b347d9-5a82-4e31-9ba3-1e5c82decb50-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"86b347d9-5a82-4e31-9ba3-1e5c82decb50\") " pod="openstack/glance-default-internal-api-0"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.382202 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86b347d9-5a82-4e31-9ba3-1e5c82decb50-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"86b347d9-5a82-4e31-9ba3-1e5c82decb50\") " pod="openstack/glance-default-internal-api-0"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.389141 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86b347d9-5a82-4e31-9ba3-1e5c82decb50-scripts\") pod \"glance-default-internal-api-0\" (UID: \"86b347d9-5a82-4e31-9ba3-1e5c82decb50\") " pod="openstack/glance-default-internal-api-0"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.408025 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjmm8\" (UniqueName: \"kubernetes.io/projected/86b347d9-5a82-4e31-9ba3-1e5c82decb50-kube-api-access-kjmm8\") pod \"glance-default-internal-api-0\" (UID: \"86b347d9-5a82-4e31-9ba3-1e5c82decb50\") " pod="openstack/glance-default-internal-api-0"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.475073 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="292167e9-1fa2-4fda-b4da-f112d69333b9" path="/var/lib/kubelet/pods/292167e9-1fa2-4fda-b4da-f112d69333b9/volumes"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.479031 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c199ad99-a479-4d3f-a78f-fce1c2889070" path="/var/lib/kubelet/pods/c199ad99-a479-4d3f-a78f-fce1c2889070/volumes"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.538250 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"86b347d9-5a82-4e31-9ba3-1e5c82decb50\") " pod="openstack/glance-default-internal-api-0"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.549415 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.645332 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6ff458d7-pwrsk"]
Feb 28 13:38:18 crc kubenswrapper[4897]: I0228 13:38:18.888563 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-844df98d6-6ncv9"]
Feb 28 13:38:19 crc kubenswrapper[4897]: I0228 13:38:19.667717 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0"
Feb 28 13:38:19 crc kubenswrapper[4897]: I0228 13:38:19.668827 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0"
Feb 28 13:38:19 crc kubenswrapper[4897]: I0228 13:38:19.901048 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-f69d796b5-nrscn"]
Feb 28 13:38:19 crc kubenswrapper[4897]: I0228 13:38:19.904038 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f69d796b5-nrscn"
Feb 28 13:38:19 crc kubenswrapper[4897]: I0228 13:38:19.909651 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Feb 28 13:38:19 crc kubenswrapper[4897]: I0228 13:38:19.909779 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Feb 28 13:38:19 crc kubenswrapper[4897]: I0228 13:38:19.915041 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f69d796b5-nrscn"]
Feb 28 13:38:20 crc kubenswrapper[4897]: I0228 13:38:20.099260 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5-ovndb-tls-certs\") pod \"neutron-f69d796b5-nrscn\" (UID: \"5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5\") " pod="openstack/neutron-f69d796b5-nrscn"
Feb 28 13:38:20 crc kubenswrapper[4897]: I0228 13:38:20.099407 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc9gl\" (UniqueName: \"kubernetes.io/projected/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5-kube-api-access-dc9gl\") pod \"neutron-f69d796b5-nrscn\" (UID: \"5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5\") " pod="openstack/neutron-f69d796b5-nrscn"
Feb 28 13:38:20 crc kubenswrapper[4897]: I0228 13:38:20.099477 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5-httpd-config\") pod \"neutron-f69d796b5-nrscn\" (UID: \"5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5\") " pod="openstack/neutron-f69d796b5-nrscn"
Feb 28 13:38:20 crc kubenswrapper[4897]: I0228 13:38:20.099513 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5-config\") pod \"neutron-f69d796b5-nrscn\" (UID: \"5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5\") " pod="openstack/neutron-f69d796b5-nrscn"
Feb 28 13:38:20 crc kubenswrapper[4897]: I0228 13:38:20.099540 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5-public-tls-certs\") pod \"neutron-f69d796b5-nrscn\" (UID: \"5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5\") " pod="openstack/neutron-f69d796b5-nrscn"
Feb 28 13:38:20 crc kubenswrapper[4897]: I0228 13:38:20.099567 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5-combined-ca-bundle\") pod \"neutron-f69d796b5-nrscn\" (UID: \"5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5\") " pod="openstack/neutron-f69d796b5-nrscn"
Feb 28 13:38:20 crc kubenswrapper[4897]: I0228 13:38:20.099670 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5-internal-tls-certs\") pod \"neutron-f69d796b5-nrscn\" (UID: \"5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5\") " pod="openstack/neutron-f69d796b5-nrscn"
Feb 28 13:38:20 crc kubenswrapper[4897]: I0228 13:38:20.201256 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5-internal-tls-certs\") pod \"neutron-f69d796b5-nrscn\" (UID: \"5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5\") " pod="openstack/neutron-f69d796b5-nrscn"
Feb 28 13:38:20 crc kubenswrapper[4897]: I0228 13:38:20.201358 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5-ovndb-tls-certs\") pod \"neutron-f69d796b5-nrscn\" (UID: \"5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5\") " pod="openstack/neutron-f69d796b5-nrscn"
Feb 28 13:38:20 crc kubenswrapper[4897]: I0228 13:38:20.201397 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc9gl\" (UniqueName: \"kubernetes.io/projected/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5-kube-api-access-dc9gl\") pod \"neutron-f69d796b5-nrscn\" (UID: \"5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5\") " pod="openstack/neutron-f69d796b5-nrscn"
Feb 28 13:38:20 crc kubenswrapper[4897]: I0228 13:38:20.201454 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5-httpd-config\") pod \"neutron-f69d796b5-nrscn\" (UID: \"5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5\") " pod="openstack/neutron-f69d796b5-nrscn"
Feb 28 13:38:20 crc kubenswrapper[4897]: I0228 13:38:20.201492 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5-config\") pod \"neutron-f69d796b5-nrscn\" (UID: \"5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5\") " pod="openstack/neutron-f69d796b5-nrscn"
Feb 28 13:38:20 crc kubenswrapper[4897]: I0228 13:38:20.201518 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5-public-tls-certs\") pod \"neutron-f69d796b5-nrscn\" (UID: \"5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5\") " pod="openstack/neutron-f69d796b5-nrscn"
Feb 28 13:38:20 crc kubenswrapper[4897]: I0228 13:38:20.201554 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5-combined-ca-bundle\") pod \"neutron-f69d796b5-nrscn\" (UID: \"5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5\") " pod="openstack/neutron-f69d796b5-nrscn"
Feb 28 13:38:20 crc kubenswrapper[4897]: I0228 13:38:20.207247 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5-ovndb-tls-certs\") pod \"neutron-f69d796b5-nrscn\" (UID: \"5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5\") " pod="openstack/neutron-f69d796b5-nrscn"
Feb 28 13:38:20 crc kubenswrapper[4897]: I0228 13:38:20.208396 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5-config\") pod \"neutron-f69d796b5-nrscn\" (UID: \"5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5\") " pod="openstack/neutron-f69d796b5-nrscn"
Feb 28 13:38:20 crc kubenswrapper[4897]: I0228 13:38:20.208825 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5-internal-tls-certs\") pod \"neutron-f69d796b5-nrscn\" (UID: \"5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5\") " pod="openstack/neutron-f69d796b5-nrscn"
Feb 28 13:38:20 crc kubenswrapper[4897]: I0228 13:38:20.208959 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5-public-tls-certs\") pod \"neutron-f69d796b5-nrscn\" (UID: \"5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5\") " pod="openstack/neutron-f69d796b5-nrscn"
Feb 28 13:38:20 crc kubenswrapper[4897]: I0228 13:38:20.210003 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5-combined-ca-bundle\") pod \"neutron-f69d796b5-nrscn\" (UID: \"5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5\") " pod="openstack/neutron-f69d796b5-nrscn"
Feb 28 13:38:20 crc kubenswrapper[4897]: I0228 13:38:20.221775 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5-httpd-config\") pod \"neutron-f69d796b5-nrscn\" (UID: \"5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5\") " pod="openstack/neutron-f69d796b5-nrscn"
Feb 28 13:38:20 crc kubenswrapper[4897]: I0228 13:38:20.225717 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc9gl\" (UniqueName: \"kubernetes.io/projected/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5-kube-api-access-dc9gl\") pod \"neutron-f69d796b5-nrscn\" (UID: \"5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5\") " pod="openstack/neutron-f69d796b5-nrscn"
Feb 28 13:38:20 crc kubenswrapper[4897]: I0228 13:38:20.227443 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f69d796b5-nrscn"
Feb 28 13:38:20 crc kubenswrapper[4897]: I0228 13:38:20.597240 4897 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 28 13:38:20 crc kubenswrapper[4897]: I0228 13:38:20.709527 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/watcher-api-0" podUID="ea8b2284-fafa-4fca-b367-9cffa5f5a201" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.167:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 28 13:38:21 crc kubenswrapper[4897]: I0228 13:38:21.159135 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0"
Feb 28 13:38:22 crc kubenswrapper[4897]: I0228 13:38:22.116754 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6fb67c45d-s75qr"
Feb 28 13:38:22 crc kubenswrapper[4897]: I0228 13:38:22.117298 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6fb67c45d-s75qr"
Feb 28 13:38:22 crc kubenswrapper[4897]: I0228 13:38:22.215929 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7df779db98-ljwk8"
Feb 28 13:38:22 crc kubenswrapper[4897]: I0228 13:38:22.215981 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7df779db98-ljwk8"
Feb 28 13:38:22 crc kubenswrapper[4897]: I0228 13:38:22.332537 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 28 13:38:22 crc kubenswrapper[4897]: I0228 13:38:22.425920 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f69d796b5-nrscn"]
Feb 28 13:38:22 crc kubenswrapper[4897]: W0228 13:38:22.467694 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cd4f6a6_5d8b_4f37_9740_bf96116a7bc5.slice/crio-82ec60af186739b6a58d2b22c114c80d8ee2a0bbf4d170076bb8711d3be64303 WatchSource:0}: Error finding container 82ec60af186739b6a58d2b22c114c80d8ee2a0bbf4d170076bb8711d3be64303: Status 404 returned error can't find the container with id 82ec60af186739b6a58d2b22c114c80d8ee2a0bbf4d170076bb8711d3be64303
Feb 28 13:38:22 crc kubenswrapper[4897]: I0228 13:38:22.634694 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7vjm5" event={"ID":"5fc315f1-a65d-4ba7-aa89-69ffe04b53a6","Type":"ContainerStarted","Data":"35a3970f5e13a727a746265694f0710ca8239257e87518d2964beb9c3efddce0"}
Feb 28 13:38:22 crc kubenswrapper[4897]: I0228 13:38:22.647766 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"86b347d9-5a82-4e31-9ba3-1e5c82decb50","Type":"ContainerStarted","Data":"6e7de237873637cac15cbc6809afb0378b361a48df777955256a05be32a9ec9c"}
Feb 28 13:38:22 crc kubenswrapper[4897]: I0228 13:38:22.661191 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-844df98d6-6ncv9" event={"ID":"c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18","Type":"ContainerStarted","Data":"c04e0114733fac68182304cf39bae4de83471321b325d05b9b29415deac0c99a"}
Feb 28 13:38:22 crc kubenswrapper[4897]: I0228 13:38:22.661244 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-844df98d6-6ncv9" event={"ID":"c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18","Type":"ContainerStarted","Data":"a3070bc88395caf61e55cc0b48b8ca2fc46355dac9c7552d210e244136fb0270"}
Feb 28 13:38:22 crc kubenswrapper[4897]: I0228 13:38:22.676365 4897 generic.go:334] "Generic (PLEG): container finished" podID="5b191fe6-5a13-4d4b-a98e-77e8cc5f5229" containerID="79879f7d23f947cf3f1cecd5f161ce9440e00a31303207fb4091bd4773903cbb" exitCode=0
Feb 28 13:38:22 crc kubenswrapper[4897]: I0228 13:38:22.676496 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6ff458d7-pwrsk" event={"ID":"5b191fe6-5a13-4d4b-a98e-77e8cc5f5229","Type":"ContainerDied","Data":"79879f7d23f947cf3f1cecd5f161ce9440e00a31303207fb4091bd4773903cbb"}
Feb 28 13:38:22 crc kubenswrapper[4897]: I0228 13:38:22.676569 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6ff458d7-pwrsk" event={"ID":"5b191fe6-5a13-4d4b-a98e-77e8cc5f5229","Type":"ContainerStarted","Data":"1d002e1ba3bec9680784a9894beba297ff838aee2b3039214c799b9cd01fb818"}
Feb 28 13:38:22 crc kubenswrapper[4897]: I0228 13:38:22.690268 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9910b644-86b9-44e7-856e-4fbaf1d1a740","Type":"ContainerStarted","Data":"d73ff1abc0cf68fa36395578360725cbca1bd6ab9153cbbc88ebf47152ff83bc"}
Feb 28 13:38:22 crc kubenswrapper[4897]: I0228 13:38:22.700714 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"2ff09d8c-69de-4c11-8e94-90fce8f42387","Type":"ContainerStarted","Data":"7904a3f40492adfac8f92568a394efa044ff0272bcfa724e20ae0aa6404e1333"}
Feb 28 13:38:22 crc kubenswrapper[4897]: I0228 13:38:22.714121 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-7vjm5" podStartSLOduration=7.410417353 podStartE2EDuration="39.714099804s" podCreationTimestamp="2026-02-28 13:37:43 +0000 UTC" firstStartedPulling="2026-02-28 13:37:44.865390844 +0000 UTC m=+1279.107711501" lastFinishedPulling="2026-02-28 13:38:17.169073295 +0000 UTC m=+1311.411393952" observedRunningTime="2026-02-28 13:38:22.655586291 +0000 UTC m=+1316.897906968" watchObservedRunningTime="2026-02-28 13:38:22.714099804 +0000 UTC m=+1316.956420461"
Feb 28 13:38:22 crc kubenswrapper[4897]: I0228 13:38:22.719456 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538098-9rtv9" event={"ID":"bbeef463-3901-42c4-81ed-d97e793fb8b5","Type":"ContainerStarted","Data":"5686f151e991ecb04d64a39f69be073524971284a598c4c811cc8cbfaada4cbb"}
Feb 28 13:38:22 crc kubenswrapper[4897]: I0228 13:38:22.767499 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"2b88f822-8f2a-473a-b388-b144a37ba4f0","Type":"ContainerStarted","Data":"26a3f9ee7928f0abd19f88b38a2ed57fd3b52dd242cbfbf71f2750617f194561"}
Feb 28 13:38:22 crc kubenswrapper[4897]: I0228 13:38:22.801522 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7aff986-c99b-43a7-afc8-b9194ce17385","Type":"ContainerStarted","Data":"9e34f2e6495c1fe81abdf0d50a40373b2771813b4d6d5371f7966b9865bf9d36"}
Feb 28 13:38:22 crc kubenswrapper[4897]: I0228 13:38:22.817715 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=21.817694925 podStartE2EDuration="21.817694925s" podCreationTimestamp="2026-02-28 13:38:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:38:22.752079049 +0000 UTC m=+1316.994399706" watchObservedRunningTime="2026-02-28 13:38:22.817694925 +0000 UTC m=+1317.060015582"
Feb 28 13:38:22 crc kubenswrapper[4897]: I0228 13:38:22.828719 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f69d796b5-nrscn" event={"ID":"5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5","Type":"ContainerStarted","Data":"82ec60af186739b6a58d2b22c114c80d8ee2a0bbf4d170076bb8711d3be64303"}
Feb 28 13:38:22 crc kubenswrapper[4897]: I0228 13:38:22.829558 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=20.821490689 podStartE2EDuration="23.829538849s" podCreationTimestamp="2026-02-28 13:37:59 +0000 UTC" firstStartedPulling="2026-02-28 13:38:14.159391503 +0000 UTC m=+1308.401712160" lastFinishedPulling="2026-02-28 13:38:17.167439663 +0000 UTC m=+1311.409760320" observedRunningTime="2026-02-28 13:38:22.775946642 +0000 UTC m=+1317.018267299" watchObservedRunningTime="2026-02-28 13:38:22.829538849 +0000 UTC m=+1317.071859506"
Feb 28 13:38:22 crc kubenswrapper[4897]: I0228 13:38:22.843774 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=20.853868661 podStartE2EDuration="23.843754334s" podCreationTimestamp="2026-02-28 13:37:59 +0000 UTC" firstStartedPulling="2026-02-28 13:38:14.178693949 +0000 UTC m=+1308.421014616" lastFinishedPulling="2026-02-28 13:38:17.168579642 +0000 UTC m=+1311.410900289" observedRunningTime="2026-02-28 13:38:22.807059562 +0000 UTC m=+1317.049380219" watchObservedRunningTime="2026-02-28 13:38:22.843754334 +0000 UTC m=+1317.086074991"
Feb 28 13:38:22 crc kubenswrapper[4897]: I0228 13:38:22.899024 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29538098-9rtv9" podStartSLOduration=20.137097036 podStartE2EDuration="22.899000603s" podCreationTimestamp="2026-02-28 13:38:00 +0000 UTC" firstStartedPulling="2026-02-28 13:38:14.411195021 +0000 UTC m=+1308.653515688" lastFinishedPulling="2026-02-28 13:38:17.173098598 +0000 UTC m=+1311.415419255" observedRunningTime="2026-02-28 13:38:22.860510745 +0000 UTC m=+1317.102831402" watchObservedRunningTime="2026-02-28 13:38:22.899000603 +0000 UTC m=+1317.141321250"
Feb 28 13:38:23 crc kubenswrapper[4897]: I0228 13:38:23.197355 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-68bc5769f5-kt85c"
Feb 28 13:38:23 crc kubenswrapper[4897]: I0228 13:38:23.914356 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"86b347d9-5a82-4e31-9ba3-1e5c82decb50","Type":"ContainerStarted","Data":"1555e57459a95ac8ccff278547b90a551e071926a04b87210994bce003d28338"}
Feb 28 13:38:23 crc kubenswrapper[4897]: I0228 13:38:23.939091 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6ff458d7-pwrsk" event={"ID":"5b191fe6-5a13-4d4b-a98e-77e8cc5f5229","Type":"ContainerStarted","Data":"c369d48695251a12a1e4226d870836833d3a2baec53e33d84158811f1de518f0"}
Feb 28 13:38:23 crc kubenswrapper[4897]: I0228 13:38:23.939486 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b6ff458d7-pwrsk"
Feb 28 13:38:23 crc kubenswrapper[4897]: I0228 13:38:23.953190 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-844df98d6-6ncv9" event={"ID":"c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18","Type":"ContainerStarted","Data":"0dc24af0caa37581cc0a0f08f82404e8d8c243b3501baf511950cf7edd705dba"}
Feb 28 13:38:23 crc kubenswrapper[4897]: I0228 13:38:23.953607 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-844df98d6-6ncv9"
Feb 28 13:38:23 crc kubenswrapper[4897]: I0228 13:38:23.973709 4897 generic.go:334] "Generic (PLEG): container finished" podID="bbeef463-3901-42c4-81ed-d97e793fb8b5"
containerID="5686f151e991ecb04d64a39f69be073524971284a598c4c811cc8cbfaada4cbb" exitCode=0 Feb 28 13:38:23 crc kubenswrapper[4897]: I0228 13:38:23.974104 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538098-9rtv9" event={"ID":"bbeef463-3901-42c4-81ed-d97e793fb8b5","Type":"ContainerDied","Data":"5686f151e991ecb04d64a39f69be073524971284a598c4c811cc8cbfaada4cbb"} Feb 28 13:38:24 crc kubenswrapper[4897]: I0228 13:38:24.000781 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b6ff458d7-pwrsk" podStartSLOduration=7.000764685 podStartE2EDuration="7.000764685s" podCreationTimestamp="2026-02-28 13:38:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:38:23.997558343 +0000 UTC m=+1318.239878990" watchObservedRunningTime="2026-02-28 13:38:24.000764685 +0000 UTC m=+1318.243085332" Feb 28 13:38:24 crc kubenswrapper[4897]: I0228 13:38:24.011568 4897 generic.go:334] "Generic (PLEG): container finished" podID="8eb2bcb4-6f6f-4a44-813d-d5e2e2597265" containerID="1f349f8a5bfb5ee92815ca4f6ca7875636abfa533848da899aafd240971f601a" exitCode=0 Feb 28 13:38:24 crc kubenswrapper[4897]: I0228 13:38:24.011654 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9lkmb" event={"ID":"8eb2bcb4-6f6f-4a44-813d-d5e2e2597265","Type":"ContainerDied","Data":"1f349f8a5bfb5ee92815ca4f6ca7875636abfa533848da899aafd240971f601a"} Feb 28 13:38:24 crc kubenswrapper[4897]: I0228 13:38:24.040399 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-844df98d6-6ncv9" podStartSLOduration=7.040381253 podStartE2EDuration="7.040381253s" podCreationTimestamp="2026-02-28 13:38:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:38:24.024777332 
+0000 UTC m=+1318.267097999" watchObservedRunningTime="2026-02-28 13:38:24.040381253 +0000 UTC m=+1318.282701910" Feb 28 13:38:24 crc kubenswrapper[4897]: I0228 13:38:24.055574 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f69d796b5-nrscn" event={"ID":"5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5","Type":"ContainerStarted","Data":"23603a0483aa0055c1819b9fe7b8a0e06443ac977b7119323f59b08475d6dea8"} Feb 28 13:38:24 crc kubenswrapper[4897]: I0228 13:38:24.055796 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-f69d796b5-nrscn" Feb 28 13:38:24 crc kubenswrapper[4897]: I0228 13:38:24.055858 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f69d796b5-nrscn" event={"ID":"5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5","Type":"ContainerStarted","Data":"3eebac88a33785fae6a02302def0e90cc3817137ba470bb437e8d35d226997de"} Feb 28 13:38:24 crc kubenswrapper[4897]: I0228 13:38:24.112596 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-f69d796b5-nrscn" podStartSLOduration=5.112579608 podStartE2EDuration="5.112579608s" podCreationTimestamp="2026-02-28 13:38:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:38:24.106100681 +0000 UTC m=+1318.348421338" watchObservedRunningTime="2026-02-28 13:38:24.112579608 +0000 UTC m=+1318.354900265" Feb 28 13:38:24 crc kubenswrapper[4897]: I0228 13:38:24.632945 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 28 13:38:25 crc kubenswrapper[4897]: I0228 13:38:25.067034 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fgtj6" event={"ID":"661850a9-a877-476b-b3ae-a6c6f3b3676a","Type":"ContainerStarted","Data":"055a592e8d99d6446218e3f7bed61affb829a0dca995b9cbcfc03dbe444b4339"} Feb 28 13:38:25 crc kubenswrapper[4897]: I0228 
13:38:25.072696 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"86b347d9-5a82-4e31-9ba3-1e5c82decb50","Type":"ContainerStarted","Data":"899e7d75e84c6c9f8155f00fe9b148671cdfafe1a1d82293214800ecd06e3f52"} Feb 28 13:38:25 crc kubenswrapper[4897]: E0228 13:38:25.077405 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 28 13:38:25 crc kubenswrapper[4897]: E0228 13:38:25.077545 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7wpnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-29kqk_openshift-marketplace(dbe86f80-68e4-4170-8801-cea07c362d5c): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 13:38:25 crc kubenswrapper[4897]: E0228 13:38:25.078745 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:38:25 crc kubenswrapper[4897]: I0228 13:38:25.093359 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-fgtj6" podStartSLOduration=2.371038053 podStartE2EDuration="42.093337792s" podCreationTimestamp="2026-02-28 13:37:43 +0000 UTC" firstStartedPulling="2026-02-28 13:37:44.824439071 +0000 UTC m=+1279.066759728" lastFinishedPulling="2026-02-28 13:38:24.54673881 +0000 UTC m=+1318.789059467" observedRunningTime="2026-02-28 13:38:25.085008178 +0000 UTC m=+1319.327328835" watchObservedRunningTime="2026-02-28 13:38:25.093337792 +0000 UTC m=+1319.335658449" Feb 28 13:38:25 crc kubenswrapper[4897]: I0228 13:38:25.109360 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.109304302 podStartE2EDuration="8.109304302s" podCreationTimestamp="2026-02-28 13:38:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:38:25.105783971 +0000 UTC m=+1319.348104628" watchObservedRunningTime="2026-02-28 13:38:25.109304302 +0000 UTC m=+1319.351624959" Feb 28 13:38:25 crc kubenswrapper[4897]: I0228 13:38:25.678362 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-9lkmb" Feb 28 13:38:25 crc kubenswrapper[4897]: I0228 13:38:25.684300 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538098-9rtv9" Feb 28 13:38:25 crc kubenswrapper[4897]: I0228 13:38:25.741647 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8eb2bcb4-6f6f-4a44-813d-d5e2e2597265-credential-keys\") pod \"8eb2bcb4-6f6f-4a44-813d-d5e2e2597265\" (UID: \"8eb2bcb4-6f6f-4a44-813d-d5e2e2597265\") " Feb 28 13:38:25 crc kubenswrapper[4897]: I0228 13:38:25.741727 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eb2bcb4-6f6f-4a44-813d-d5e2e2597265-config-data\") pod \"8eb2bcb4-6f6f-4a44-813d-d5e2e2597265\" (UID: \"8eb2bcb4-6f6f-4a44-813d-d5e2e2597265\") " Feb 28 13:38:25 crc kubenswrapper[4897]: I0228 13:38:25.741804 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dg9v\" (UniqueName: \"kubernetes.io/projected/8eb2bcb4-6f6f-4a44-813d-d5e2e2597265-kube-api-access-7dg9v\") pod \"8eb2bcb4-6f6f-4a44-813d-d5e2e2597265\" (UID: \"8eb2bcb4-6f6f-4a44-813d-d5e2e2597265\") " Feb 28 13:38:25 crc kubenswrapper[4897]: I0228 13:38:25.741892 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8eb2bcb4-6f6f-4a44-813d-d5e2e2597265-scripts\") pod \"8eb2bcb4-6f6f-4a44-813d-d5e2e2597265\" (UID: \"8eb2bcb4-6f6f-4a44-813d-d5e2e2597265\") " Feb 28 13:38:25 crc kubenswrapper[4897]: I0228 13:38:25.741918 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8eb2bcb4-6f6f-4a44-813d-d5e2e2597265-fernet-keys\") pod \"8eb2bcb4-6f6f-4a44-813d-d5e2e2597265\" (UID: \"8eb2bcb4-6f6f-4a44-813d-d5e2e2597265\") " Feb 28 13:38:25 crc kubenswrapper[4897]: I0228 13:38:25.741994 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/8eb2bcb4-6f6f-4a44-813d-d5e2e2597265-combined-ca-bundle\") pod \"8eb2bcb4-6f6f-4a44-813d-d5e2e2597265\" (UID: \"8eb2bcb4-6f6f-4a44-813d-d5e2e2597265\") " Feb 28 13:38:25 crc kubenswrapper[4897]: I0228 13:38:25.742028 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6mzm\" (UniqueName: \"kubernetes.io/projected/bbeef463-3901-42c4-81ed-d97e793fb8b5-kube-api-access-v6mzm\") pod \"bbeef463-3901-42c4-81ed-d97e793fb8b5\" (UID: \"bbeef463-3901-42c4-81ed-d97e793fb8b5\") " Feb 28 13:38:25 crc kubenswrapper[4897]: I0228 13:38:25.747621 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eb2bcb4-6f6f-4a44-813d-d5e2e2597265-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8eb2bcb4-6f6f-4a44-813d-d5e2e2597265" (UID: "8eb2bcb4-6f6f-4a44-813d-d5e2e2597265"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:25 crc kubenswrapper[4897]: I0228 13:38:25.754409 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbeef463-3901-42c4-81ed-d97e793fb8b5-kube-api-access-v6mzm" (OuterVolumeSpecName: "kube-api-access-v6mzm") pod "bbeef463-3901-42c4-81ed-d97e793fb8b5" (UID: "bbeef463-3901-42c4-81ed-d97e793fb8b5"). InnerVolumeSpecName "kube-api-access-v6mzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:38:25 crc kubenswrapper[4897]: I0228 13:38:25.759582 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eb2bcb4-6f6f-4a44-813d-d5e2e2597265-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "8eb2bcb4-6f6f-4a44-813d-d5e2e2597265" (UID: "8eb2bcb4-6f6f-4a44-813d-d5e2e2597265"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:25 crc kubenswrapper[4897]: I0228 13:38:25.774006 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eb2bcb4-6f6f-4a44-813d-d5e2e2597265-scripts" (OuterVolumeSpecName: "scripts") pod "8eb2bcb4-6f6f-4a44-813d-d5e2e2597265" (UID: "8eb2bcb4-6f6f-4a44-813d-d5e2e2597265"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:25 crc kubenswrapper[4897]: I0228 13:38:25.786877 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eb2bcb4-6f6f-4a44-813d-d5e2e2597265-kube-api-access-7dg9v" (OuterVolumeSpecName: "kube-api-access-7dg9v") pod "8eb2bcb4-6f6f-4a44-813d-d5e2e2597265" (UID: "8eb2bcb4-6f6f-4a44-813d-d5e2e2597265"). InnerVolumeSpecName "kube-api-access-7dg9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:38:25 crc kubenswrapper[4897]: I0228 13:38:25.819236 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eb2bcb4-6f6f-4a44-813d-d5e2e2597265-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8eb2bcb4-6f6f-4a44-813d-d5e2e2597265" (UID: "8eb2bcb4-6f6f-4a44-813d-d5e2e2597265"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:25 crc kubenswrapper[4897]: I0228 13:38:25.843732 4897 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8eb2bcb4-6f6f-4a44-813d-d5e2e2597265-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:25 crc kubenswrapper[4897]: I0228 13:38:25.843765 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dg9v\" (UniqueName: \"kubernetes.io/projected/8eb2bcb4-6f6f-4a44-813d-d5e2e2597265-kube-api-access-7dg9v\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:25 crc kubenswrapper[4897]: I0228 13:38:25.844013 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8eb2bcb4-6f6f-4a44-813d-d5e2e2597265-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:25 crc kubenswrapper[4897]: I0228 13:38:25.844022 4897 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8eb2bcb4-6f6f-4a44-813d-d5e2e2597265-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:25 crc kubenswrapper[4897]: I0228 13:38:25.844032 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eb2bcb4-6f6f-4a44-813d-d5e2e2597265-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:25 crc kubenswrapper[4897]: I0228 13:38:25.844041 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6mzm\" (UniqueName: \"kubernetes.io/projected/bbeef463-3901-42c4-81ed-d97e793fb8b5-kube-api-access-v6mzm\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:25 crc kubenswrapper[4897]: I0228 13:38:25.865720 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eb2bcb4-6f6f-4a44-813d-d5e2e2597265-config-data" (OuterVolumeSpecName: "config-data") pod "8eb2bcb4-6f6f-4a44-813d-d5e2e2597265" (UID: 
"8eb2bcb4-6f6f-4a44-813d-d5e2e2597265"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:25 crc kubenswrapper[4897]: I0228 13:38:25.944170 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538092-vc49k"] Feb 28 13:38:25 crc kubenswrapper[4897]: I0228 13:38:25.959300 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eb2bcb4-6f6f-4a44-813d-d5e2e2597265-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:25 crc kubenswrapper[4897]: I0228 13:38:25.966901 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538092-vc49k"] Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.098775 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538098-9rtv9" event={"ID":"bbeef463-3901-42c4-81ed-d97e793fb8b5","Type":"ContainerDied","Data":"3a3f3f17049f1047a96375f5a5a92a47a942ea51f6b0772ebc77fe8b598cefd2"} Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.098814 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a3f3f17049f1047a96375f5a5a92a47a942ea51f6b0772ebc77fe8b598cefd2" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.098949 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538098-9rtv9" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.113728 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-9lkmb" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.115375 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9lkmb" event={"ID":"8eb2bcb4-6f6f-4a44-813d-d5e2e2597265","Type":"ContainerDied","Data":"700601762ef78c9a67e22ab661e5c345971e43b67a67b2bcc920d4d85487ffc8"} Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.115411 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="700601762ef78c9a67e22ab661e5c345971e43b67a67b2bcc920d4d85487ffc8" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.210365 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-d5c8f94c5-9sc2w"] Feb 28 13:38:26 crc kubenswrapper[4897]: E0228 13:38:26.210777 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbeef463-3901-42c4-81ed-d97e793fb8b5" containerName="oc" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.210793 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbeef463-3901-42c4-81ed-d97e793fb8b5" containerName="oc" Feb 28 13:38:26 crc kubenswrapper[4897]: E0228 13:38:26.210817 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eb2bcb4-6f6f-4a44-813d-d5e2e2597265" containerName="keystone-bootstrap" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.210824 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eb2bcb4-6f6f-4a44-813d-d5e2e2597265" containerName="keystone-bootstrap" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.211006 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eb2bcb4-6f6f-4a44-813d-d5e2e2597265" containerName="keystone-bootstrap" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.211024 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbeef463-3901-42c4-81ed-d97e793fb8b5" containerName="oc" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.211810 4897 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d5c8f94c5-9sc2w" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.216702 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.216951 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.217090 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.217159 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-qxw9x" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.217261 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.217683 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.241359 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d5c8f94c5-9sc2w"] Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.375739 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7c377e3-d32d-49da-801c-155853ae1d70-config-data\") pod \"keystone-d5c8f94c5-9sc2w\" (UID: \"b7c377e3-d32d-49da-801c-155853ae1d70\") " pod="openstack/keystone-d5c8f94c5-9sc2w" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.375867 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8562\" (UniqueName: \"kubernetes.io/projected/b7c377e3-d32d-49da-801c-155853ae1d70-kube-api-access-s8562\") pod \"keystone-d5c8f94c5-9sc2w\" (UID: 
\"b7c377e3-d32d-49da-801c-155853ae1d70\") " pod="openstack/keystone-d5c8f94c5-9sc2w" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.376143 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7c377e3-d32d-49da-801c-155853ae1d70-scripts\") pod \"keystone-d5c8f94c5-9sc2w\" (UID: \"b7c377e3-d32d-49da-801c-155853ae1d70\") " pod="openstack/keystone-d5c8f94c5-9sc2w" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.376256 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b7c377e3-d32d-49da-801c-155853ae1d70-fernet-keys\") pod \"keystone-d5c8f94c5-9sc2w\" (UID: \"b7c377e3-d32d-49da-801c-155853ae1d70\") " pod="openstack/keystone-d5c8f94c5-9sc2w" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.376300 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7c377e3-d32d-49da-801c-155853ae1d70-internal-tls-certs\") pod \"keystone-d5c8f94c5-9sc2w\" (UID: \"b7c377e3-d32d-49da-801c-155853ae1d70\") " pod="openstack/keystone-d5c8f94c5-9sc2w" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.376335 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7c377e3-d32d-49da-801c-155853ae1d70-public-tls-certs\") pod \"keystone-d5c8f94c5-9sc2w\" (UID: \"b7c377e3-d32d-49da-801c-155853ae1d70\") " pod="openstack/keystone-d5c8f94c5-9sc2w" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.376355 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7c377e3-d32d-49da-801c-155853ae1d70-combined-ca-bundle\") pod \"keystone-d5c8f94c5-9sc2w\" (UID: 
\"b7c377e3-d32d-49da-801c-155853ae1d70\") " pod="openstack/keystone-d5c8f94c5-9sc2w" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.376406 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b7c377e3-d32d-49da-801c-155853ae1d70-credential-keys\") pod \"keystone-d5c8f94c5-9sc2w\" (UID: \"b7c377e3-d32d-49da-801c-155853ae1d70\") " pod="openstack/keystone-d5c8f94c5-9sc2w" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.480372 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7c377e3-d32d-49da-801c-155853ae1d70-config-data\") pod \"keystone-d5c8f94c5-9sc2w\" (UID: \"b7c377e3-d32d-49da-801c-155853ae1d70\") " pod="openstack/keystone-d5c8f94c5-9sc2w" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.480409 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8562\" (UniqueName: \"kubernetes.io/projected/b7c377e3-d32d-49da-801c-155853ae1d70-kube-api-access-s8562\") pod \"keystone-d5c8f94c5-9sc2w\" (UID: \"b7c377e3-d32d-49da-801c-155853ae1d70\") " pod="openstack/keystone-d5c8f94c5-9sc2w" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.480500 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7c377e3-d32d-49da-801c-155853ae1d70-scripts\") pod \"keystone-d5c8f94c5-9sc2w\" (UID: \"b7c377e3-d32d-49da-801c-155853ae1d70\") " pod="openstack/keystone-d5c8f94c5-9sc2w" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.484609 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b7c377e3-d32d-49da-801c-155853ae1d70-fernet-keys\") pod \"keystone-d5c8f94c5-9sc2w\" (UID: \"b7c377e3-d32d-49da-801c-155853ae1d70\") " pod="openstack/keystone-d5c8f94c5-9sc2w" Feb 28 
13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.484718 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7c377e3-d32d-49da-801c-155853ae1d70-internal-tls-certs\") pod \"keystone-d5c8f94c5-9sc2w\" (UID: \"b7c377e3-d32d-49da-801c-155853ae1d70\") " pod="openstack/keystone-d5c8f94c5-9sc2w" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.484793 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7c377e3-d32d-49da-801c-155853ae1d70-public-tls-certs\") pod \"keystone-d5c8f94c5-9sc2w\" (UID: \"b7c377e3-d32d-49da-801c-155853ae1d70\") " pod="openstack/keystone-d5c8f94c5-9sc2w" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.484826 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7c377e3-d32d-49da-801c-155853ae1d70-combined-ca-bundle\") pod \"keystone-d5c8f94c5-9sc2w\" (UID: \"b7c377e3-d32d-49da-801c-155853ae1d70\") " pod="openstack/keystone-d5c8f94c5-9sc2w" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.484926 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b7c377e3-d32d-49da-801c-155853ae1d70-credential-keys\") pod \"keystone-d5c8f94c5-9sc2w\" (UID: \"b7c377e3-d32d-49da-801c-155853ae1d70\") " pod="openstack/keystone-d5c8f94c5-9sc2w" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.485185 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca5dfb32-b2a8-49ee-bdbb-f61ea61473d0" path="/var/lib/kubelet/pods/ca5dfb32-b2a8-49ee-bdbb-f61ea61473d0/volumes" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.485613 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 
13:38:26.485740 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.489497 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.489724 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.489847 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.499539 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7c377e3-d32d-49da-801c-155853ae1d70-scripts\") pod \"keystone-d5c8f94c5-9sc2w\" (UID: \"b7c377e3-d32d-49da-801c-155853ae1d70\") " pod="openstack/keystone-d5c8f94c5-9sc2w" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.499715 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7c377e3-d32d-49da-801c-155853ae1d70-combined-ca-bundle\") pod \"keystone-d5c8f94c5-9sc2w\" (UID: \"b7c377e3-d32d-49da-801c-155853ae1d70\") " pod="openstack/keystone-d5c8f94c5-9sc2w" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.504811 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7c377e3-d32d-49da-801c-155853ae1d70-config-data\") pod \"keystone-d5c8f94c5-9sc2w\" (UID: \"b7c377e3-d32d-49da-801c-155853ae1d70\") " pod="openstack/keystone-d5c8f94c5-9sc2w" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.507804 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b7c377e3-d32d-49da-801c-155853ae1d70-fernet-keys\") pod \"keystone-d5c8f94c5-9sc2w\" (UID: 
\"b7c377e3-d32d-49da-801c-155853ae1d70\") " pod="openstack/keystone-d5c8f94c5-9sc2w" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.508656 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7c377e3-d32d-49da-801c-155853ae1d70-public-tls-certs\") pod \"keystone-d5c8f94c5-9sc2w\" (UID: \"b7c377e3-d32d-49da-801c-155853ae1d70\") " pod="openstack/keystone-d5c8f94c5-9sc2w" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.513669 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b7c377e3-d32d-49da-801c-155853ae1d70-credential-keys\") pod \"keystone-d5c8f94c5-9sc2w\" (UID: \"b7c377e3-d32d-49da-801c-155853ae1d70\") " pod="openstack/keystone-d5c8f94c5-9sc2w" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.514117 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8562\" (UniqueName: \"kubernetes.io/projected/b7c377e3-d32d-49da-801c-155853ae1d70-kube-api-access-s8562\") pod \"keystone-d5c8f94c5-9sc2w\" (UID: \"b7c377e3-d32d-49da-801c-155853ae1d70\") " pod="openstack/keystone-d5c8f94c5-9sc2w" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.522213 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7c377e3-d32d-49da-801c-155853ae1d70-internal-tls-certs\") pod \"keystone-d5c8f94c5-9sc2w\" (UID: \"b7c377e3-d32d-49da-801c-155853ae1d70\") " pod="openstack/keystone-d5c8f94c5-9sc2w" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.565450 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-qxw9x" Feb 28 13:38:26 crc kubenswrapper[4897]: I0228 13:38:26.571436 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-d5c8f94c5-9sc2w" Feb 28 13:38:27 crc kubenswrapper[4897]: I0228 13:38:27.135294 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d5c8f94c5-9sc2w"] Feb 28 13:38:27 crc kubenswrapper[4897]: I0228 13:38:27.139529 4897 generic.go:334] "Generic (PLEG): container finished" podID="2b88f822-8f2a-473a-b388-b144a37ba4f0" containerID="26a3f9ee7928f0abd19f88b38a2ed57fd3b52dd242cbfbf71f2750617f194561" exitCode=1 Feb 28 13:38:27 crc kubenswrapper[4897]: I0228 13:38:27.139604 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"2b88f822-8f2a-473a-b388-b144a37ba4f0","Type":"ContainerDied","Data":"26a3f9ee7928f0abd19f88b38a2ed57fd3b52dd242cbfbf71f2750617f194561"} Feb 28 13:38:27 crc kubenswrapper[4897]: I0228 13:38:27.140348 4897 scope.go:117] "RemoveContainer" containerID="26a3f9ee7928f0abd19f88b38a2ed57fd3b52dd242cbfbf71f2750617f194561" Feb 28 13:38:27 crc kubenswrapper[4897]: I0228 13:38:27.151856 4897 generic.go:334] "Generic (PLEG): container finished" podID="5fc315f1-a65d-4ba7-aa89-69ffe04b53a6" containerID="35a3970f5e13a727a746265694f0710ca8239257e87518d2964beb9c3efddce0" exitCode=0 Feb 28 13:38:27 crc kubenswrapper[4897]: I0228 13:38:27.151897 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7vjm5" event={"ID":"5fc315f1-a65d-4ba7-aa89-69ffe04b53a6","Type":"ContainerDied","Data":"35a3970f5e13a727a746265694f0710ca8239257e87518d2964beb9c3efddce0"} Feb 28 13:38:28 crc kubenswrapper[4897]: I0228 13:38:28.016945 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b6ff458d7-pwrsk" Feb 28 13:38:28 crc kubenswrapper[4897]: I0228 13:38:28.086832 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-589b5bf549-hvvfk"] Feb 28 13:38:28 crc kubenswrapper[4897]: I0228 13:38:28.087114 4897 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/dnsmasq-dns-589b5bf549-hvvfk" podUID="23dda98f-2840-432f-876f-e180110c6c12" containerName="dnsmasq-dns" containerID="cri-o://3b4b6fc4c3c84623f5a7937c1a34cf377b3c073e820dade941bd72fde3942816" gracePeriod=10 Feb 28 13:38:28 crc kubenswrapper[4897]: I0228 13:38:28.556534 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 28 13:38:28 crc kubenswrapper[4897]: I0228 13:38:28.556899 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 28 13:38:28 crc kubenswrapper[4897]: I0228 13:38:28.607730 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 28 13:38:28 crc kubenswrapper[4897]: I0228 13:38:28.668856 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 28 13:38:28 crc kubenswrapper[4897]: I0228 13:38:28.808720 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-589b5bf549-hvvfk" podUID="23dda98f-2840-432f-876f-e180110c6c12" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.159:5353: connect: connection refused" Feb 28 13:38:29 crc kubenswrapper[4897]: I0228 13:38:29.185527 4897 generic.go:334] "Generic (PLEG): container finished" podID="23dda98f-2840-432f-876f-e180110c6c12" containerID="3b4b6fc4c3c84623f5a7937c1a34cf377b3c073e820dade941bd72fde3942816" exitCode=0 Feb 28 13:38:29 crc kubenswrapper[4897]: I0228 13:38:29.185622 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-589b5bf549-hvvfk" event={"ID":"23dda98f-2840-432f-876f-e180110c6c12","Type":"ContainerDied","Data":"3b4b6fc4c3c84623f5a7937c1a34cf377b3c073e820dade941bd72fde3942816"} Feb 28 13:38:29 crc kubenswrapper[4897]: I0228 13:38:29.187699 4897 generic.go:334] "Generic (PLEG): container finished" 
podID="661850a9-a877-476b-b3ae-a6c6f3b3676a" containerID="055a592e8d99d6446218e3f7bed61affb829a0dca995b9cbcfc03dbe444b4339" exitCode=0 Feb 28 13:38:29 crc kubenswrapper[4897]: I0228 13:38:29.188755 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fgtj6" event={"ID":"661850a9-a877-476b-b3ae-a6c6f3b3676a","Type":"ContainerDied","Data":"055a592e8d99d6446218e3f7bed61affb829a0dca995b9cbcfc03dbe444b4339"} Feb 28 13:38:29 crc kubenswrapper[4897]: I0228 13:38:29.188791 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 28 13:38:29 crc kubenswrapper[4897]: I0228 13:38:29.189180 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 28 13:38:29 crc kubenswrapper[4897]: W0228 13:38:29.368827 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7c377e3_d32d_49da_801c_155853ae1d70.slice/crio-b2bc73314f67b71e19adc53218099833c5c73d8a4139779cbc3061aca20fe6c2 WatchSource:0}: Error finding container b2bc73314f67b71e19adc53218099833c5c73d8a4139779cbc3061aca20fe6c2: Status 404 returned error can't find the container with id b2bc73314f67b71e19adc53218099833c5c73d8a4139779cbc3061aca20fe6c2 Feb 28 13:38:29 crc kubenswrapper[4897]: I0228 13:38:29.486438 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-7vjm5" Feb 28 13:38:29 crc kubenswrapper[4897]: I0228 13:38:29.507923 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 28 13:38:29 crc kubenswrapper[4897]: I0228 13:38:29.507975 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 28 13:38:29 crc kubenswrapper[4897]: I0228 13:38:29.507984 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 28 13:38:29 crc kubenswrapper[4897]: I0228 13:38:29.508004 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 28 13:38:29 crc kubenswrapper[4897]: I0228 13:38:29.548492 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fc315f1-a65d-4ba7-aa89-69ffe04b53a6-scripts\") pod \"5fc315f1-a65d-4ba7-aa89-69ffe04b53a6\" (UID: \"5fc315f1-a65d-4ba7-aa89-69ffe04b53a6\") " Feb 28 13:38:29 crc kubenswrapper[4897]: I0228 13:38:29.548657 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fc315f1-a65d-4ba7-aa89-69ffe04b53a6-logs\") pod \"5fc315f1-a65d-4ba7-aa89-69ffe04b53a6\" (UID: \"5fc315f1-a65d-4ba7-aa89-69ffe04b53a6\") " Feb 28 13:38:29 crc kubenswrapper[4897]: I0228 13:38:29.549020 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc315f1-a65d-4ba7-aa89-69ffe04b53a6-combined-ca-bundle\") pod \"5fc315f1-a65d-4ba7-aa89-69ffe04b53a6\" (UID: \"5fc315f1-a65d-4ba7-aa89-69ffe04b53a6\") " Feb 28 13:38:29 crc kubenswrapper[4897]: I0228 13:38:29.549079 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5fc315f1-a65d-4ba7-aa89-69ffe04b53a6-config-data\") pod \"5fc315f1-a65d-4ba7-aa89-69ffe04b53a6\" (UID: \"5fc315f1-a65d-4ba7-aa89-69ffe04b53a6\") " Feb 28 13:38:29 crc kubenswrapper[4897]: I0228 13:38:29.549348 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wl644\" (UniqueName: \"kubernetes.io/projected/5fc315f1-a65d-4ba7-aa89-69ffe04b53a6-kube-api-access-wl644\") pod \"5fc315f1-a65d-4ba7-aa89-69ffe04b53a6\" (UID: \"5fc315f1-a65d-4ba7-aa89-69ffe04b53a6\") " Feb 28 13:38:29 crc kubenswrapper[4897]: I0228 13:38:29.560510 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fc315f1-a65d-4ba7-aa89-69ffe04b53a6-logs" (OuterVolumeSpecName: "logs") pod "5fc315f1-a65d-4ba7-aa89-69ffe04b53a6" (UID: "5fc315f1-a65d-4ba7-aa89-69ffe04b53a6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:38:29 crc kubenswrapper[4897]: I0228 13:38:29.576462 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fc315f1-a65d-4ba7-aa89-69ffe04b53a6-kube-api-access-wl644" (OuterVolumeSpecName: "kube-api-access-wl644") pod "5fc315f1-a65d-4ba7-aa89-69ffe04b53a6" (UID: "5fc315f1-a65d-4ba7-aa89-69ffe04b53a6"). InnerVolumeSpecName "kube-api-access-wl644". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:38:29 crc kubenswrapper[4897]: I0228 13:38:29.590576 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fc315f1-a65d-4ba7-aa89-69ffe04b53a6-scripts" (OuterVolumeSpecName: "scripts") pod "5fc315f1-a65d-4ba7-aa89-69ffe04b53a6" (UID: "5fc315f1-a65d-4ba7-aa89-69ffe04b53a6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:29 crc kubenswrapper[4897]: I0228 13:38:29.593294 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fc315f1-a65d-4ba7-aa89-69ffe04b53a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5fc315f1-a65d-4ba7-aa89-69ffe04b53a6" (UID: "5fc315f1-a65d-4ba7-aa89-69ffe04b53a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:29 crc kubenswrapper[4897]: I0228 13:38:29.618457 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fc315f1-a65d-4ba7-aa89-69ffe04b53a6-config-data" (OuterVolumeSpecName: "config-data") pod "5fc315f1-a65d-4ba7-aa89-69ffe04b53a6" (UID: "5fc315f1-a65d-4ba7-aa89-69ffe04b53a6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:29 crc kubenswrapper[4897]: I0228 13:38:29.638582 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 28 13:38:29 crc kubenswrapper[4897]: I0228 13:38:29.654226 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc315f1-a65d-4ba7-aa89-69ffe04b53a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:29 crc kubenswrapper[4897]: I0228 13:38:29.654259 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fc315f1-a65d-4ba7-aa89-69ffe04b53a6-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:29 crc kubenswrapper[4897]: I0228 13:38:29.654272 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wl644\" (UniqueName: \"kubernetes.io/projected/5fc315f1-a65d-4ba7-aa89-69ffe04b53a6-kube-api-access-wl644\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:29 crc kubenswrapper[4897]: I0228 13:38:29.654284 4897 
reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fc315f1-a65d-4ba7-aa89-69ffe04b53a6-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:29 crc kubenswrapper[4897]: I0228 13:38:29.654292 4897 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fc315f1-a65d-4ba7-aa89-69ffe04b53a6-logs\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:29 crc kubenswrapper[4897]: I0228 13:38:29.672257 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Feb 28 13:38:29 crc kubenswrapper[4897]: I0228 13:38:29.680183 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 28 13:38:29 crc kubenswrapper[4897]: I0228 13:38:29.692140 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Feb 28 13:38:30 crc kubenswrapper[4897]: I0228 13:38:30.200704 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d5c8f94c5-9sc2w" event={"ID":"b7c377e3-d32d-49da-801c-155853ae1d70","Type":"ContainerStarted","Data":"b2bc73314f67b71e19adc53218099833c5c73d8a4139779cbc3061aca20fe6c2"} Feb 28 13:38:30 crc kubenswrapper[4897]: I0228 13:38:30.204553 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7vjm5" event={"ID":"5fc315f1-a65d-4ba7-aa89-69ffe04b53a6","Type":"ContainerDied","Data":"b025c8d89f1355915452229df555a300e51cc55e81a101d52dc118a34d3a7562"} Feb 28 13:38:30 crc kubenswrapper[4897]: I0228 13:38:30.204616 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b025c8d89f1355915452229df555a300e51cc55e81a101d52dc118a34d3a7562" Feb 28 13:38:30 crc kubenswrapper[4897]: I0228 13:38:30.204663 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-7vjm5" Feb 28 13:38:30 crc kubenswrapper[4897]: I0228 13:38:30.244055 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Feb 28 13:38:30 crc kubenswrapper[4897]: I0228 13:38:30.633926 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-778b749bdb-bmqwf"] Feb 28 13:38:30 crc kubenswrapper[4897]: E0228 13:38:30.634331 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fc315f1-a65d-4ba7-aa89-69ffe04b53a6" containerName="placement-db-sync" Feb 28 13:38:30 crc kubenswrapper[4897]: I0228 13:38:30.634342 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fc315f1-a65d-4ba7-aa89-69ffe04b53a6" containerName="placement-db-sync" Feb 28 13:38:30 crc kubenswrapper[4897]: I0228 13:38:30.634564 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fc315f1-a65d-4ba7-aa89-69ffe04b53a6" containerName="placement-db-sync" Feb 28 13:38:30 crc kubenswrapper[4897]: I0228 13:38:30.635545 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-778b749bdb-bmqwf" Feb 28 13:38:30 crc kubenswrapper[4897]: I0228 13:38:30.637352 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 28 13:38:30 crc kubenswrapper[4897]: I0228 13:38:30.643250 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 28 13:38:30 crc kubenswrapper[4897]: I0228 13:38:30.643268 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 28 13:38:30 crc kubenswrapper[4897]: I0228 13:38:30.643274 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-zqz4n" Feb 28 13:38:30 crc kubenswrapper[4897]: I0228 13:38:30.646283 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 28 13:38:30 crc kubenswrapper[4897]: I0228 13:38:30.653563 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-778b749bdb-bmqwf"] Feb 28 13:38:30 crc kubenswrapper[4897]: I0228 13:38:30.804482 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2f1a9fc-a42b-488a-a7a6-207157fd1205-combined-ca-bundle\") pod \"placement-778b749bdb-bmqwf\" (UID: \"a2f1a9fc-a42b-488a-a7a6-207157fd1205\") " pod="openstack/placement-778b749bdb-bmqwf" Feb 28 13:38:30 crc kubenswrapper[4897]: I0228 13:38:30.804771 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2f1a9fc-a42b-488a-a7a6-207157fd1205-internal-tls-certs\") pod \"placement-778b749bdb-bmqwf\" (UID: \"a2f1a9fc-a42b-488a-a7a6-207157fd1205\") " pod="openstack/placement-778b749bdb-bmqwf" Feb 28 13:38:30 crc kubenswrapper[4897]: I0228 13:38:30.804952 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2f1a9fc-a42b-488a-a7a6-207157fd1205-scripts\") pod \"placement-778b749bdb-bmqwf\" (UID: \"a2f1a9fc-a42b-488a-a7a6-207157fd1205\") " pod="openstack/placement-778b749bdb-bmqwf" Feb 28 13:38:30 crc kubenswrapper[4897]: I0228 13:38:30.805068 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2f1a9fc-a42b-488a-a7a6-207157fd1205-public-tls-certs\") pod \"placement-778b749bdb-bmqwf\" (UID: \"a2f1a9fc-a42b-488a-a7a6-207157fd1205\") " pod="openstack/placement-778b749bdb-bmqwf" Feb 28 13:38:30 crc kubenswrapper[4897]: I0228 13:38:30.805222 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2f1a9fc-a42b-488a-a7a6-207157fd1205-logs\") pod \"placement-778b749bdb-bmqwf\" (UID: \"a2f1a9fc-a42b-488a-a7a6-207157fd1205\") " pod="openstack/placement-778b749bdb-bmqwf" Feb 28 13:38:30 crc kubenswrapper[4897]: I0228 13:38:30.805463 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vh76\" (UniqueName: \"kubernetes.io/projected/a2f1a9fc-a42b-488a-a7a6-207157fd1205-kube-api-access-2vh76\") pod \"placement-778b749bdb-bmqwf\" (UID: \"a2f1a9fc-a42b-488a-a7a6-207157fd1205\") " pod="openstack/placement-778b749bdb-bmqwf" Feb 28 13:38:30 crc kubenswrapper[4897]: I0228 13:38:30.805574 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2f1a9fc-a42b-488a-a7a6-207157fd1205-config-data\") pod \"placement-778b749bdb-bmqwf\" (UID: \"a2f1a9fc-a42b-488a-a7a6-207157fd1205\") " pod="openstack/placement-778b749bdb-bmqwf" Feb 28 13:38:30 crc kubenswrapper[4897]: I0228 13:38:30.906867 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2f1a9fc-a42b-488a-a7a6-207157fd1205-internal-tls-certs\") pod \"placement-778b749bdb-bmqwf\" (UID: \"a2f1a9fc-a42b-488a-a7a6-207157fd1205\") " pod="openstack/placement-778b749bdb-bmqwf" Feb 28 13:38:30 crc kubenswrapper[4897]: I0228 13:38:30.906947 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2f1a9fc-a42b-488a-a7a6-207157fd1205-scripts\") pod \"placement-778b749bdb-bmqwf\" (UID: \"a2f1a9fc-a42b-488a-a7a6-207157fd1205\") " pod="openstack/placement-778b749bdb-bmqwf" Feb 28 13:38:30 crc kubenswrapper[4897]: I0228 13:38:30.906974 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2f1a9fc-a42b-488a-a7a6-207157fd1205-public-tls-certs\") pod \"placement-778b749bdb-bmqwf\" (UID: \"a2f1a9fc-a42b-488a-a7a6-207157fd1205\") " pod="openstack/placement-778b749bdb-bmqwf" Feb 28 13:38:30 crc kubenswrapper[4897]: I0228 13:38:30.907009 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2f1a9fc-a42b-488a-a7a6-207157fd1205-logs\") pod \"placement-778b749bdb-bmqwf\" (UID: \"a2f1a9fc-a42b-488a-a7a6-207157fd1205\") " pod="openstack/placement-778b749bdb-bmqwf" Feb 28 13:38:30 crc kubenswrapper[4897]: I0228 13:38:30.907075 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vh76\" (UniqueName: \"kubernetes.io/projected/a2f1a9fc-a42b-488a-a7a6-207157fd1205-kube-api-access-2vh76\") pod \"placement-778b749bdb-bmqwf\" (UID: \"a2f1a9fc-a42b-488a-a7a6-207157fd1205\") " pod="openstack/placement-778b749bdb-bmqwf" Feb 28 13:38:30 crc kubenswrapper[4897]: I0228 13:38:30.907094 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a2f1a9fc-a42b-488a-a7a6-207157fd1205-config-data\") pod \"placement-778b749bdb-bmqwf\" (UID: \"a2f1a9fc-a42b-488a-a7a6-207157fd1205\") " pod="openstack/placement-778b749bdb-bmqwf" Feb 28 13:38:30 crc kubenswrapper[4897]: I0228 13:38:30.907121 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2f1a9fc-a42b-488a-a7a6-207157fd1205-combined-ca-bundle\") pod \"placement-778b749bdb-bmqwf\" (UID: \"a2f1a9fc-a42b-488a-a7a6-207157fd1205\") " pod="openstack/placement-778b749bdb-bmqwf" Feb 28 13:38:30 crc kubenswrapper[4897]: I0228 13:38:30.907747 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2f1a9fc-a42b-488a-a7a6-207157fd1205-logs\") pod \"placement-778b749bdb-bmqwf\" (UID: \"a2f1a9fc-a42b-488a-a7a6-207157fd1205\") " pod="openstack/placement-778b749bdb-bmqwf" Feb 28 13:38:30 crc kubenswrapper[4897]: I0228 13:38:30.914389 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2f1a9fc-a42b-488a-a7a6-207157fd1205-config-data\") pod \"placement-778b749bdb-bmqwf\" (UID: \"a2f1a9fc-a42b-488a-a7a6-207157fd1205\") " pod="openstack/placement-778b749bdb-bmqwf" Feb 28 13:38:30 crc kubenswrapper[4897]: I0228 13:38:30.914894 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2f1a9fc-a42b-488a-a7a6-207157fd1205-internal-tls-certs\") pod \"placement-778b749bdb-bmqwf\" (UID: \"a2f1a9fc-a42b-488a-a7a6-207157fd1205\") " pod="openstack/placement-778b749bdb-bmqwf" Feb 28 13:38:30 crc kubenswrapper[4897]: I0228 13:38:30.915022 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2f1a9fc-a42b-488a-a7a6-207157fd1205-public-tls-certs\") pod \"placement-778b749bdb-bmqwf\" (UID: 
\"a2f1a9fc-a42b-488a-a7a6-207157fd1205\") " pod="openstack/placement-778b749bdb-bmqwf" Feb 28 13:38:30 crc kubenswrapper[4897]: I0228 13:38:30.917843 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2f1a9fc-a42b-488a-a7a6-207157fd1205-combined-ca-bundle\") pod \"placement-778b749bdb-bmqwf\" (UID: \"a2f1a9fc-a42b-488a-a7a6-207157fd1205\") " pod="openstack/placement-778b749bdb-bmqwf" Feb 28 13:38:30 crc kubenswrapper[4897]: I0228 13:38:30.936572 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2f1a9fc-a42b-488a-a7a6-207157fd1205-scripts\") pod \"placement-778b749bdb-bmqwf\" (UID: \"a2f1a9fc-a42b-488a-a7a6-207157fd1205\") " pod="openstack/placement-778b749bdb-bmqwf" Feb 28 13:38:30 crc kubenswrapper[4897]: I0228 13:38:30.944821 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vh76\" (UniqueName: \"kubernetes.io/projected/a2f1a9fc-a42b-488a-a7a6-207157fd1205-kube-api-access-2vh76\") pod \"placement-778b749bdb-bmqwf\" (UID: \"a2f1a9fc-a42b-488a-a7a6-207157fd1205\") " pod="openstack/placement-778b749bdb-bmqwf" Feb 28 13:38:30 crc kubenswrapper[4897]: I0228 13:38:30.954937 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-778b749bdb-bmqwf" Feb 28 13:38:31 crc kubenswrapper[4897]: I0228 13:38:31.212065 4897 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 28 13:38:31 crc kubenswrapper[4897]: I0228 13:38:31.212095 4897 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 28 13:38:31 crc kubenswrapper[4897]: I0228 13:38:31.502410 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 28 13:38:31 crc kubenswrapper[4897]: I0228 13:38:31.502463 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 28 13:38:31 crc kubenswrapper[4897]: I0228 13:38:31.502476 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 28 13:38:31 crc kubenswrapper[4897]: I0228 13:38:31.502486 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 28 13:38:31 crc kubenswrapper[4897]: I0228 13:38:31.547866 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 28 13:38:31 crc kubenswrapper[4897]: I0228 13:38:31.583736 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 28 13:38:32 crc kubenswrapper[4897]: I0228 13:38:32.118082 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6fb67c45d-s75qr" podUID="6102738c-6c77-48c6-87e1-67853cf8ce43" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.163:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.163:8443: connect: connection refused" Feb 28 13:38:32 crc kubenswrapper[4897]: I0228 13:38:32.216841 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7df779db98-ljwk8" 
podUID="e0db6a4f-19e4-488c-bc45-9619565bdf57" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.164:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.164:8443: connect: connection refused" Feb 28 13:38:32 crc kubenswrapper[4897]: I0228 13:38:32.653881 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-fgtj6" Feb 28 13:38:32 crc kubenswrapper[4897]: I0228 13:38:32.698512 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-589b5bf549-hvvfk" Feb 28 13:38:32 crc kubenswrapper[4897]: I0228 13:38:32.749809 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/661850a9-a877-476b-b3ae-a6c6f3b3676a-combined-ca-bundle\") pod \"661850a9-a877-476b-b3ae-a6c6f3b3676a\" (UID: \"661850a9-a877-476b-b3ae-a6c6f3b3676a\") " Feb 28 13:38:32 crc kubenswrapper[4897]: I0228 13:38:32.750749 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/661850a9-a877-476b-b3ae-a6c6f3b3676a-db-sync-config-data\") pod \"661850a9-a877-476b-b3ae-a6c6f3b3676a\" (UID: \"661850a9-a877-476b-b3ae-a6c6f3b3676a\") " Feb 28 13:38:32 crc kubenswrapper[4897]: I0228 13:38:32.750879 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dv9m8\" (UniqueName: \"kubernetes.io/projected/661850a9-a877-476b-b3ae-a6c6f3b3676a-kube-api-access-dv9m8\") pod \"661850a9-a877-476b-b3ae-a6c6f3b3676a\" (UID: \"661850a9-a877-476b-b3ae-a6c6f3b3676a\") " Feb 28 13:38:32 crc kubenswrapper[4897]: I0228 13:38:32.760732 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/661850a9-a877-476b-b3ae-a6c6f3b3676a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod 
"661850a9-a877-476b-b3ae-a6c6f3b3676a" (UID: "661850a9-a877-476b-b3ae-a6c6f3b3676a"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:32 crc kubenswrapper[4897]: I0228 13:38:32.761787 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/661850a9-a877-476b-b3ae-a6c6f3b3676a-kube-api-access-dv9m8" (OuterVolumeSpecName: "kube-api-access-dv9m8") pod "661850a9-a877-476b-b3ae-a6c6f3b3676a" (UID: "661850a9-a877-476b-b3ae-a6c6f3b3676a"). InnerVolumeSpecName "kube-api-access-dv9m8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:38:32 crc kubenswrapper[4897]: I0228 13:38:32.850792 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/661850a9-a877-476b-b3ae-a6c6f3b3676a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "661850a9-a877-476b-b3ae-a6c6f3b3676a" (UID: "661850a9-a877-476b-b3ae-a6c6f3b3676a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:32 crc kubenswrapper[4897]: I0228 13:38:32.851729 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23dda98f-2840-432f-876f-e180110c6c12-config\") pod \"23dda98f-2840-432f-876f-e180110c6c12\" (UID: \"23dda98f-2840-432f-876f-e180110c6c12\") " Feb 28 13:38:32 crc kubenswrapper[4897]: I0228 13:38:32.851859 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23dda98f-2840-432f-876f-e180110c6c12-ovsdbserver-nb\") pod \"23dda98f-2840-432f-876f-e180110c6c12\" (UID: \"23dda98f-2840-432f-876f-e180110c6c12\") " Feb 28 13:38:32 crc kubenswrapper[4897]: I0228 13:38:32.851908 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23dda98f-2840-432f-876f-e180110c6c12-ovsdbserver-sb\") pod \"23dda98f-2840-432f-876f-e180110c6c12\" (UID: \"23dda98f-2840-432f-876f-e180110c6c12\") " Feb 28 13:38:32 crc kubenswrapper[4897]: I0228 13:38:32.851951 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23dda98f-2840-432f-876f-e180110c6c12-dns-svc\") pod \"23dda98f-2840-432f-876f-e180110c6c12\" (UID: \"23dda98f-2840-432f-876f-e180110c6c12\") " Feb 28 13:38:32 crc kubenswrapper[4897]: I0228 13:38:32.851988 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23dda98f-2840-432f-876f-e180110c6c12-dns-swift-storage-0\") pod \"23dda98f-2840-432f-876f-e180110c6c12\" (UID: \"23dda98f-2840-432f-876f-e180110c6c12\") " Feb 28 13:38:32 crc kubenswrapper[4897]: I0228 13:38:32.852065 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gd7s\" (UniqueName: 
\"kubernetes.io/projected/23dda98f-2840-432f-876f-e180110c6c12-kube-api-access-4gd7s\") pod \"23dda98f-2840-432f-876f-e180110c6c12\" (UID: \"23dda98f-2840-432f-876f-e180110c6c12\") " Feb 28 13:38:32 crc kubenswrapper[4897]: I0228 13:38:32.852612 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dv9m8\" (UniqueName: \"kubernetes.io/projected/661850a9-a877-476b-b3ae-a6c6f3b3676a-kube-api-access-dv9m8\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:32 crc kubenswrapper[4897]: I0228 13:38:32.852627 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/661850a9-a877-476b-b3ae-a6c6f3b3676a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:32 crc kubenswrapper[4897]: I0228 13:38:32.852637 4897 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/661850a9-a877-476b-b3ae-a6c6f3b3676a-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:32 crc kubenswrapper[4897]: I0228 13:38:32.857408 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23dda98f-2840-432f-876f-e180110c6c12-kube-api-access-4gd7s" (OuterVolumeSpecName: "kube-api-access-4gd7s") pod "23dda98f-2840-432f-876f-e180110c6c12" (UID: "23dda98f-2840-432f-876f-e180110c6c12"). InnerVolumeSpecName "kube-api-access-4gd7s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:38:32 crc kubenswrapper[4897]: I0228 13:38:32.954184 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gd7s\" (UniqueName: \"kubernetes.io/projected/23dda98f-2840-432f-876f-e180110c6c12-kube-api-access-4gd7s\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:32 crc kubenswrapper[4897]: I0228 13:38:32.999252 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-778b749bdb-bmqwf"] Feb 28 13:38:33 crc kubenswrapper[4897]: I0228 13:38:33.019665 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23dda98f-2840-432f-876f-e180110c6c12-config" (OuterVolumeSpecName: "config") pod "23dda98f-2840-432f-876f-e180110c6c12" (UID: "23dda98f-2840-432f-876f-e180110c6c12"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:38:33 crc kubenswrapper[4897]: I0228 13:38:33.042469 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23dda98f-2840-432f-876f-e180110c6c12-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "23dda98f-2840-432f-876f-e180110c6c12" (UID: "23dda98f-2840-432f-876f-e180110c6c12"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:38:33 crc kubenswrapper[4897]: I0228 13:38:33.049061 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23dda98f-2840-432f-876f-e180110c6c12-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "23dda98f-2840-432f-876f-e180110c6c12" (UID: "23dda98f-2840-432f-876f-e180110c6c12"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:38:33 crc kubenswrapper[4897]: I0228 13:38:33.049087 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23dda98f-2840-432f-876f-e180110c6c12-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "23dda98f-2840-432f-876f-e180110c6c12" (UID: "23dda98f-2840-432f-876f-e180110c6c12"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:38:33 crc kubenswrapper[4897]: I0228 13:38:33.056013 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23dda98f-2840-432f-876f-e180110c6c12-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:33 crc kubenswrapper[4897]: I0228 13:38:33.056173 4897 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23dda98f-2840-432f-876f-e180110c6c12-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:33 crc kubenswrapper[4897]: I0228 13:38:33.056253 4897 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23dda98f-2840-432f-876f-e180110c6c12-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:33 crc kubenswrapper[4897]: I0228 13:38:33.056343 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23dda98f-2840-432f-876f-e180110c6c12-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:33 crc kubenswrapper[4897]: I0228 13:38:33.062757 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23dda98f-2840-432f-876f-e180110c6c12-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "23dda98f-2840-432f-876f-e180110c6c12" (UID: "23dda98f-2840-432f-876f-e180110c6c12"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:38:33 crc kubenswrapper[4897]: I0228 13:38:33.158648 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23dda98f-2840-432f-876f-e180110c6c12-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:33 crc kubenswrapper[4897]: I0228 13:38:33.230591 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fgtj6" event={"ID":"661850a9-a877-476b-b3ae-a6c6f3b3676a","Type":"ContainerDied","Data":"657c248d8e6829eadc45285f755df35ba872cb91106aa591907ec5ee289f81b9"} Feb 28 13:38:33 crc kubenswrapper[4897]: I0228 13:38:33.230650 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="657c248d8e6829eadc45285f755df35ba872cb91106aa591907ec5ee289f81b9" Feb 28 13:38:33 crc kubenswrapper[4897]: I0228 13:38:33.230611 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-fgtj6" Feb 28 13:38:33 crc kubenswrapper[4897]: I0228 13:38:33.233022 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"2b88f822-8f2a-473a-b388-b144a37ba4f0","Type":"ContainerStarted","Data":"0dc18714380303e1cafd477176a5042930839874559895094e8ed71f336ecd95"} Feb 28 13:38:33 crc kubenswrapper[4897]: I0228 13:38:33.237384 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7aff986-c99b-43a7-afc8-b9194ce17385","Type":"ContainerStarted","Data":"8afe08a8012900b93dcc91888cfb0570e04eb144f0c8e5affa8382e765241f75"} Feb 28 13:38:33 crc kubenswrapper[4897]: I0228 13:38:33.238721 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d5c8f94c5-9sc2w" event={"ID":"b7c377e3-d32d-49da-801c-155853ae1d70","Type":"ContainerStarted","Data":"09cc55017739c6ad24e33c8bbabcf2940a50f306043d1a19803c8d70fbb88040"} Feb 28 13:38:33 crc kubenswrapper[4897]: 
I0228 13:38:33.239627 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-d5c8f94c5-9sc2w" Feb 28 13:38:33 crc kubenswrapper[4897]: I0228 13:38:33.241752 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-778b749bdb-bmqwf" event={"ID":"a2f1a9fc-a42b-488a-a7a6-207157fd1205","Type":"ContainerStarted","Data":"10403fad9cbb0908258ab09ce2f8b9e872a7532e6915f0e8e151dcdb4cba7863"} Feb 28 13:38:33 crc kubenswrapper[4897]: I0228 13:38:33.252745 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-589b5bf549-hvvfk" Feb 28 13:38:33 crc kubenswrapper[4897]: I0228 13:38:33.257629 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-589b5bf549-hvvfk" event={"ID":"23dda98f-2840-432f-876f-e180110c6c12","Type":"ContainerDied","Data":"e06eb1a9806e202ee28a707f3dc01b31b22ca8fb412700a835e241bbf3610df9"} Feb 28 13:38:33 crc kubenswrapper[4897]: I0228 13:38:33.257723 4897 scope.go:117] "RemoveContainer" containerID="3b4b6fc4c3c84623f5a7937c1a34cf377b3c073e820dade941bd72fde3942816" Feb 28 13:38:33 crc kubenswrapper[4897]: I0228 13:38:33.295745 4897 scope.go:117] "RemoveContainer" containerID="b585eeb1ace494c7d19f81d413135e43fbb58029cb99e32b6d542b606d451b3a" Feb 28 13:38:33 crc kubenswrapper[4897]: I0228 13:38:33.306676 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-d5c8f94c5-9sc2w" podStartSLOduration=7.306657583 podStartE2EDuration="7.306657583s" podCreationTimestamp="2026-02-28 13:38:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:38:33.284452993 +0000 UTC m=+1327.526773650" watchObservedRunningTime="2026-02-28 13:38:33.306657583 +0000 UTC m=+1327.548978240" Feb 28 13:38:33 crc kubenswrapper[4897]: I0228 13:38:33.338415 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-589b5bf549-hvvfk"] Feb 28 13:38:33 crc kubenswrapper[4897]: I0228 13:38:33.340580 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-589b5bf549-hvvfk"] Feb 28 13:38:33 crc kubenswrapper[4897]: I0228 13:38:33.377542 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 13:38:33 crc kubenswrapper[4897]: I0228 13:38:33.377592 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 13:38:33 crc kubenswrapper[4897]: I0228 13:38:33.659887 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 28 13:38:33 crc kubenswrapper[4897]: I0228 13:38:33.660344 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="ea8b2284-fafa-4fca-b367-9cffa5f5a201" containerName="watcher-api-log" containerID="cri-o://2424a9ab6a1e9b317522faf4cc0d53b4167eb8549d760e1763e2c7b49b7c5d84" gracePeriod=30 Feb 28 13:38:33 crc kubenswrapper[4897]: I0228 13:38:33.660395 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="ea8b2284-fafa-4fca-b367-9cffa5f5a201" containerName="watcher-api" containerID="cri-o://b5d29fd8ec7fe249a3ec203e4cef0d21dacdd74c9da0ed6aba9a314f424d8328" gracePeriod=30 Feb 28 13:38:33 crc kubenswrapper[4897]: I0228 13:38:33.794719 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 28 13:38:33 
crc kubenswrapper[4897]: I0228 13:38:33.795178 4897 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 28 13:38:33 crc kubenswrapper[4897]: I0228 13:38:33.893939 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 28 13:38:33 crc kubenswrapper[4897]: I0228 13:38:33.943006 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-755f78ff99-pb5jr"] Feb 28 13:38:33 crc kubenswrapper[4897]: E0228 13:38:33.943453 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23dda98f-2840-432f-876f-e180110c6c12" containerName="dnsmasq-dns" Feb 28 13:38:33 crc kubenswrapper[4897]: I0228 13:38:33.943467 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="23dda98f-2840-432f-876f-e180110c6c12" containerName="dnsmasq-dns" Feb 28 13:38:33 crc kubenswrapper[4897]: E0228 13:38:33.943475 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23dda98f-2840-432f-876f-e180110c6c12" containerName="init" Feb 28 13:38:33 crc kubenswrapper[4897]: I0228 13:38:33.943481 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="23dda98f-2840-432f-876f-e180110c6c12" containerName="init" Feb 28 13:38:33 crc kubenswrapper[4897]: E0228 13:38:33.943498 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="661850a9-a877-476b-b3ae-a6c6f3b3676a" containerName="barbican-db-sync" Feb 28 13:38:33 crc kubenswrapper[4897]: I0228 13:38:33.943506 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="661850a9-a877-476b-b3ae-a6c6f3b3676a" containerName="barbican-db-sync" Feb 28 13:38:33 crc kubenswrapper[4897]: I0228 13:38:33.943694 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="661850a9-a877-476b-b3ae-a6c6f3b3676a" containerName="barbican-db-sync" Feb 28 13:38:33 crc kubenswrapper[4897]: I0228 13:38:33.943715 4897 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="23dda98f-2840-432f-876f-e180110c6c12" containerName="dnsmasq-dns" Feb 28 13:38:33 crc kubenswrapper[4897]: I0228 13:38:33.944762 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-755f78ff99-pb5jr" Feb 28 13:38:33 crc kubenswrapper[4897]: I0228 13:38:33.951707 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 28 13:38:33 crc kubenswrapper[4897]: I0228 13:38:33.951982 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-ntjzb" Feb 28 13:38:33 crc kubenswrapper[4897]: I0228 13:38:33.952187 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 28 13:38:33 crc kubenswrapper[4897]: I0228 13:38:33.972371 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-755f78ff99-pb5jr"] Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.000613 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7566789bf4-gcgqv"] Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.003496 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7566789bf4-gcgqv" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.007624 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.019601 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7566789bf4-gcgqv"] Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.061324 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59bbf6bdfc-r6m8d"] Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.071324 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59bbf6bdfc-r6m8d" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.087644 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3d152c-8c19-456d-82a4-184138ae3541-combined-ca-bundle\") pod \"barbican-keystone-listener-7566789bf4-gcgqv\" (UID: \"ae3d152c-8c19-456d-82a4-184138ae3541\") " pod="openstack/barbican-keystone-listener-7566789bf4-gcgqv" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.087815 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8s5v\" (UniqueName: \"kubernetes.io/projected/8315bc28-3362-4d67-9561-f2b8fa3e69b7-kube-api-access-s8s5v\") pod \"barbican-worker-755f78ff99-pb5jr\" (UID: \"8315bc28-3362-4d67-9561-f2b8fa3e69b7\") " pod="openstack/barbican-worker-755f78ff99-pb5jr" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.087980 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klpr8\" (UniqueName: \"kubernetes.io/projected/ae3d152c-8c19-456d-82a4-184138ae3541-kube-api-access-klpr8\") pod \"barbican-keystone-listener-7566789bf4-gcgqv\" (UID: \"ae3d152c-8c19-456d-82a4-184138ae3541\") " pod="openstack/barbican-keystone-listener-7566789bf4-gcgqv" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.088071 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8315bc28-3362-4d67-9561-f2b8fa3e69b7-config-data\") pod \"barbican-worker-755f78ff99-pb5jr\" (UID: \"8315bc28-3362-4d67-9561-f2b8fa3e69b7\") " pod="openstack/barbican-worker-755f78ff99-pb5jr" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.088271 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/ae3d152c-8c19-456d-82a4-184138ae3541-config-data-custom\") pod \"barbican-keystone-listener-7566789bf4-gcgqv\" (UID: \"ae3d152c-8c19-456d-82a4-184138ae3541\") " pod="openstack/barbican-keystone-listener-7566789bf4-gcgqv" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.088370 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8315bc28-3362-4d67-9561-f2b8fa3e69b7-config-data-custom\") pod \"barbican-worker-755f78ff99-pb5jr\" (UID: \"8315bc28-3362-4d67-9561-f2b8fa3e69b7\") " pod="openstack/barbican-worker-755f78ff99-pb5jr" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.088463 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae3d152c-8c19-456d-82a4-184138ae3541-config-data\") pod \"barbican-keystone-listener-7566789bf4-gcgqv\" (UID: \"ae3d152c-8c19-456d-82a4-184138ae3541\") " pod="openstack/barbican-keystone-listener-7566789bf4-gcgqv" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.088550 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8315bc28-3362-4d67-9561-f2b8fa3e69b7-logs\") pod \"barbican-worker-755f78ff99-pb5jr\" (UID: \"8315bc28-3362-4d67-9561-f2b8fa3e69b7\") " pod="openstack/barbican-worker-755f78ff99-pb5jr" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.088655 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8315bc28-3362-4d67-9561-f2b8fa3e69b7-combined-ca-bundle\") pod \"barbican-worker-755f78ff99-pb5jr\" (UID: \"8315bc28-3362-4d67-9561-f2b8fa3e69b7\") " pod="openstack/barbican-worker-755f78ff99-pb5jr" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.088746 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae3d152c-8c19-456d-82a4-184138ae3541-logs\") pod \"barbican-keystone-listener-7566789bf4-gcgqv\" (UID: \"ae3d152c-8c19-456d-82a4-184138ae3541\") " pod="openstack/barbican-keystone-listener-7566789bf4-gcgqv" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.099035 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59bbf6bdfc-r6m8d"] Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.112776 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-74cc48945b-m8vv6"] Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.114201 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-74cc48945b-m8vv6" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.121652 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.142405 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-74cc48945b-m8vv6"] Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.193449 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klpr8\" (UniqueName: \"kubernetes.io/projected/ae3d152c-8c19-456d-82a4-184138ae3541-kube-api-access-klpr8\") pod \"barbican-keystone-listener-7566789bf4-gcgqv\" (UID: \"ae3d152c-8c19-456d-82a4-184138ae3541\") " pod="openstack/barbican-keystone-listener-7566789bf4-gcgqv" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.193619 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23ceaea1-59f2-4be2-80bf-176368f401d7-ovsdbserver-sb\") pod \"dnsmasq-dns-59bbf6bdfc-r6m8d\" (UID: \"23ceaea1-59f2-4be2-80bf-176368f401d7\") " 
pod="openstack/dnsmasq-dns-59bbf6bdfc-r6m8d" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.193643 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8315bc28-3362-4d67-9561-f2b8fa3e69b7-config-data\") pod \"barbican-worker-755f78ff99-pb5jr\" (UID: \"8315bc28-3362-4d67-9561-f2b8fa3e69b7\") " pod="openstack/barbican-worker-755f78ff99-pb5jr" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.193676 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cff2212b-2ce8-42ab-85b4-6d4d9789c14b-config-data-custom\") pod \"barbican-api-74cc48945b-m8vv6\" (UID: \"cff2212b-2ce8-42ab-85b4-6d4d9789c14b\") " pod="openstack/barbican-api-74cc48945b-m8vv6" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.193698 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23ceaea1-59f2-4be2-80bf-176368f401d7-dns-svc\") pod \"dnsmasq-dns-59bbf6bdfc-r6m8d\" (UID: \"23ceaea1-59f2-4be2-80bf-176368f401d7\") " pod="openstack/dnsmasq-dns-59bbf6bdfc-r6m8d" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.193727 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae3d152c-8c19-456d-82a4-184138ae3541-config-data-custom\") pod \"barbican-keystone-listener-7566789bf4-gcgqv\" (UID: \"ae3d152c-8c19-456d-82a4-184138ae3541\") " pod="openstack/barbican-keystone-listener-7566789bf4-gcgqv" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.193749 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8315bc28-3362-4d67-9561-f2b8fa3e69b7-config-data-custom\") pod \"barbican-worker-755f78ff99-pb5jr\" (UID: 
\"8315bc28-3362-4d67-9561-f2b8fa3e69b7\") " pod="openstack/barbican-worker-755f78ff99-pb5jr" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.193775 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23ceaea1-59f2-4be2-80bf-176368f401d7-config\") pod \"dnsmasq-dns-59bbf6bdfc-r6m8d\" (UID: \"23ceaea1-59f2-4be2-80bf-176368f401d7\") " pod="openstack/dnsmasq-dns-59bbf6bdfc-r6m8d" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.193799 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae3d152c-8c19-456d-82a4-184138ae3541-config-data\") pod \"barbican-keystone-listener-7566789bf4-gcgqv\" (UID: \"ae3d152c-8c19-456d-82a4-184138ae3541\") " pod="openstack/barbican-keystone-listener-7566789bf4-gcgqv" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.193828 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cff2212b-2ce8-42ab-85b4-6d4d9789c14b-logs\") pod \"barbican-api-74cc48945b-m8vv6\" (UID: \"cff2212b-2ce8-42ab-85b4-6d4d9789c14b\") " pod="openstack/barbican-api-74cc48945b-m8vv6" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.193856 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8315bc28-3362-4d67-9561-f2b8fa3e69b7-logs\") pod \"barbican-worker-755f78ff99-pb5jr\" (UID: \"8315bc28-3362-4d67-9561-f2b8fa3e69b7\") " pod="openstack/barbican-worker-755f78ff99-pb5jr" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.193891 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8315bc28-3362-4d67-9561-f2b8fa3e69b7-combined-ca-bundle\") pod \"barbican-worker-755f78ff99-pb5jr\" (UID: 
\"8315bc28-3362-4d67-9561-f2b8fa3e69b7\") " pod="openstack/barbican-worker-755f78ff99-pb5jr" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.193912 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae3d152c-8c19-456d-82a4-184138ae3541-logs\") pod \"barbican-keystone-listener-7566789bf4-gcgqv\" (UID: \"ae3d152c-8c19-456d-82a4-184138ae3541\") " pod="openstack/barbican-keystone-listener-7566789bf4-gcgqv" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.193927 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp6bq\" (UniqueName: \"kubernetes.io/projected/23ceaea1-59f2-4be2-80bf-176368f401d7-kube-api-access-zp6bq\") pod \"dnsmasq-dns-59bbf6bdfc-r6m8d\" (UID: \"23ceaea1-59f2-4be2-80bf-176368f401d7\") " pod="openstack/dnsmasq-dns-59bbf6bdfc-r6m8d" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.193947 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3d152c-8c19-456d-82a4-184138ae3541-combined-ca-bundle\") pod \"barbican-keystone-listener-7566789bf4-gcgqv\" (UID: \"ae3d152c-8c19-456d-82a4-184138ae3541\") " pod="openstack/barbican-keystone-listener-7566789bf4-gcgqv" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.193964 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cff2212b-2ce8-42ab-85b4-6d4d9789c14b-combined-ca-bundle\") pod \"barbican-api-74cc48945b-m8vv6\" (UID: \"cff2212b-2ce8-42ab-85b4-6d4d9789c14b\") " pod="openstack/barbican-api-74cc48945b-m8vv6" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.193987 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8s5v\" (UniqueName: 
\"kubernetes.io/projected/8315bc28-3362-4d67-9561-f2b8fa3e69b7-kube-api-access-s8s5v\") pod \"barbican-worker-755f78ff99-pb5jr\" (UID: \"8315bc28-3362-4d67-9561-f2b8fa3e69b7\") " pod="openstack/barbican-worker-755f78ff99-pb5jr" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.194016 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cff2212b-2ce8-42ab-85b4-6d4d9789c14b-config-data\") pod \"barbican-api-74cc48945b-m8vv6\" (UID: \"cff2212b-2ce8-42ab-85b4-6d4d9789c14b\") " pod="openstack/barbican-api-74cc48945b-m8vv6" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.194032 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2c2w\" (UniqueName: \"kubernetes.io/projected/cff2212b-2ce8-42ab-85b4-6d4d9789c14b-kube-api-access-n2c2w\") pod \"barbican-api-74cc48945b-m8vv6\" (UID: \"cff2212b-2ce8-42ab-85b4-6d4d9789c14b\") " pod="openstack/barbican-api-74cc48945b-m8vv6" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.194067 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23ceaea1-59f2-4be2-80bf-176368f401d7-ovsdbserver-nb\") pod \"dnsmasq-dns-59bbf6bdfc-r6m8d\" (UID: \"23ceaea1-59f2-4be2-80bf-176368f401d7\") " pod="openstack/dnsmasq-dns-59bbf6bdfc-r6m8d" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.194096 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23ceaea1-59f2-4be2-80bf-176368f401d7-dns-swift-storage-0\") pod \"dnsmasq-dns-59bbf6bdfc-r6m8d\" (UID: \"23ceaea1-59f2-4be2-80bf-176368f401d7\") " pod="openstack/dnsmasq-dns-59bbf6bdfc-r6m8d" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.196259 4897 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8315bc28-3362-4d67-9561-f2b8fa3e69b7-logs\") pod \"barbican-worker-755f78ff99-pb5jr\" (UID: \"8315bc28-3362-4d67-9561-f2b8fa3e69b7\") " pod="openstack/barbican-worker-755f78ff99-pb5jr" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.199625 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae3d152c-8c19-456d-82a4-184138ae3541-logs\") pod \"barbican-keystone-listener-7566789bf4-gcgqv\" (UID: \"ae3d152c-8c19-456d-82a4-184138ae3541\") " pod="openstack/barbican-keystone-listener-7566789bf4-gcgqv" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.204105 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae3d152c-8c19-456d-82a4-184138ae3541-config-data\") pod \"barbican-keystone-listener-7566789bf4-gcgqv\" (UID: \"ae3d152c-8c19-456d-82a4-184138ae3541\") " pod="openstack/barbican-keystone-listener-7566789bf4-gcgqv" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.205907 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8315bc28-3362-4d67-9561-f2b8fa3e69b7-config-data\") pod \"barbican-worker-755f78ff99-pb5jr\" (UID: \"8315bc28-3362-4d67-9561-f2b8fa3e69b7\") " pod="openstack/barbican-worker-755f78ff99-pb5jr" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.213688 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8315bc28-3362-4d67-9561-f2b8fa3e69b7-config-data-custom\") pod \"barbican-worker-755f78ff99-pb5jr\" (UID: \"8315bc28-3362-4d67-9561-f2b8fa3e69b7\") " pod="openstack/barbican-worker-755f78ff99-pb5jr" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.213935 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ae3d152c-8c19-456d-82a4-184138ae3541-combined-ca-bundle\") pod \"barbican-keystone-listener-7566789bf4-gcgqv\" (UID: \"ae3d152c-8c19-456d-82a4-184138ae3541\") " pod="openstack/barbican-keystone-listener-7566789bf4-gcgqv" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.214838 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae3d152c-8c19-456d-82a4-184138ae3541-config-data-custom\") pod \"barbican-keystone-listener-7566789bf4-gcgqv\" (UID: \"ae3d152c-8c19-456d-82a4-184138ae3541\") " pod="openstack/barbican-keystone-listener-7566789bf4-gcgqv" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.221461 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klpr8\" (UniqueName: \"kubernetes.io/projected/ae3d152c-8c19-456d-82a4-184138ae3541-kube-api-access-klpr8\") pod \"barbican-keystone-listener-7566789bf4-gcgqv\" (UID: \"ae3d152c-8c19-456d-82a4-184138ae3541\") " pod="openstack/barbican-keystone-listener-7566789bf4-gcgqv" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.222958 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8315bc28-3362-4d67-9561-f2b8fa3e69b7-combined-ca-bundle\") pod \"barbican-worker-755f78ff99-pb5jr\" (UID: \"8315bc28-3362-4d67-9561-f2b8fa3e69b7\") " pod="openstack/barbican-worker-755f78ff99-pb5jr" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.237854 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8s5v\" (UniqueName: \"kubernetes.io/projected/8315bc28-3362-4d67-9561-f2b8fa3e69b7-kube-api-access-s8s5v\") pod \"barbican-worker-755f78ff99-pb5jr\" (UID: \"8315bc28-3362-4d67-9561-f2b8fa3e69b7\") " pod="openstack/barbican-worker-755f78ff99-pb5jr" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.278378 4897 generic.go:334] "Generic (PLEG): container 
finished" podID="ea8b2284-fafa-4fca-b367-9cffa5f5a201" containerID="2424a9ab6a1e9b317522faf4cc0d53b4167eb8549d760e1763e2c7b49b7c5d84" exitCode=143 Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.278440 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"ea8b2284-fafa-4fca-b367-9cffa5f5a201","Type":"ContainerDied","Data":"2424a9ab6a1e9b317522faf4cc0d53b4167eb8549d760e1763e2c7b49b7c5d84"} Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.281747 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-h59fj" event={"ID":"bd9edcf1-516a-46a6-a77b-5061505a58d7","Type":"ContainerStarted","Data":"9bc5205a83a60702942ea03fd3eb5c1cbeb80fe2a535067733da00a4a5792087"} Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.282136 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-755f78ff99-pb5jr" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.286029 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-778b749bdb-bmqwf" event={"ID":"a2f1a9fc-a42b-488a-a7a6-207157fd1205","Type":"ContainerStarted","Data":"17cc641016764fa53013b272535bb8606467e834c0d62dfd50c983e1d53e2539"} Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.286064 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-778b749bdb-bmqwf" event={"ID":"a2f1a9fc-a42b-488a-a7a6-207157fd1205","Type":"ContainerStarted","Data":"f33f1fe33387db7f53f2b4843455c0ec8a67d8555f802fe4daadc4ee4407aa86"} Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.286135 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-778b749bdb-bmqwf" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.286206 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-778b749bdb-bmqwf" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.295564 4897 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23ceaea1-59f2-4be2-80bf-176368f401d7-ovsdbserver-sb\") pod \"dnsmasq-dns-59bbf6bdfc-r6m8d\" (UID: \"23ceaea1-59f2-4be2-80bf-176368f401d7\") " pod="openstack/dnsmasq-dns-59bbf6bdfc-r6m8d" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.295615 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cff2212b-2ce8-42ab-85b4-6d4d9789c14b-config-data-custom\") pod \"barbican-api-74cc48945b-m8vv6\" (UID: \"cff2212b-2ce8-42ab-85b4-6d4d9789c14b\") " pod="openstack/barbican-api-74cc48945b-m8vv6" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.295637 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23ceaea1-59f2-4be2-80bf-176368f401d7-dns-svc\") pod \"dnsmasq-dns-59bbf6bdfc-r6m8d\" (UID: \"23ceaea1-59f2-4be2-80bf-176368f401d7\") " pod="openstack/dnsmasq-dns-59bbf6bdfc-r6m8d" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.295670 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23ceaea1-59f2-4be2-80bf-176368f401d7-config\") pod \"dnsmasq-dns-59bbf6bdfc-r6m8d\" (UID: \"23ceaea1-59f2-4be2-80bf-176368f401d7\") " pod="openstack/dnsmasq-dns-59bbf6bdfc-r6m8d" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.295698 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cff2212b-2ce8-42ab-85b4-6d4d9789c14b-logs\") pod \"barbican-api-74cc48945b-m8vv6\" (UID: \"cff2212b-2ce8-42ab-85b4-6d4d9789c14b\") " pod="openstack/barbican-api-74cc48945b-m8vv6" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.295740 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-zp6bq\" (UniqueName: \"kubernetes.io/projected/23ceaea1-59f2-4be2-80bf-176368f401d7-kube-api-access-zp6bq\") pod \"dnsmasq-dns-59bbf6bdfc-r6m8d\" (UID: \"23ceaea1-59f2-4be2-80bf-176368f401d7\") " pod="openstack/dnsmasq-dns-59bbf6bdfc-r6m8d" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.295758 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cff2212b-2ce8-42ab-85b4-6d4d9789c14b-combined-ca-bundle\") pod \"barbican-api-74cc48945b-m8vv6\" (UID: \"cff2212b-2ce8-42ab-85b4-6d4d9789c14b\") " pod="openstack/barbican-api-74cc48945b-m8vv6" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.295786 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cff2212b-2ce8-42ab-85b4-6d4d9789c14b-config-data\") pod \"barbican-api-74cc48945b-m8vv6\" (UID: \"cff2212b-2ce8-42ab-85b4-6d4d9789c14b\") " pod="openstack/barbican-api-74cc48945b-m8vv6" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.295803 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2c2w\" (UniqueName: \"kubernetes.io/projected/cff2212b-2ce8-42ab-85b4-6d4d9789c14b-kube-api-access-n2c2w\") pod \"barbican-api-74cc48945b-m8vv6\" (UID: \"cff2212b-2ce8-42ab-85b4-6d4d9789c14b\") " pod="openstack/barbican-api-74cc48945b-m8vv6" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.295837 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23ceaea1-59f2-4be2-80bf-176368f401d7-ovsdbserver-nb\") pod \"dnsmasq-dns-59bbf6bdfc-r6m8d\" (UID: \"23ceaea1-59f2-4be2-80bf-176368f401d7\") " pod="openstack/dnsmasq-dns-59bbf6bdfc-r6m8d" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.295863 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/23ceaea1-59f2-4be2-80bf-176368f401d7-dns-swift-storage-0\") pod \"dnsmasq-dns-59bbf6bdfc-r6m8d\" (UID: \"23ceaea1-59f2-4be2-80bf-176368f401d7\") " pod="openstack/dnsmasq-dns-59bbf6bdfc-r6m8d" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.297456 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23ceaea1-59f2-4be2-80bf-176368f401d7-dns-swift-storage-0\") pod \"dnsmasq-dns-59bbf6bdfc-r6m8d\" (UID: \"23ceaea1-59f2-4be2-80bf-176368f401d7\") " pod="openstack/dnsmasq-dns-59bbf6bdfc-r6m8d" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.301239 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-h59fj" podStartSLOduration=4.292619891 podStartE2EDuration="52.301220251s" podCreationTimestamp="2026-02-28 13:37:42 +0000 UTC" firstStartedPulling="2026-02-28 13:37:44.502793524 +0000 UTC m=+1278.745114181" lastFinishedPulling="2026-02-28 13:38:32.511393884 +0000 UTC m=+1326.753714541" observedRunningTime="2026-02-28 13:38:34.295599467 +0000 UTC m=+1328.537920124" watchObservedRunningTime="2026-02-28 13:38:34.301220251 +0000 UTC m=+1328.543540908" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.301601 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23ceaea1-59f2-4be2-80bf-176368f401d7-ovsdbserver-sb\") pod \"dnsmasq-dns-59bbf6bdfc-r6m8d\" (UID: \"23ceaea1-59f2-4be2-80bf-176368f401d7\") " pod="openstack/dnsmasq-dns-59bbf6bdfc-r6m8d" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.303100 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23ceaea1-59f2-4be2-80bf-176368f401d7-dns-svc\") pod \"dnsmasq-dns-59bbf6bdfc-r6m8d\" (UID: \"23ceaea1-59f2-4be2-80bf-176368f401d7\") " pod="openstack/dnsmasq-dns-59bbf6bdfc-r6m8d" Feb 28 
13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.303508 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23ceaea1-59f2-4be2-80bf-176368f401d7-ovsdbserver-nb\") pod \"dnsmasq-dns-59bbf6bdfc-r6m8d\" (UID: \"23ceaea1-59f2-4be2-80bf-176368f401d7\") " pod="openstack/dnsmasq-dns-59bbf6bdfc-r6m8d" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.303668 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23ceaea1-59f2-4be2-80bf-176368f401d7-config\") pod \"dnsmasq-dns-59bbf6bdfc-r6m8d\" (UID: \"23ceaea1-59f2-4be2-80bf-176368f401d7\") " pod="openstack/dnsmasq-dns-59bbf6bdfc-r6m8d" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.305010 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cff2212b-2ce8-42ab-85b4-6d4d9789c14b-config-data-custom\") pod \"barbican-api-74cc48945b-m8vv6\" (UID: \"cff2212b-2ce8-42ab-85b4-6d4d9789c14b\") " pod="openstack/barbican-api-74cc48945b-m8vv6" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.306991 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cff2212b-2ce8-42ab-85b4-6d4d9789c14b-logs\") pod \"barbican-api-74cc48945b-m8vv6\" (UID: \"cff2212b-2ce8-42ab-85b4-6d4d9789c14b\") " pod="openstack/barbican-api-74cc48945b-m8vv6" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.307892 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cff2212b-2ce8-42ab-85b4-6d4d9789c14b-combined-ca-bundle\") pod \"barbican-api-74cc48945b-m8vv6\" (UID: \"cff2212b-2ce8-42ab-85b4-6d4d9789c14b\") " pod="openstack/barbican-api-74cc48945b-m8vv6" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.308440 4897 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cff2212b-2ce8-42ab-85b4-6d4d9789c14b-config-data\") pod \"barbican-api-74cc48945b-m8vv6\" (UID: \"cff2212b-2ce8-42ab-85b4-6d4d9789c14b\") " pod="openstack/barbican-api-74cc48945b-m8vv6" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.324010 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp6bq\" (UniqueName: \"kubernetes.io/projected/23ceaea1-59f2-4be2-80bf-176368f401d7-kube-api-access-zp6bq\") pod \"dnsmasq-dns-59bbf6bdfc-r6m8d\" (UID: \"23ceaea1-59f2-4be2-80bf-176368f401d7\") " pod="openstack/dnsmasq-dns-59bbf6bdfc-r6m8d" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.329822 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2c2w\" (UniqueName: \"kubernetes.io/projected/cff2212b-2ce8-42ab-85b4-6d4d9789c14b-kube-api-access-n2c2w\") pod \"barbican-api-74cc48945b-m8vv6\" (UID: \"cff2212b-2ce8-42ab-85b4-6d4d9789c14b\") " pod="openstack/barbican-api-74cc48945b-m8vv6" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.330789 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7566789bf4-gcgqv" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.352796 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-778b749bdb-bmqwf" podStartSLOduration=4.340285445 podStartE2EDuration="4.340285445s" podCreationTimestamp="2026-02-28 13:38:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:38:34.326648745 +0000 UTC m=+1328.568969402" watchObservedRunningTime="2026-02-28 13:38:34.340285445 +0000 UTC m=+1328.582606102" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.396773 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59bbf6bdfc-r6m8d" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.435827 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-74cc48945b-m8vv6" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.489130 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23dda98f-2840-432f-876f-e180110c6c12" path="/var/lib/kubelet/pods/23dda98f-2840-432f-876f-e180110c6c12/volumes" Feb 28 13:38:34 crc kubenswrapper[4897]: W0228 13:38:34.761494 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8315bc28_3362_4d67_9561_f2b8fa3e69b7.slice/crio-37d9099ae17089a3650bb7972a7f66b4caab7d51097caa2325aa5eca84cc97ba WatchSource:0}: Error finding container 37d9099ae17089a3650bb7972a7f66b4caab7d51097caa2325aa5eca84cc97ba: Status 404 returned error can't find the container with id 37d9099ae17089a3650bb7972a7f66b4caab7d51097caa2325aa5eca84cc97ba Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.762089 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-755f78ff99-pb5jr"] Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.800400 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="ea8b2284-fafa-4fca-b367-9cffa5f5a201" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.167:9322/\": read tcp 10.217.0.2:59860->10.217.0.167:9322: read: connection reset by peer" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.800967 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="ea8b2284-fafa-4fca-b367-9cffa5f5a201" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.167:9322/\": read tcp 10.217.0.2:59864->10.217.0.167:9322: read: connection reset by peer" Feb 28 13:38:34 crc kubenswrapper[4897]: E0228 
13:38:34.836289 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)" image="registry.redhat.io/ubi9/httpd-24:latest" Feb 28 13:38:34 crc kubenswrapper[4897]: E0228 13:38:34.836543 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dkf44,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(d7aff986-c99b-43a7-afc8-b9194ce17385): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 13:38:34 crc kubenswrapper[4897]: E0228 13:38:34.837763 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)\"" pod="openstack/ceilometer-0" 
podUID="d7aff986-c99b-43a7-afc8-b9194ce17385" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.903007 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.903124 4897 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.909599 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 28 13:38:34 crc kubenswrapper[4897]: I0228 13:38:34.924493 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7566789bf4-gcgqv"] Feb 28 13:38:34 crc kubenswrapper[4897]: W0228 13:38:34.926757 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae3d152c_8c19_456d_82a4_184138ae3541.slice/crio-fef888422afa0c9d557fb4470942604cd9e177c6ab5c08e23c1c391bf47cdf67 WatchSource:0}: Error finding container fef888422afa0c9d557fb4470942604cd9e177c6ab5c08e23c1c391bf47cdf67: Status 404 returned error can't find the container with id fef888422afa0c9d557fb4470942604cd9e177c6ab5c08e23c1c391bf47cdf67 Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.044999 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59bbf6bdfc-r6m8d"] Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.054567 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-74cc48945b-m8vv6"] Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.249159 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.356272 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59bbf6bdfc-r6m8d" event={"ID":"23ceaea1-59f2-4be2-80bf-176368f401d7","Type":"ContainerStarted","Data":"0d0aaf491338cf337bd6df6d1ff4989762cee896d840f67e40141aa8e66029f2"} Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.365417 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7566789bf4-gcgqv" event={"ID":"ae3d152c-8c19-456d-82a4-184138ae3541","Type":"ContainerStarted","Data":"fef888422afa0c9d557fb4470942604cd9e177c6ab5c08e23c1c391bf47cdf67"} Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.373990 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-755f78ff99-pb5jr" event={"ID":"8315bc28-3362-4d67-9561-f2b8fa3e69b7","Type":"ContainerStarted","Data":"37d9099ae17089a3650bb7972a7f66b4caab7d51097caa2325aa5eca84cc97ba"} Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.393766 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74cc48945b-m8vv6" event={"ID":"cff2212b-2ce8-42ab-85b4-6d4d9789c14b","Type":"ContainerStarted","Data":"28d009f1c31acbfcc0baa2d910ef19bdaaee9f6c8c556c2ddd196290c50e6d17"} Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.419142 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea8b2284-fafa-4fca-b367-9cffa5f5a201-logs\") pod \"ea8b2284-fafa-4fca-b367-9cffa5f5a201\" (UID: \"ea8b2284-fafa-4fca-b367-9cffa5f5a201\") " Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.419372 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7skz\" (UniqueName: \"kubernetes.io/projected/ea8b2284-fafa-4fca-b367-9cffa5f5a201-kube-api-access-s7skz\") pod \"ea8b2284-fafa-4fca-b367-9cffa5f5a201\" (UID: 
\"ea8b2284-fafa-4fca-b367-9cffa5f5a201\") " Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.419648 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea8b2284-fafa-4fca-b367-9cffa5f5a201-combined-ca-bundle\") pod \"ea8b2284-fafa-4fca-b367-9cffa5f5a201\" (UID: \"ea8b2284-fafa-4fca-b367-9cffa5f5a201\") " Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.419682 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea8b2284-fafa-4fca-b367-9cffa5f5a201-config-data\") pod \"ea8b2284-fafa-4fca-b367-9cffa5f5a201\" (UID: \"ea8b2284-fafa-4fca-b367-9cffa5f5a201\") " Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.421253 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea8b2284-fafa-4fca-b367-9cffa5f5a201-logs" (OuterVolumeSpecName: "logs") pod "ea8b2284-fafa-4fca-b367-9cffa5f5a201" (UID: "ea8b2284-fafa-4fca-b367-9cffa5f5a201"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.421714 4897 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea8b2284-fafa-4fca-b367-9cffa5f5a201-logs\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.426672 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea8b2284-fafa-4fca-b367-9cffa5f5a201-kube-api-access-s7skz" (OuterVolumeSpecName: "kube-api-access-s7skz") pod "ea8b2284-fafa-4fca-b367-9cffa5f5a201" (UID: "ea8b2284-fafa-4fca-b367-9cffa5f5a201"). InnerVolumeSpecName "kube-api-access-s7skz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.430187 4897 generic.go:334] "Generic (PLEG): container finished" podID="ea8b2284-fafa-4fca-b367-9cffa5f5a201" containerID="b5d29fd8ec7fe249a3ec203e4cef0d21dacdd74c9da0ed6aba9a314f424d8328" exitCode=0 Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.430817 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d7aff986-c99b-43a7-afc8-b9194ce17385" containerName="ceilometer-central-agent" containerID="cri-o://b8c42906a6f7073722e82fc7f395ccaac5c92a998f9d6922a6d37271f20323d1" gracePeriod=30 Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.431350 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d7aff986-c99b-43a7-afc8-b9194ce17385" containerName="sg-core" containerID="cri-o://8afe08a8012900b93dcc91888cfb0570e04eb144f0c8e5affa8382e765241f75" gracePeriod=30 Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.431375 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.431420 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d7aff986-c99b-43a7-afc8-b9194ce17385" containerName="ceilometer-notification-agent" containerID="cri-o://9e34f2e6495c1fe81abdf0d50a40373b2771813b4d6d5371f7966b9865bf9d36" gracePeriod=30 Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.431529 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"ea8b2284-fafa-4fca-b367-9cffa5f5a201","Type":"ContainerDied","Data":"b5d29fd8ec7fe249a3ec203e4cef0d21dacdd74c9da0ed6aba9a314f424d8328"} Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.431564 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"ea8b2284-fafa-4fca-b367-9cffa5f5a201","Type":"ContainerDied","Data":"5939462f6ecca18c2b701afa47d112f5ae5630f4889b31ef9076ebedcc429c19"} Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.431584 4897 scope.go:117] "RemoveContainer" containerID="b5d29fd8ec7fe249a3ec203e4cef0d21dacdd74c9da0ed6aba9a314f424d8328" Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.467572 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea8b2284-fafa-4fca-b367-9cffa5f5a201-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea8b2284-fafa-4fca-b367-9cffa5f5a201" (UID: "ea8b2284-fafa-4fca-b367-9cffa5f5a201"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:35 crc kubenswrapper[4897]: E0228 13:38:35.468157 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.523120 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea8b2284-fafa-4fca-b367-9cffa5f5a201-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.523146 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7skz\" (UniqueName: \"kubernetes.io/projected/ea8b2284-fafa-4fca-b367-9cffa5f5a201-kube-api-access-s7skz\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.542759 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea8b2284-fafa-4fca-b367-9cffa5f5a201-config-data" (OuterVolumeSpecName: "config-data") pod "ea8b2284-fafa-4fca-b367-9cffa5f5a201" (UID: "ea8b2284-fafa-4fca-b367-9cffa5f5a201"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.625460 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea8b2284-fafa-4fca-b367-9cffa5f5a201-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.671355 4897 scope.go:117] "RemoveContainer" containerID="2424a9ab6a1e9b317522faf4cc0d53b4167eb8549d760e1763e2c7b49b7c5d84" Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.769286 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.784481 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.797926 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 28 13:38:35 crc kubenswrapper[4897]: E0228 13:38:35.798465 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea8b2284-fafa-4fca-b367-9cffa5f5a201" containerName="watcher-api-log" Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.798485 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea8b2284-fafa-4fca-b367-9cffa5f5a201" containerName="watcher-api-log" Feb 28 13:38:35 crc kubenswrapper[4897]: E0228 13:38:35.798521 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea8b2284-fafa-4fca-b367-9cffa5f5a201" containerName="watcher-api" Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.798528 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea8b2284-fafa-4fca-b367-9cffa5f5a201" containerName="watcher-api" Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.798713 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea8b2284-fafa-4fca-b367-9cffa5f5a201" containerName="watcher-api" Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.798727 4897 
memory_manager.go:354] "RemoveStaleState removing state" podUID="ea8b2284-fafa-4fca-b367-9cffa5f5a201" containerName="watcher-api-log" Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.799718 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.809096 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.809230 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.809373 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.829356 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.934224 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4814ed83-bcac-465c-aaf6-b2acde9b0e13-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"4814ed83-bcac-465c-aaf6-b2acde9b0e13\") " pod="openstack/watcher-api-0" Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.934392 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx7xs\" (UniqueName: \"kubernetes.io/projected/4814ed83-bcac-465c-aaf6-b2acde9b0e13-kube-api-access-gx7xs\") pod \"watcher-api-0\" (UID: \"4814ed83-bcac-465c-aaf6-b2acde9b0e13\") " pod="openstack/watcher-api-0" Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.934435 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4814ed83-bcac-465c-aaf6-b2acde9b0e13-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"4814ed83-bcac-465c-aaf6-b2acde9b0e13\") " pod="openstack/watcher-api-0" Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.934459 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4814ed83-bcac-465c-aaf6-b2acde9b0e13-logs\") pod \"watcher-api-0\" (UID: \"4814ed83-bcac-465c-aaf6-b2acde9b0e13\") " pod="openstack/watcher-api-0" Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.934503 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4814ed83-bcac-465c-aaf6-b2acde9b0e13-public-tls-certs\") pod \"watcher-api-0\" (UID: \"4814ed83-bcac-465c-aaf6-b2acde9b0e13\") " pod="openstack/watcher-api-0" Feb 28 13:38:35 crc kubenswrapper[4897]: I0228 13:38:35.934519 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4814ed83-bcac-465c-aaf6-b2acde9b0e13-config-data\") pod \"watcher-api-0\" (UID: \"4814ed83-bcac-465c-aaf6-b2acde9b0e13\") " pod="openstack/watcher-api-0" Feb 28 13:38:36 crc kubenswrapper[4897]: I0228 13:38:36.036209 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4814ed83-bcac-465c-aaf6-b2acde9b0e13-public-tls-certs\") pod \"watcher-api-0\" (UID: \"4814ed83-bcac-465c-aaf6-b2acde9b0e13\") " pod="openstack/watcher-api-0" Feb 28 13:38:36 crc kubenswrapper[4897]: I0228 13:38:36.036262 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4814ed83-bcac-465c-aaf6-b2acde9b0e13-config-data\") pod \"watcher-api-0\" (UID: \"4814ed83-bcac-465c-aaf6-b2acde9b0e13\") " pod="openstack/watcher-api-0" 
Feb 28 13:38:36 crc kubenswrapper[4897]: I0228 13:38:36.036326 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4814ed83-bcac-465c-aaf6-b2acde9b0e13-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"4814ed83-bcac-465c-aaf6-b2acde9b0e13\") " pod="openstack/watcher-api-0" Feb 28 13:38:36 crc kubenswrapper[4897]: I0228 13:38:36.037176 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx7xs\" (UniqueName: \"kubernetes.io/projected/4814ed83-bcac-465c-aaf6-b2acde9b0e13-kube-api-access-gx7xs\") pod \"watcher-api-0\" (UID: \"4814ed83-bcac-465c-aaf6-b2acde9b0e13\") " pod="openstack/watcher-api-0" Feb 28 13:38:36 crc kubenswrapper[4897]: I0228 13:38:36.037226 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4814ed83-bcac-465c-aaf6-b2acde9b0e13-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"4814ed83-bcac-465c-aaf6-b2acde9b0e13\") " pod="openstack/watcher-api-0" Feb 28 13:38:36 crc kubenswrapper[4897]: I0228 13:38:36.037249 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4814ed83-bcac-465c-aaf6-b2acde9b0e13-logs\") pod \"watcher-api-0\" (UID: \"4814ed83-bcac-465c-aaf6-b2acde9b0e13\") " pod="openstack/watcher-api-0" Feb 28 13:38:36 crc kubenswrapper[4897]: I0228 13:38:36.037592 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4814ed83-bcac-465c-aaf6-b2acde9b0e13-logs\") pod \"watcher-api-0\" (UID: \"4814ed83-bcac-465c-aaf6-b2acde9b0e13\") " pod="openstack/watcher-api-0" Feb 28 13:38:36 crc kubenswrapper[4897]: I0228 13:38:36.041219 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4814ed83-bcac-465c-aaf6-b2acde9b0e13-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"4814ed83-bcac-465c-aaf6-b2acde9b0e13\") " pod="openstack/watcher-api-0" Feb 28 13:38:36 crc kubenswrapper[4897]: I0228 13:38:36.041581 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4814ed83-bcac-465c-aaf6-b2acde9b0e13-public-tls-certs\") pod \"watcher-api-0\" (UID: \"4814ed83-bcac-465c-aaf6-b2acde9b0e13\") " pod="openstack/watcher-api-0" Feb 28 13:38:36 crc kubenswrapper[4897]: I0228 13:38:36.043992 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4814ed83-bcac-465c-aaf6-b2acde9b0e13-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"4814ed83-bcac-465c-aaf6-b2acde9b0e13\") " pod="openstack/watcher-api-0" Feb 28 13:38:36 crc kubenswrapper[4897]: I0228 13:38:36.045043 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4814ed83-bcac-465c-aaf6-b2acde9b0e13-config-data\") pod \"watcher-api-0\" (UID: \"4814ed83-bcac-465c-aaf6-b2acde9b0e13\") " pod="openstack/watcher-api-0" Feb 28 13:38:36 crc kubenswrapper[4897]: I0228 13:38:36.057478 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx7xs\" (UniqueName: \"kubernetes.io/projected/4814ed83-bcac-465c-aaf6-b2acde9b0e13-kube-api-access-gx7xs\") pod \"watcher-api-0\" (UID: \"4814ed83-bcac-465c-aaf6-b2acde9b0e13\") " pod="openstack/watcher-api-0" Feb 28 13:38:36 crc kubenswrapper[4897]: I0228 13:38:36.125956 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 28 13:38:36 crc kubenswrapper[4897]: I0228 13:38:36.486478 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea8b2284-fafa-4fca-b367-9cffa5f5a201" path="/var/lib/kubelet/pods/ea8b2284-fafa-4fca-b367-9cffa5f5a201/volumes" Feb 28 13:38:36 crc kubenswrapper[4897]: I0228 13:38:36.490577 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74cc48945b-m8vv6" event={"ID":"cff2212b-2ce8-42ab-85b4-6d4d9789c14b","Type":"ContainerStarted","Data":"79d0ded566fcb10c322d4dd8298598418e4b7ed3acf7ac070dbd3e2c2e3592b1"} Feb 28 13:38:36 crc kubenswrapper[4897]: I0228 13:38:36.490643 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74cc48945b-m8vv6" event={"ID":"cff2212b-2ce8-42ab-85b4-6d4d9789c14b","Type":"ContainerStarted","Data":"f28af7de6664087fc94324df7fe2eb399dbad90a716dbe5db62639a3715c0f1b"} Feb 28 13:38:36 crc kubenswrapper[4897]: I0228 13:38:36.490881 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-74cc48945b-m8vv6" Feb 28 13:38:36 crc kubenswrapper[4897]: I0228 13:38:36.490990 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-74cc48945b-m8vv6" Feb 28 13:38:36 crc kubenswrapper[4897]: I0228 13:38:36.546747 4897 generic.go:334] "Generic (PLEG): container finished" podID="23ceaea1-59f2-4be2-80bf-176368f401d7" containerID="30f7823fb2f475aa35a3176b59274a60682420c062201452f41773c3ec105492" exitCode=0 Feb 28 13:38:36 crc kubenswrapper[4897]: I0228 13:38:36.546811 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59bbf6bdfc-r6m8d" event={"ID":"23ceaea1-59f2-4be2-80bf-176368f401d7","Type":"ContainerDied","Data":"30f7823fb2f475aa35a3176b59274a60682420c062201452f41773c3ec105492"} Feb 28 13:38:36 crc kubenswrapper[4897]: I0228 13:38:36.576575 4897 generic.go:334] "Generic (PLEG): container finished" 
podID="d7aff986-c99b-43a7-afc8-b9194ce17385" containerID="8afe08a8012900b93dcc91888cfb0570e04eb144f0c8e5affa8382e765241f75" exitCode=2 Feb 28 13:38:36 crc kubenswrapper[4897]: I0228 13:38:36.576828 4897 generic.go:334] "Generic (PLEG): container finished" podID="d7aff986-c99b-43a7-afc8-b9194ce17385" containerID="b8c42906a6f7073722e82fc7f395ccaac5c92a998f9d6922a6d37271f20323d1" exitCode=0 Feb 28 13:38:36 crc kubenswrapper[4897]: I0228 13:38:36.576873 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7aff986-c99b-43a7-afc8-b9194ce17385","Type":"ContainerDied","Data":"8afe08a8012900b93dcc91888cfb0570e04eb144f0c8e5affa8382e765241f75"} Feb 28 13:38:36 crc kubenswrapper[4897]: I0228 13:38:36.577008 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7aff986-c99b-43a7-afc8-b9194ce17385","Type":"ContainerDied","Data":"b8c42906a6f7073722e82fc7f395ccaac5c92a998f9d6922a6d37271f20323d1"} Feb 28 13:38:36 crc kubenswrapper[4897]: I0228 13:38:36.737038 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-74cc48945b-m8vv6" podStartSLOduration=2.737019461 podStartE2EDuration="2.737019461s" podCreationTimestamp="2026-02-28 13:38:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:38:36.713757704 +0000 UTC m=+1330.956078381" watchObservedRunningTime="2026-02-28 13:38:36.737019461 +0000 UTC m=+1330.979340118" Feb 28 13:38:37 crc kubenswrapper[4897]: I0228 13:38:37.055675 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6cc5d7cb8-nws5v"] Feb 28 13:38:37 crc kubenswrapper[4897]: I0228 13:38:37.057834 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6cc5d7cb8-nws5v" Feb 28 13:38:37 crc kubenswrapper[4897]: I0228 13:38:37.062570 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 28 13:38:37 crc kubenswrapper[4897]: I0228 13:38:37.065432 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 28 13:38:37 crc kubenswrapper[4897]: I0228 13:38:37.077078 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6cc5d7cb8-nws5v"] Feb 28 13:38:37 crc kubenswrapper[4897]: I0228 13:38:37.158046 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-822hh\" (UniqueName: \"kubernetes.io/projected/d2375f60-8d95-4855-ace5-ecbfadb87114-kube-api-access-822hh\") pod \"barbican-api-6cc5d7cb8-nws5v\" (UID: \"d2375f60-8d95-4855-ace5-ecbfadb87114\") " pod="openstack/barbican-api-6cc5d7cb8-nws5v" Feb 28 13:38:37 crc kubenswrapper[4897]: I0228 13:38:37.158411 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2375f60-8d95-4855-ace5-ecbfadb87114-config-data-custom\") pod \"barbican-api-6cc5d7cb8-nws5v\" (UID: \"d2375f60-8d95-4855-ace5-ecbfadb87114\") " pod="openstack/barbican-api-6cc5d7cb8-nws5v" Feb 28 13:38:37 crc kubenswrapper[4897]: I0228 13:38:37.158435 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2375f60-8d95-4855-ace5-ecbfadb87114-logs\") pod \"barbican-api-6cc5d7cb8-nws5v\" (UID: \"d2375f60-8d95-4855-ace5-ecbfadb87114\") " pod="openstack/barbican-api-6cc5d7cb8-nws5v" Feb 28 13:38:37 crc kubenswrapper[4897]: I0228 13:38:37.158598 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d2375f60-8d95-4855-ace5-ecbfadb87114-config-data\") pod \"barbican-api-6cc5d7cb8-nws5v\" (UID: \"d2375f60-8d95-4855-ace5-ecbfadb87114\") " pod="openstack/barbican-api-6cc5d7cb8-nws5v" Feb 28 13:38:37 crc kubenswrapper[4897]: I0228 13:38:37.158630 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2375f60-8d95-4855-ace5-ecbfadb87114-public-tls-certs\") pod \"barbican-api-6cc5d7cb8-nws5v\" (UID: \"d2375f60-8d95-4855-ace5-ecbfadb87114\") " pod="openstack/barbican-api-6cc5d7cb8-nws5v" Feb 28 13:38:37 crc kubenswrapper[4897]: I0228 13:38:37.158655 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2375f60-8d95-4855-ace5-ecbfadb87114-internal-tls-certs\") pod \"barbican-api-6cc5d7cb8-nws5v\" (UID: \"d2375f60-8d95-4855-ace5-ecbfadb87114\") " pod="openstack/barbican-api-6cc5d7cb8-nws5v" Feb 28 13:38:37 crc kubenswrapper[4897]: I0228 13:38:37.158768 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2375f60-8d95-4855-ace5-ecbfadb87114-combined-ca-bundle\") pod \"barbican-api-6cc5d7cb8-nws5v\" (UID: \"d2375f60-8d95-4855-ace5-ecbfadb87114\") " pod="openstack/barbican-api-6cc5d7cb8-nws5v" Feb 28 13:38:37 crc kubenswrapper[4897]: I0228 13:38:37.260527 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2375f60-8d95-4855-ace5-ecbfadb87114-combined-ca-bundle\") pod \"barbican-api-6cc5d7cb8-nws5v\" (UID: \"d2375f60-8d95-4855-ace5-ecbfadb87114\") " pod="openstack/barbican-api-6cc5d7cb8-nws5v" Feb 28 13:38:37 crc kubenswrapper[4897]: I0228 13:38:37.260579 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-822hh\" (UniqueName: \"kubernetes.io/projected/d2375f60-8d95-4855-ace5-ecbfadb87114-kube-api-access-822hh\") pod \"barbican-api-6cc5d7cb8-nws5v\" (UID: \"d2375f60-8d95-4855-ace5-ecbfadb87114\") " pod="openstack/barbican-api-6cc5d7cb8-nws5v" Feb 28 13:38:37 crc kubenswrapper[4897]: I0228 13:38:37.260599 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2375f60-8d95-4855-ace5-ecbfadb87114-config-data-custom\") pod \"barbican-api-6cc5d7cb8-nws5v\" (UID: \"d2375f60-8d95-4855-ace5-ecbfadb87114\") " pod="openstack/barbican-api-6cc5d7cb8-nws5v" Feb 28 13:38:37 crc kubenswrapper[4897]: I0228 13:38:37.260618 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2375f60-8d95-4855-ace5-ecbfadb87114-logs\") pod \"barbican-api-6cc5d7cb8-nws5v\" (UID: \"d2375f60-8d95-4855-ace5-ecbfadb87114\") " pod="openstack/barbican-api-6cc5d7cb8-nws5v" Feb 28 13:38:37 crc kubenswrapper[4897]: I0228 13:38:37.260696 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2375f60-8d95-4855-ace5-ecbfadb87114-config-data\") pod \"barbican-api-6cc5d7cb8-nws5v\" (UID: \"d2375f60-8d95-4855-ace5-ecbfadb87114\") " pod="openstack/barbican-api-6cc5d7cb8-nws5v" Feb 28 13:38:37 crc kubenswrapper[4897]: I0228 13:38:37.260713 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2375f60-8d95-4855-ace5-ecbfadb87114-public-tls-certs\") pod \"barbican-api-6cc5d7cb8-nws5v\" (UID: \"d2375f60-8d95-4855-ace5-ecbfadb87114\") " pod="openstack/barbican-api-6cc5d7cb8-nws5v" Feb 28 13:38:37 crc kubenswrapper[4897]: I0228 13:38:37.260728 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d2375f60-8d95-4855-ace5-ecbfadb87114-internal-tls-certs\") pod \"barbican-api-6cc5d7cb8-nws5v\" (UID: \"d2375f60-8d95-4855-ace5-ecbfadb87114\") " pod="openstack/barbican-api-6cc5d7cb8-nws5v" Feb 28 13:38:37 crc kubenswrapper[4897]: I0228 13:38:37.261251 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2375f60-8d95-4855-ace5-ecbfadb87114-logs\") pod \"barbican-api-6cc5d7cb8-nws5v\" (UID: \"d2375f60-8d95-4855-ace5-ecbfadb87114\") " pod="openstack/barbican-api-6cc5d7cb8-nws5v" Feb 28 13:38:37 crc kubenswrapper[4897]: I0228 13:38:37.266467 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2375f60-8d95-4855-ace5-ecbfadb87114-public-tls-certs\") pod \"barbican-api-6cc5d7cb8-nws5v\" (UID: \"d2375f60-8d95-4855-ace5-ecbfadb87114\") " pod="openstack/barbican-api-6cc5d7cb8-nws5v" Feb 28 13:38:37 crc kubenswrapper[4897]: I0228 13:38:37.266781 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2375f60-8d95-4855-ace5-ecbfadb87114-internal-tls-certs\") pod \"barbican-api-6cc5d7cb8-nws5v\" (UID: \"d2375f60-8d95-4855-ace5-ecbfadb87114\") " pod="openstack/barbican-api-6cc5d7cb8-nws5v" Feb 28 13:38:37 crc kubenswrapper[4897]: I0228 13:38:37.266799 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2375f60-8d95-4855-ace5-ecbfadb87114-config-data-custom\") pod \"barbican-api-6cc5d7cb8-nws5v\" (UID: \"d2375f60-8d95-4855-ace5-ecbfadb87114\") " pod="openstack/barbican-api-6cc5d7cb8-nws5v" Feb 28 13:38:37 crc kubenswrapper[4897]: I0228 13:38:37.267518 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2375f60-8d95-4855-ace5-ecbfadb87114-combined-ca-bundle\") pod 
\"barbican-api-6cc5d7cb8-nws5v\" (UID: \"d2375f60-8d95-4855-ace5-ecbfadb87114\") " pod="openstack/barbican-api-6cc5d7cb8-nws5v" Feb 28 13:38:37 crc kubenswrapper[4897]: I0228 13:38:37.268112 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2375f60-8d95-4855-ace5-ecbfadb87114-config-data\") pod \"barbican-api-6cc5d7cb8-nws5v\" (UID: \"d2375f60-8d95-4855-ace5-ecbfadb87114\") " pod="openstack/barbican-api-6cc5d7cb8-nws5v" Feb 28 13:38:37 crc kubenswrapper[4897]: I0228 13:38:37.277960 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-822hh\" (UniqueName: \"kubernetes.io/projected/d2375f60-8d95-4855-ace5-ecbfadb87114-kube-api-access-822hh\") pod \"barbican-api-6cc5d7cb8-nws5v\" (UID: \"d2375f60-8d95-4855-ace5-ecbfadb87114\") " pod="openstack/barbican-api-6cc5d7cb8-nws5v" Feb 28 13:38:37 crc kubenswrapper[4897]: I0228 13:38:37.380561 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6cc5d7cb8-nws5v" Feb 28 13:38:37 crc kubenswrapper[4897]: I0228 13:38:37.543224 4897 scope.go:117] "RemoveContainer" containerID="b5d29fd8ec7fe249a3ec203e4cef0d21dacdd74c9da0ed6aba9a314f424d8328" Feb 28 13:38:37 crc kubenswrapper[4897]: E0228 13:38:37.544007 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5d29fd8ec7fe249a3ec203e4cef0d21dacdd74c9da0ed6aba9a314f424d8328\": container with ID starting with b5d29fd8ec7fe249a3ec203e4cef0d21dacdd74c9da0ed6aba9a314f424d8328 not found: ID does not exist" containerID="b5d29fd8ec7fe249a3ec203e4cef0d21dacdd74c9da0ed6aba9a314f424d8328" Feb 28 13:38:37 crc kubenswrapper[4897]: I0228 13:38:37.544047 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5d29fd8ec7fe249a3ec203e4cef0d21dacdd74c9da0ed6aba9a314f424d8328"} err="failed to get container status \"b5d29fd8ec7fe249a3ec203e4cef0d21dacdd74c9da0ed6aba9a314f424d8328\": rpc error: code = NotFound desc = could not find container \"b5d29fd8ec7fe249a3ec203e4cef0d21dacdd74c9da0ed6aba9a314f424d8328\": container with ID starting with b5d29fd8ec7fe249a3ec203e4cef0d21dacdd74c9da0ed6aba9a314f424d8328 not found: ID does not exist" Feb 28 13:38:37 crc kubenswrapper[4897]: I0228 13:38:37.544071 4897 scope.go:117] "RemoveContainer" containerID="2424a9ab6a1e9b317522faf4cc0d53b4167eb8549d760e1763e2c7b49b7c5d84" Feb 28 13:38:37 crc kubenswrapper[4897]: E0228 13:38:37.544555 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2424a9ab6a1e9b317522faf4cc0d53b4167eb8549d760e1763e2c7b49b7c5d84\": container with ID starting with 2424a9ab6a1e9b317522faf4cc0d53b4167eb8549d760e1763e2c7b49b7c5d84 not found: ID does not exist" containerID="2424a9ab6a1e9b317522faf4cc0d53b4167eb8549d760e1763e2c7b49b7c5d84" Feb 28 13:38:37 crc kubenswrapper[4897]: I0228 
13:38:37.544598 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2424a9ab6a1e9b317522faf4cc0d53b4167eb8549d760e1763e2c7b49b7c5d84"} err="failed to get container status \"2424a9ab6a1e9b317522faf4cc0d53b4167eb8549d760e1763e2c7b49b7c5d84\": rpc error: code = NotFound desc = could not find container \"2424a9ab6a1e9b317522faf4cc0d53b4167eb8549d760e1763e2c7b49b7c5d84\": container with ID starting with 2424a9ab6a1e9b317522faf4cc0d53b4167eb8549d760e1763e2c7b49b7c5d84 not found: ID does not exist" Feb 28 13:38:37 crc kubenswrapper[4897]: I0228 13:38:37.592715 4897 generic.go:334] "Generic (PLEG): container finished" podID="2b88f822-8f2a-473a-b388-b144a37ba4f0" containerID="0dc18714380303e1cafd477176a5042930839874559895094e8ed71f336ecd95" exitCode=1 Feb 28 13:38:37 crc kubenswrapper[4897]: I0228 13:38:37.592794 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"2b88f822-8f2a-473a-b388-b144a37ba4f0","Type":"ContainerDied","Data":"0dc18714380303e1cafd477176a5042930839874559895094e8ed71f336ecd95"} Feb 28 13:38:37 crc kubenswrapper[4897]: I0228 13:38:37.592887 4897 scope.go:117] "RemoveContainer" containerID="26a3f9ee7928f0abd19f88b38a2ed57fd3b52dd242cbfbf71f2750617f194561" Feb 28 13:38:37 crc kubenswrapper[4897]: I0228 13:38:37.593433 4897 scope.go:117] "RemoveContainer" containerID="0dc18714380303e1cafd477176a5042930839874559895094e8ed71f336ecd95" Feb 28 13:38:37 crc kubenswrapper[4897]: E0228 13:38:37.593760 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(2b88f822-8f2a-473a-b388-b144a37ba4f0)\"" pod="openstack/watcher-decision-engine-0" podUID="2b88f822-8f2a-473a-b388-b144a37ba4f0" Feb 28 13:38:38 crc kubenswrapper[4897]: I0228 13:38:38.123167 4897 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 28 13:38:38 crc kubenswrapper[4897]: W0228 13:38:38.150762 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4814ed83_bcac_465c_aaf6_b2acde9b0e13.slice/crio-e5ce18eecd7697eb9fd44fa34eec94c8044a9f3ee062707788ac0005908aa546 WatchSource:0}: Error finding container e5ce18eecd7697eb9fd44fa34eec94c8044a9f3ee062707788ac0005908aa546: Status 404 returned error can't find the container with id e5ce18eecd7697eb9fd44fa34eec94c8044a9f3ee062707788ac0005908aa546 Feb 28 13:38:38 crc kubenswrapper[4897]: I0228 13:38:38.232033 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6cc5d7cb8-nws5v"] Feb 28 13:38:38 crc kubenswrapper[4897]: W0228 13:38:38.237467 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2375f60_8d95_4855_ace5_ecbfadb87114.slice/crio-4aa58357f3c4d8a5b99d29ee4819fd51b64126df7fe04db8c4494a791a8eeb38 WatchSource:0}: Error finding container 4aa58357f3c4d8a5b99d29ee4819fd51b64126df7fe04db8c4494a791a8eeb38: Status 404 returned error can't find the container with id 4aa58357f3c4d8a5b99d29ee4819fd51b64126df7fe04db8c4494a791a8eeb38 Feb 28 13:38:38 crc kubenswrapper[4897]: I0228 13:38:38.607825 4897 generic.go:334] "Generic (PLEG): container finished" podID="d7aff986-c99b-43a7-afc8-b9194ce17385" containerID="9e34f2e6495c1fe81abdf0d50a40373b2771813b4d6d5371f7966b9865bf9d36" exitCode=0 Feb 28 13:38:38 crc kubenswrapper[4897]: I0228 13:38:38.607875 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7aff986-c99b-43a7-afc8-b9194ce17385","Type":"ContainerDied","Data":"9e34f2e6495c1fe81abdf0d50a40373b2771813b4d6d5371f7966b9865bf9d36"} Feb 28 13:38:38 crc kubenswrapper[4897]: I0228 13:38:38.608284 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"d7aff986-c99b-43a7-afc8-b9194ce17385","Type":"ContainerDied","Data":"064dca034838227707564d66d78142ca96bd2b8843684309705195a1b7fa45e6"} Feb 28 13:38:38 crc kubenswrapper[4897]: I0228 13:38:38.608296 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="064dca034838227707564d66d78142ca96bd2b8843684309705195a1b7fa45e6" Feb 28 13:38:38 crc kubenswrapper[4897]: I0228 13:38:38.611413 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7566789bf4-gcgqv" event={"ID":"ae3d152c-8c19-456d-82a4-184138ae3541","Type":"ContainerStarted","Data":"7d87156eb0c8faf0730fa54a7e5d48277eae4b2e8a7f5dbcf6703ee66ec46853"} Feb 28 13:38:38 crc kubenswrapper[4897]: I0228 13:38:38.611438 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7566789bf4-gcgqv" event={"ID":"ae3d152c-8c19-456d-82a4-184138ae3541","Type":"ContainerStarted","Data":"dfc7f2bdb02cc44902dd3bb6360ce660d53f597d2b90668477e9f30ec709134e"} Feb 28 13:38:38 crc kubenswrapper[4897]: I0228 13:38:38.620080 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"4814ed83-bcac-465c-aaf6-b2acde9b0e13","Type":"ContainerStarted","Data":"1c377f01c0e08e63f24dd7d5fda5daadcf629bd0a0d7ee79b18080e68d14d1c3"} Feb 28 13:38:38 crc kubenswrapper[4897]: I0228 13:38:38.620122 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"4814ed83-bcac-465c-aaf6-b2acde9b0e13","Type":"ContainerStarted","Data":"e5ce18eecd7697eb9fd44fa34eec94c8044a9f3ee062707788ac0005908aa546"} Feb 28 13:38:38 crc kubenswrapper[4897]: I0228 13:38:38.622163 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-755f78ff99-pb5jr" event={"ID":"8315bc28-3362-4d67-9561-f2b8fa3e69b7","Type":"ContainerStarted","Data":"a5f448d7c276bedbe084a79e5fa7e9e4532664ed7cc1c4482a9ccc9beb800f3b"} Feb 28 13:38:38 crc 
kubenswrapper[4897]: I0228 13:38:38.622231 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-755f78ff99-pb5jr" event={"ID":"8315bc28-3362-4d67-9561-f2b8fa3e69b7","Type":"ContainerStarted","Data":"8c98edaaee90d28f80f059f4bbff1fa8cb780b34c155ca3866b2dea86ebff877"} Feb 28 13:38:38 crc kubenswrapper[4897]: I0228 13:38:38.630965 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cc5d7cb8-nws5v" event={"ID":"d2375f60-8d95-4855-ace5-ecbfadb87114","Type":"ContainerStarted","Data":"5f841db5ea47e94b737c2bb6c5c532b7c068e914c01c597f626f7b7fa701a116"} Feb 28 13:38:38 crc kubenswrapper[4897]: I0228 13:38:38.631218 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cc5d7cb8-nws5v" event={"ID":"d2375f60-8d95-4855-ace5-ecbfadb87114","Type":"ContainerStarted","Data":"4aa58357f3c4d8a5b99d29ee4819fd51b64126df7fe04db8c4494a791a8eeb38"} Feb 28 13:38:38 crc kubenswrapper[4897]: I0228 13:38:38.635726 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59bbf6bdfc-r6m8d" event={"ID":"23ceaea1-59f2-4be2-80bf-176368f401d7","Type":"ContainerStarted","Data":"6ded0dcbb72dad3a71c10215f27757095134e6f52e9725f8c9030536abe854ec"} Feb 28 13:38:38 crc kubenswrapper[4897]: I0228 13:38:38.636034 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59bbf6bdfc-r6m8d" Feb 28 13:38:38 crc kubenswrapper[4897]: I0228 13:38:38.640659 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7566789bf4-gcgqv" podStartSLOduration=2.900279488 podStartE2EDuration="5.640637832s" podCreationTimestamp="2026-02-28 13:38:33 +0000 UTC" firstStartedPulling="2026-02-28 13:38:34.933851602 +0000 UTC m=+1329.176172259" lastFinishedPulling="2026-02-28 13:38:37.674209946 +0000 UTC m=+1331.916530603" observedRunningTime="2026-02-28 13:38:38.631025455 +0000 UTC m=+1332.873346112" 
watchObservedRunningTime="2026-02-28 13:38:38.640637832 +0000 UTC m=+1332.882958489" Feb 28 13:38:38 crc kubenswrapper[4897]: I0228 13:38:38.655772 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-755f78ff99-pb5jr" podStartSLOduration=2.749653259 podStartE2EDuration="5.65575371s" podCreationTimestamp="2026-02-28 13:38:33 +0000 UTC" firstStartedPulling="2026-02-28 13:38:34.771493632 +0000 UTC m=+1329.013814289" lastFinishedPulling="2026-02-28 13:38:37.677594083 +0000 UTC m=+1331.919914740" observedRunningTime="2026-02-28 13:38:38.648067273 +0000 UTC m=+1332.890387930" watchObservedRunningTime="2026-02-28 13:38:38.65575371 +0000 UTC m=+1332.898074367" Feb 28 13:38:38 crc kubenswrapper[4897]: I0228 13:38:38.677929 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59bbf6bdfc-r6m8d" podStartSLOduration=4.677639152 podStartE2EDuration="4.677639152s" podCreationTimestamp="2026-02-28 13:38:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:38:38.677034717 +0000 UTC m=+1332.919355394" watchObservedRunningTime="2026-02-28 13:38:38.677639152 +0000 UTC m=+1332.919959809" Feb 28 13:38:38 crc kubenswrapper[4897]: I0228 13:38:38.684728 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 13:38:38 crc kubenswrapper[4897]: I0228 13:38:38.790878 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7aff986-c99b-43a7-afc8-b9194ce17385-log-httpd\") pod \"d7aff986-c99b-43a7-afc8-b9194ce17385\" (UID: \"d7aff986-c99b-43a7-afc8-b9194ce17385\") " Feb 28 13:38:38 crc kubenswrapper[4897]: I0228 13:38:38.791072 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkf44\" (UniqueName: \"kubernetes.io/projected/d7aff986-c99b-43a7-afc8-b9194ce17385-kube-api-access-dkf44\") pod \"d7aff986-c99b-43a7-afc8-b9194ce17385\" (UID: \"d7aff986-c99b-43a7-afc8-b9194ce17385\") " Feb 28 13:38:38 crc kubenswrapper[4897]: I0228 13:38:38.791167 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7aff986-c99b-43a7-afc8-b9194ce17385-run-httpd\") pod \"d7aff986-c99b-43a7-afc8-b9194ce17385\" (UID: \"d7aff986-c99b-43a7-afc8-b9194ce17385\") " Feb 28 13:38:38 crc kubenswrapper[4897]: I0228 13:38:38.791236 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7aff986-c99b-43a7-afc8-b9194ce17385-combined-ca-bundle\") pod \"d7aff986-c99b-43a7-afc8-b9194ce17385\" (UID: \"d7aff986-c99b-43a7-afc8-b9194ce17385\") " Feb 28 13:38:38 crc kubenswrapper[4897]: I0228 13:38:38.791270 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7aff986-c99b-43a7-afc8-b9194ce17385-scripts\") pod \"d7aff986-c99b-43a7-afc8-b9194ce17385\" (UID: \"d7aff986-c99b-43a7-afc8-b9194ce17385\") " Feb 28 13:38:38 crc kubenswrapper[4897]: I0228 13:38:38.791292 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/d7aff986-c99b-43a7-afc8-b9194ce17385-sg-core-conf-yaml\") pod \"d7aff986-c99b-43a7-afc8-b9194ce17385\" (UID: \"d7aff986-c99b-43a7-afc8-b9194ce17385\") " Feb 28 13:38:38 crc kubenswrapper[4897]: I0228 13:38:38.791347 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7aff986-c99b-43a7-afc8-b9194ce17385-config-data\") pod \"d7aff986-c99b-43a7-afc8-b9194ce17385\" (UID: \"d7aff986-c99b-43a7-afc8-b9194ce17385\") " Feb 28 13:38:38 crc kubenswrapper[4897]: I0228 13:38:38.791533 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7aff986-c99b-43a7-afc8-b9194ce17385-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d7aff986-c99b-43a7-afc8-b9194ce17385" (UID: "d7aff986-c99b-43a7-afc8-b9194ce17385"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:38:38 crc kubenswrapper[4897]: I0228 13:38:38.791689 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7aff986-c99b-43a7-afc8-b9194ce17385-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d7aff986-c99b-43a7-afc8-b9194ce17385" (UID: "d7aff986-c99b-43a7-afc8-b9194ce17385"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:38:38 crc kubenswrapper[4897]: I0228 13:38:38.792438 4897 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7aff986-c99b-43a7-afc8-b9194ce17385-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:38 crc kubenswrapper[4897]: I0228 13:38:38.792695 4897 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7aff986-c99b-43a7-afc8-b9194ce17385-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:38 crc kubenswrapper[4897]: I0228 13:38:38.798205 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7aff986-c99b-43a7-afc8-b9194ce17385-scripts" (OuterVolumeSpecName: "scripts") pod "d7aff986-c99b-43a7-afc8-b9194ce17385" (UID: "d7aff986-c99b-43a7-afc8-b9194ce17385"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:38 crc kubenswrapper[4897]: I0228 13:38:38.811589 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7aff986-c99b-43a7-afc8-b9194ce17385-kube-api-access-dkf44" (OuterVolumeSpecName: "kube-api-access-dkf44") pod "d7aff986-c99b-43a7-afc8-b9194ce17385" (UID: "d7aff986-c99b-43a7-afc8-b9194ce17385"). InnerVolumeSpecName "kube-api-access-dkf44". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:38:38 crc kubenswrapper[4897]: I0228 13:38:38.826616 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7aff986-c99b-43a7-afc8-b9194ce17385-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d7aff986-c99b-43a7-afc8-b9194ce17385" (UID: "d7aff986-c99b-43a7-afc8-b9194ce17385"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:38 crc kubenswrapper[4897]: I0228 13:38:38.849516 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7aff986-c99b-43a7-afc8-b9194ce17385-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7aff986-c99b-43a7-afc8-b9194ce17385" (UID: "d7aff986-c99b-43a7-afc8-b9194ce17385"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:38 crc kubenswrapper[4897]: I0228 13:38:38.872467 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7aff986-c99b-43a7-afc8-b9194ce17385-config-data" (OuterVolumeSpecName: "config-data") pod "d7aff986-c99b-43a7-afc8-b9194ce17385" (UID: "d7aff986-c99b-43a7-afc8-b9194ce17385"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:38 crc kubenswrapper[4897]: I0228 13:38:38.894530 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkf44\" (UniqueName: \"kubernetes.io/projected/d7aff986-c99b-43a7-afc8-b9194ce17385-kube-api-access-dkf44\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:38 crc kubenswrapper[4897]: I0228 13:38:38.894565 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7aff986-c99b-43a7-afc8-b9194ce17385-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:38 crc kubenswrapper[4897]: I0228 13:38:38.894576 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7aff986-c99b-43a7-afc8-b9194ce17385-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:38 crc kubenswrapper[4897]: I0228 13:38:38.894586 4897 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7aff986-c99b-43a7-afc8-b9194ce17385-sg-core-conf-yaml\") on node \"crc\" DevicePath 
\"\"" Feb 28 13:38:38 crc kubenswrapper[4897]: I0228 13:38:38.894594 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7aff986-c99b-43a7-afc8-b9194ce17385-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:39 crc kubenswrapper[4897]: I0228 13:38:39.507703 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 28 13:38:39 crc kubenswrapper[4897]: I0228 13:38:39.507755 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 28 13:38:39 crc kubenswrapper[4897]: I0228 13:38:39.508541 4897 scope.go:117] "RemoveContainer" containerID="0dc18714380303e1cafd477176a5042930839874559895094e8ed71f336ecd95" Feb 28 13:38:39 crc kubenswrapper[4897]: E0228 13:38:39.508833 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(2b88f822-8f2a-473a-b388-b144a37ba4f0)\"" pod="openstack/watcher-decision-engine-0" podUID="2b88f822-8f2a-473a-b388-b144a37ba4f0" Feb 28 13:38:39 crc kubenswrapper[4897]: I0228 13:38:39.657959 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"4814ed83-bcac-465c-aaf6-b2acde9b0e13","Type":"ContainerStarted","Data":"f47628682bffb126e28a653d3e8a5a4058b2a066ffc641faaa527f77f50962a6"} Feb 28 13:38:39 crc kubenswrapper[4897]: I0228 13:38:39.658663 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 28 13:38:39 crc kubenswrapper[4897]: I0228 13:38:39.661375 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cc5d7cb8-nws5v" 
event={"ID":"d2375f60-8d95-4855-ace5-ecbfadb87114","Type":"ContainerStarted","Data":"4b7483787a4b792f03b97679080b30b27eec292d8c0cd6f7aa52ac255ded71c5"} Feb 28 13:38:39 crc kubenswrapper[4897]: I0228 13:38:39.661516 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 13:38:39 crc kubenswrapper[4897]: I0228 13:38:39.721389 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=4.721368073 podStartE2EDuration="4.721368073s" podCreationTimestamp="2026-02-28 13:38:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:38:39.68272022 +0000 UTC m=+1333.925040917" watchObservedRunningTime="2026-02-28 13:38:39.721368073 +0000 UTC m=+1333.963688750" Feb 28 13:38:39 crc kubenswrapper[4897]: I0228 13:38:39.722753 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6cc5d7cb8-nws5v" podStartSLOduration=2.722745508 podStartE2EDuration="2.722745508s" podCreationTimestamp="2026-02-28 13:38:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:38:39.707038074 +0000 UTC m=+1333.949358751" watchObservedRunningTime="2026-02-28 13:38:39.722745508 +0000 UTC m=+1333.965066175" Feb 28 13:38:39 crc kubenswrapper[4897]: I0228 13:38:39.772667 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 13:38:39 crc kubenswrapper[4897]: I0228 13:38:39.788430 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 28 13:38:39 crc kubenswrapper[4897]: I0228 13:38:39.799408 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 28 13:38:39 crc kubenswrapper[4897]: E0228 13:38:39.799838 4897 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="d7aff986-c99b-43a7-afc8-b9194ce17385" containerName="ceilometer-notification-agent" Feb 28 13:38:39 crc kubenswrapper[4897]: I0228 13:38:39.799854 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7aff986-c99b-43a7-afc8-b9194ce17385" containerName="ceilometer-notification-agent" Feb 28 13:38:39 crc kubenswrapper[4897]: E0228 13:38:39.799874 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7aff986-c99b-43a7-afc8-b9194ce17385" containerName="ceilometer-central-agent" Feb 28 13:38:39 crc kubenswrapper[4897]: I0228 13:38:39.799880 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7aff986-c99b-43a7-afc8-b9194ce17385" containerName="ceilometer-central-agent" Feb 28 13:38:39 crc kubenswrapper[4897]: E0228 13:38:39.799902 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7aff986-c99b-43a7-afc8-b9194ce17385" containerName="sg-core" Feb 28 13:38:39 crc kubenswrapper[4897]: I0228 13:38:39.799910 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7aff986-c99b-43a7-afc8-b9194ce17385" containerName="sg-core" Feb 28 13:38:39 crc kubenswrapper[4897]: I0228 13:38:39.800083 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7aff986-c99b-43a7-afc8-b9194ce17385" containerName="sg-core" Feb 28 13:38:39 crc kubenswrapper[4897]: I0228 13:38:39.800102 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7aff986-c99b-43a7-afc8-b9194ce17385" containerName="ceilometer-notification-agent" Feb 28 13:38:39 crc kubenswrapper[4897]: I0228 13:38:39.800121 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7aff986-c99b-43a7-afc8-b9194ce17385" containerName="ceilometer-central-agent" Feb 28 13:38:39 crc kubenswrapper[4897]: I0228 13:38:39.801789 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 13:38:39 crc kubenswrapper[4897]: I0228 13:38:39.804250 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 28 13:38:39 crc kubenswrapper[4897]: I0228 13:38:39.804442 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 28 13:38:39 crc kubenswrapper[4897]: I0228 13:38:39.811508 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 13:38:39 crc kubenswrapper[4897]: I0228 13:38:39.923961 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f05b9cde-39ac-43bf-aff2-85f5b1d2acae-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f05b9cde-39ac-43bf-aff2-85f5b1d2acae\") " pod="openstack/ceilometer-0" Feb 28 13:38:39 crc kubenswrapper[4897]: I0228 13:38:39.924128 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f05b9cde-39ac-43bf-aff2-85f5b1d2acae-log-httpd\") pod \"ceilometer-0\" (UID: \"f05b9cde-39ac-43bf-aff2-85f5b1d2acae\") " pod="openstack/ceilometer-0" Feb 28 13:38:39 crc kubenswrapper[4897]: I0228 13:38:39.924425 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5sms\" (UniqueName: \"kubernetes.io/projected/f05b9cde-39ac-43bf-aff2-85f5b1d2acae-kube-api-access-r5sms\") pod \"ceilometer-0\" (UID: \"f05b9cde-39ac-43bf-aff2-85f5b1d2acae\") " pod="openstack/ceilometer-0" Feb 28 13:38:39 crc kubenswrapper[4897]: I0228 13:38:39.924608 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f05b9cde-39ac-43bf-aff2-85f5b1d2acae-run-httpd\") pod \"ceilometer-0\" (UID: 
\"f05b9cde-39ac-43bf-aff2-85f5b1d2acae\") " pod="openstack/ceilometer-0" Feb 28 13:38:39 crc kubenswrapper[4897]: I0228 13:38:39.924654 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f05b9cde-39ac-43bf-aff2-85f5b1d2acae-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f05b9cde-39ac-43bf-aff2-85f5b1d2acae\") " pod="openstack/ceilometer-0" Feb 28 13:38:39 crc kubenswrapper[4897]: I0228 13:38:39.924694 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f05b9cde-39ac-43bf-aff2-85f5b1d2acae-scripts\") pod \"ceilometer-0\" (UID: \"f05b9cde-39ac-43bf-aff2-85f5b1d2acae\") " pod="openstack/ceilometer-0" Feb 28 13:38:39 crc kubenswrapper[4897]: I0228 13:38:39.924774 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f05b9cde-39ac-43bf-aff2-85f5b1d2acae-config-data\") pod \"ceilometer-0\" (UID: \"f05b9cde-39ac-43bf-aff2-85f5b1d2acae\") " pod="openstack/ceilometer-0" Feb 28 13:38:40 crc kubenswrapper[4897]: I0228 13:38:40.025927 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f05b9cde-39ac-43bf-aff2-85f5b1d2acae-config-data\") pod \"ceilometer-0\" (UID: \"f05b9cde-39ac-43bf-aff2-85f5b1d2acae\") " pod="openstack/ceilometer-0" Feb 28 13:38:40 crc kubenswrapper[4897]: I0228 13:38:40.026036 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f05b9cde-39ac-43bf-aff2-85f5b1d2acae-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f05b9cde-39ac-43bf-aff2-85f5b1d2acae\") " pod="openstack/ceilometer-0" Feb 28 13:38:40 crc kubenswrapper[4897]: I0228 13:38:40.026115 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f05b9cde-39ac-43bf-aff2-85f5b1d2acae-log-httpd\") pod \"ceilometer-0\" (UID: \"f05b9cde-39ac-43bf-aff2-85f5b1d2acae\") " pod="openstack/ceilometer-0" Feb 28 13:38:40 crc kubenswrapper[4897]: I0228 13:38:40.026206 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5sms\" (UniqueName: \"kubernetes.io/projected/f05b9cde-39ac-43bf-aff2-85f5b1d2acae-kube-api-access-r5sms\") pod \"ceilometer-0\" (UID: \"f05b9cde-39ac-43bf-aff2-85f5b1d2acae\") " pod="openstack/ceilometer-0" Feb 28 13:38:40 crc kubenswrapper[4897]: I0228 13:38:40.026358 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f05b9cde-39ac-43bf-aff2-85f5b1d2acae-run-httpd\") pod \"ceilometer-0\" (UID: \"f05b9cde-39ac-43bf-aff2-85f5b1d2acae\") " pod="openstack/ceilometer-0" Feb 28 13:38:40 crc kubenswrapper[4897]: I0228 13:38:40.026414 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f05b9cde-39ac-43bf-aff2-85f5b1d2acae-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f05b9cde-39ac-43bf-aff2-85f5b1d2acae\") " pod="openstack/ceilometer-0" Feb 28 13:38:40 crc kubenswrapper[4897]: I0228 13:38:40.026453 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f05b9cde-39ac-43bf-aff2-85f5b1d2acae-scripts\") pod \"ceilometer-0\" (UID: \"f05b9cde-39ac-43bf-aff2-85f5b1d2acae\") " pod="openstack/ceilometer-0" Feb 28 13:38:40 crc kubenswrapper[4897]: I0228 13:38:40.026886 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f05b9cde-39ac-43bf-aff2-85f5b1d2acae-run-httpd\") pod \"ceilometer-0\" (UID: \"f05b9cde-39ac-43bf-aff2-85f5b1d2acae\") " 
pod="openstack/ceilometer-0" Feb 28 13:38:40 crc kubenswrapper[4897]: I0228 13:38:40.027026 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f05b9cde-39ac-43bf-aff2-85f5b1d2acae-log-httpd\") pod \"ceilometer-0\" (UID: \"f05b9cde-39ac-43bf-aff2-85f5b1d2acae\") " pod="openstack/ceilometer-0" Feb 28 13:38:40 crc kubenswrapper[4897]: I0228 13:38:40.032022 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f05b9cde-39ac-43bf-aff2-85f5b1d2acae-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f05b9cde-39ac-43bf-aff2-85f5b1d2acae\") " pod="openstack/ceilometer-0" Feb 28 13:38:40 crc kubenswrapper[4897]: I0228 13:38:40.032668 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f05b9cde-39ac-43bf-aff2-85f5b1d2acae-config-data\") pod \"ceilometer-0\" (UID: \"f05b9cde-39ac-43bf-aff2-85f5b1d2acae\") " pod="openstack/ceilometer-0" Feb 28 13:38:40 crc kubenswrapper[4897]: I0228 13:38:40.033044 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f05b9cde-39ac-43bf-aff2-85f5b1d2acae-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f05b9cde-39ac-43bf-aff2-85f5b1d2acae\") " pod="openstack/ceilometer-0" Feb 28 13:38:40 crc kubenswrapper[4897]: I0228 13:38:40.044009 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f05b9cde-39ac-43bf-aff2-85f5b1d2acae-scripts\") pod \"ceilometer-0\" (UID: \"f05b9cde-39ac-43bf-aff2-85f5b1d2acae\") " pod="openstack/ceilometer-0" Feb 28 13:38:40 crc kubenswrapper[4897]: I0228 13:38:40.064616 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5sms\" (UniqueName: 
\"kubernetes.io/projected/f05b9cde-39ac-43bf-aff2-85f5b1d2acae-kube-api-access-r5sms\") pod \"ceilometer-0\" (UID: \"f05b9cde-39ac-43bf-aff2-85f5b1d2acae\") " pod="openstack/ceilometer-0" Feb 28 13:38:40 crc kubenswrapper[4897]: I0228 13:38:40.121431 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 13:38:40 crc kubenswrapper[4897]: I0228 13:38:40.473852 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7aff986-c99b-43a7-afc8-b9194ce17385" path="/var/lib/kubelet/pods/d7aff986-c99b-43a7-afc8-b9194ce17385/volumes" Feb 28 13:38:40 crc kubenswrapper[4897]: I0228 13:38:40.646267 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 13:38:40 crc kubenswrapper[4897]: I0228 13:38:40.672956 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f05b9cde-39ac-43bf-aff2-85f5b1d2acae","Type":"ContainerStarted","Data":"ddd82a8900d5363810db91bd5200432e98766760ab0f83e0f40b21b7798ac7d3"} Feb 28 13:38:40 crc kubenswrapper[4897]: I0228 13:38:40.674143 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6cc5d7cb8-nws5v" Feb 28 13:38:40 crc kubenswrapper[4897]: I0228 13:38:40.674178 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6cc5d7cb8-nws5v" Feb 28 13:38:41 crc kubenswrapper[4897]: I0228 13:38:41.127216 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 28 13:38:41 crc kubenswrapper[4897]: I0228 13:38:41.165710 4897 scope.go:117] "RemoveContainer" containerID="1483000b50b2264fb8ba8e9ee018cecbbb96c56fd84b4f0bda67a21b199841f1" Feb 28 13:38:41 crc kubenswrapper[4897]: I0228 13:38:41.227898 4897 scope.go:117] "RemoveContainer" containerID="19ca9a4e3f0d6e021374b6ad375834aae2c27eed449266345b1ef375f452fbf6" Feb 28 13:38:41 crc kubenswrapper[4897]: I0228 13:38:41.685809 4897 
generic.go:334] "Generic (PLEG): container finished" podID="bd9edcf1-516a-46a6-a77b-5061505a58d7" containerID="9bc5205a83a60702942ea03fd3eb5c1cbeb80fe2a535067733da00a4a5792087" exitCode=0 Feb 28 13:38:41 crc kubenswrapper[4897]: I0228 13:38:41.685914 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-h59fj" event={"ID":"bd9edcf1-516a-46a6-a77b-5061505a58d7","Type":"ContainerDied","Data":"9bc5205a83a60702942ea03fd3eb5c1cbeb80fe2a535067733da00a4a5792087"} Feb 28 13:38:41 crc kubenswrapper[4897]: I0228 13:38:41.689709 4897 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 28 13:38:41 crc kubenswrapper[4897]: I0228 13:38:41.690551 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f05b9cde-39ac-43bf-aff2-85f5b1d2acae","Type":"ContainerStarted","Data":"250c6a28fc56f09d45c62ecf9b6c012dd971f01783deaa5f8717a511305060e0"} Feb 28 13:38:41 crc kubenswrapper[4897]: I0228 13:38:41.690578 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f05b9cde-39ac-43bf-aff2-85f5b1d2acae","Type":"ContainerStarted","Data":"ca52dbc4d3af48283acf71aae46b20c8f9521e1b6d9a67d2467df347da905fe5"} Feb 28 13:38:42 crc kubenswrapper[4897]: I0228 13:38:42.051803 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 28 13:38:42 crc kubenswrapper[4897]: I0228 13:38:42.706499 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f05b9cde-39ac-43bf-aff2-85f5b1d2acae","Type":"ContainerStarted","Data":"144ce0b87ac18e9815347f1a56e6a1a6695674a8f4f1e1c1de43c5bab1636154"} Feb 28 13:38:43 crc kubenswrapper[4897]: I0228 13:38:43.100240 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-h59fj" Feb 28 13:38:43 crc kubenswrapper[4897]: I0228 13:38:43.200778 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd9edcf1-516a-46a6-a77b-5061505a58d7-scripts\") pod \"bd9edcf1-516a-46a6-a77b-5061505a58d7\" (UID: \"bd9edcf1-516a-46a6-a77b-5061505a58d7\") " Feb 28 13:38:43 crc kubenswrapper[4897]: I0228 13:38:43.200879 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5752j\" (UniqueName: \"kubernetes.io/projected/bd9edcf1-516a-46a6-a77b-5061505a58d7-kube-api-access-5752j\") pod \"bd9edcf1-516a-46a6-a77b-5061505a58d7\" (UID: \"bd9edcf1-516a-46a6-a77b-5061505a58d7\") " Feb 28 13:38:43 crc kubenswrapper[4897]: I0228 13:38:43.200922 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bd9edcf1-516a-46a6-a77b-5061505a58d7-etc-machine-id\") pod \"bd9edcf1-516a-46a6-a77b-5061505a58d7\" (UID: \"bd9edcf1-516a-46a6-a77b-5061505a58d7\") " Feb 28 13:38:43 crc kubenswrapper[4897]: I0228 13:38:43.200942 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd9edcf1-516a-46a6-a77b-5061505a58d7-combined-ca-bundle\") pod \"bd9edcf1-516a-46a6-a77b-5061505a58d7\" (UID: \"bd9edcf1-516a-46a6-a77b-5061505a58d7\") " Feb 28 13:38:43 crc kubenswrapper[4897]: I0228 13:38:43.201021 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd9edcf1-516a-46a6-a77b-5061505a58d7-config-data\") pod \"bd9edcf1-516a-46a6-a77b-5061505a58d7\" (UID: \"bd9edcf1-516a-46a6-a77b-5061505a58d7\") " Feb 28 13:38:43 crc kubenswrapper[4897]: I0228 13:38:43.201241 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" 
(UniqueName: \"kubernetes.io/secret/bd9edcf1-516a-46a6-a77b-5061505a58d7-db-sync-config-data\") pod \"bd9edcf1-516a-46a6-a77b-5061505a58d7\" (UID: \"bd9edcf1-516a-46a6-a77b-5061505a58d7\") " Feb 28 13:38:43 crc kubenswrapper[4897]: I0228 13:38:43.201242 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd9edcf1-516a-46a6-a77b-5061505a58d7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "bd9edcf1-516a-46a6-a77b-5061505a58d7" (UID: "bd9edcf1-516a-46a6-a77b-5061505a58d7"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 13:38:43 crc kubenswrapper[4897]: I0228 13:38:43.207604 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd9edcf1-516a-46a6-a77b-5061505a58d7-kube-api-access-5752j" (OuterVolumeSpecName: "kube-api-access-5752j") pod "bd9edcf1-516a-46a6-a77b-5061505a58d7" (UID: "bd9edcf1-516a-46a6-a77b-5061505a58d7"). InnerVolumeSpecName "kube-api-access-5752j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:38:43 crc kubenswrapper[4897]: I0228 13:38:43.218655 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd9edcf1-516a-46a6-a77b-5061505a58d7-scripts" (OuterVolumeSpecName: "scripts") pod "bd9edcf1-516a-46a6-a77b-5061505a58d7" (UID: "bd9edcf1-516a-46a6-a77b-5061505a58d7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:43 crc kubenswrapper[4897]: I0228 13:38:43.218669 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd9edcf1-516a-46a6-a77b-5061505a58d7-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "bd9edcf1-516a-46a6-a77b-5061505a58d7" (UID: "bd9edcf1-516a-46a6-a77b-5061505a58d7"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:43 crc kubenswrapper[4897]: I0228 13:38:43.260507 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd9edcf1-516a-46a6-a77b-5061505a58d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd9edcf1-516a-46a6-a77b-5061505a58d7" (UID: "bd9edcf1-516a-46a6-a77b-5061505a58d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:43 crc kubenswrapper[4897]: I0228 13:38:43.282141 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd9edcf1-516a-46a6-a77b-5061505a58d7-config-data" (OuterVolumeSpecName: "config-data") pod "bd9edcf1-516a-46a6-a77b-5061505a58d7" (UID: "bd9edcf1-516a-46a6-a77b-5061505a58d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:43 crc kubenswrapper[4897]: I0228 13:38:43.304198 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5752j\" (UniqueName: \"kubernetes.io/projected/bd9edcf1-516a-46a6-a77b-5061505a58d7-kube-api-access-5752j\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:43 crc kubenswrapper[4897]: I0228 13:38:43.304255 4897 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bd9edcf1-516a-46a6-a77b-5061505a58d7-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:43 crc kubenswrapper[4897]: I0228 13:38:43.304269 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd9edcf1-516a-46a6-a77b-5061505a58d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:43 crc kubenswrapper[4897]: I0228 13:38:43.304281 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd9edcf1-516a-46a6-a77b-5061505a58d7-config-data\") on node \"crc\" 
DevicePath \"\"" Feb 28 13:38:43 crc kubenswrapper[4897]: I0228 13:38:43.304295 4897 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bd9edcf1-516a-46a6-a77b-5061505a58d7-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:43 crc kubenswrapper[4897]: I0228 13:38:43.304331 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd9edcf1-516a-46a6-a77b-5061505a58d7-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:43 crc kubenswrapper[4897]: E0228 13:38:43.551262 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)" image="registry.redhat.io/ubi9/httpd-24:latest" Feb 28 13:38:43 crc kubenswrapper[4897]: E0228 13:38:43.551475 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r5sms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f05b9cde-39ac-43bf-aff2-85f5b1d2acae): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 13:38:43 crc kubenswrapper[4897]: E0228 13:38:43.552944 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)\"" pod="openstack/ceilometer-0" podUID="f05b9cde-39ac-43bf-aff2-85f5b1d2acae" Feb 28 13:38:43 crc kubenswrapper[4897]: I0228 13:38:43.723488 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-h59fj" Feb 28 13:38:43 crc kubenswrapper[4897]: I0228 13:38:43.723494 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-h59fj" event={"ID":"bd9edcf1-516a-46a6-a77b-5061505a58d7","Type":"ContainerDied","Data":"c5185e76c1fc66c3cf72be3b666a7476464ba1d7d181dc393c829e31549fcc67"} Feb 28 13:38:43 crc kubenswrapper[4897]: I0228 13:38:43.724672 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5185e76c1fc66c3cf72be3b666a7476464ba1d7d181dc393c829e31549fcc67" Feb 28 13:38:43 crc kubenswrapper[4897]: E0228 13:38:43.729302 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="f05b9cde-39ac-43bf-aff2-85f5b1d2acae" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.011931 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 28 13:38:44 crc kubenswrapper[4897]: E0228 13:38:44.012338 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd9edcf1-516a-46a6-a77b-5061505a58d7" containerName="cinder-db-sync" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.012354 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd9edcf1-516a-46a6-a77b-5061505a58d7" containerName="cinder-db-sync" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.015687 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd9edcf1-516a-46a6-a77b-5061505a58d7" containerName="cinder-db-sync" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.017624 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.025128 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-zq2v9" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.035655 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.035831 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.036691 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.063676 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.081433 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59bbf6bdfc-r6m8d"] Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.082038 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59bbf6bdfc-r6m8d" podUID="23ceaea1-59f2-4be2-80bf-176368f401d7" containerName="dnsmasq-dns" containerID="cri-o://6ded0dcbb72dad3a71c10215f27757095134e6f52e9725f8c9030536abe854ec" gracePeriod=10 Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.086574 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59bbf6bdfc-r6m8d" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.184856 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79cdbcc745-rbcfg"] Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.191906 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/d7bd80a3-8929-44e4-b11e-3daebb9c7f54-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d7bd80a3-8929-44e4-b11e-3daebb9c7f54\") " pod="openstack/cinder-scheduler-0" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.191980 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7bd80a3-8929-44e4-b11e-3daebb9c7f54-config-data\") pod \"cinder-scheduler-0\" (UID: \"d7bd80a3-8929-44e4-b11e-3daebb9c7f54\") " pod="openstack/cinder-scheduler-0" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.192097 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d7bd80a3-8929-44e4-b11e-3daebb9c7f54-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d7bd80a3-8929-44e4-b11e-3daebb9c7f54\") " pod="openstack/cinder-scheduler-0" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.192127 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7bd80a3-8929-44e4-b11e-3daebb9c7f54-scripts\") pod \"cinder-scheduler-0\" (UID: \"d7bd80a3-8929-44e4-b11e-3daebb9c7f54\") " pod="openstack/cinder-scheduler-0" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.192163 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r886r\" (UniqueName: \"kubernetes.io/projected/d7bd80a3-8929-44e4-b11e-3daebb9c7f54-kube-api-access-r886r\") pod \"cinder-scheduler-0\" (UID: \"d7bd80a3-8929-44e4-b11e-3daebb9c7f54\") " pod="openstack/cinder-scheduler-0" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.192183 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d7bd80a3-8929-44e4-b11e-3daebb9c7f54-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d7bd80a3-8929-44e4-b11e-3daebb9c7f54\") " pod="openstack/cinder-scheduler-0" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.194415 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79cdbcc745-rbcfg" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.241357 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79cdbcc745-rbcfg"] Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.305471 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5f83c96-ea10-4ba7-a711-5d87f1bf412e-dns-svc\") pod \"dnsmasq-dns-79cdbcc745-rbcfg\" (UID: \"a5f83c96-ea10-4ba7-a711-5d87f1bf412e\") " pod="openstack/dnsmasq-dns-79cdbcc745-rbcfg" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.305685 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d7bd80a3-8929-44e4-b11e-3daebb9c7f54-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d7bd80a3-8929-44e4-b11e-3daebb9c7f54\") " pod="openstack/cinder-scheduler-0" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.305708 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqg2b\" (UniqueName: \"kubernetes.io/projected/a5f83c96-ea10-4ba7-a711-5d87f1bf412e-kube-api-access-kqg2b\") pod \"dnsmasq-dns-79cdbcc745-rbcfg\" (UID: \"a5f83c96-ea10-4ba7-a711-5d87f1bf412e\") " pod="openstack/dnsmasq-dns-79cdbcc745-rbcfg" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.305730 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7bd80a3-8929-44e4-b11e-3daebb9c7f54-config-data\") pod 
\"cinder-scheduler-0\" (UID: \"d7bd80a3-8929-44e4-b11e-3daebb9c7f54\") " pod="openstack/cinder-scheduler-0" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.305823 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a5f83c96-ea10-4ba7-a711-5d87f1bf412e-ovsdbserver-sb\") pod \"dnsmasq-dns-79cdbcc745-rbcfg\" (UID: \"a5f83c96-ea10-4ba7-a711-5d87f1bf412e\") " pod="openstack/dnsmasq-dns-79cdbcc745-rbcfg" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.305882 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d7bd80a3-8929-44e4-b11e-3daebb9c7f54-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d7bd80a3-8929-44e4-b11e-3daebb9c7f54\") " pod="openstack/cinder-scheduler-0" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.305908 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7bd80a3-8929-44e4-b11e-3daebb9c7f54-scripts\") pod \"cinder-scheduler-0\" (UID: \"d7bd80a3-8929-44e4-b11e-3daebb9c7f54\") " pod="openstack/cinder-scheduler-0" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.306043 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r886r\" (UniqueName: \"kubernetes.io/projected/d7bd80a3-8929-44e4-b11e-3daebb9c7f54-kube-api-access-r886r\") pod \"cinder-scheduler-0\" (UID: \"d7bd80a3-8929-44e4-b11e-3daebb9c7f54\") " pod="openstack/cinder-scheduler-0" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.306068 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7bd80a3-8929-44e4-b11e-3daebb9c7f54-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d7bd80a3-8929-44e4-b11e-3daebb9c7f54\") " pod="openstack/cinder-scheduler-0" 
Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.306043 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d7bd80a3-8929-44e4-b11e-3daebb9c7f54-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d7bd80a3-8929-44e4-b11e-3daebb9c7f54\") " pod="openstack/cinder-scheduler-0" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.306335 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5f83c96-ea10-4ba7-a711-5d87f1bf412e-config\") pod \"dnsmasq-dns-79cdbcc745-rbcfg\" (UID: \"a5f83c96-ea10-4ba7-a711-5d87f1bf412e\") " pod="openstack/dnsmasq-dns-79cdbcc745-rbcfg" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.306491 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a5f83c96-ea10-4ba7-a711-5d87f1bf412e-dns-swift-storage-0\") pod \"dnsmasq-dns-79cdbcc745-rbcfg\" (UID: \"a5f83c96-ea10-4ba7-a711-5d87f1bf412e\") " pod="openstack/dnsmasq-dns-79cdbcc745-rbcfg" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.306602 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a5f83c96-ea10-4ba7-a711-5d87f1bf412e-ovsdbserver-nb\") pod \"dnsmasq-dns-79cdbcc745-rbcfg\" (UID: \"a5f83c96-ea10-4ba7-a711-5d87f1bf412e\") " pod="openstack/dnsmasq-dns-79cdbcc745-rbcfg" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.312274 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.313675 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7bd80a3-8929-44e4-b11e-3daebb9c7f54-config-data\") pod \"cinder-scheduler-0\" (UID: 
\"d7bd80a3-8929-44e4-b11e-3daebb9c7f54\") " pod="openstack/cinder-scheduler-0" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.314438 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.317993 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7bd80a3-8929-44e4-b11e-3daebb9c7f54-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d7bd80a3-8929-44e4-b11e-3daebb9c7f54\") " pod="openstack/cinder-scheduler-0" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.318227 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7bd80a3-8929-44e4-b11e-3daebb9c7f54-scripts\") pod \"cinder-scheduler-0\" (UID: \"d7bd80a3-8929-44e4-b11e-3daebb9c7f54\") " pod="openstack/cinder-scheduler-0" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.321062 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d7bd80a3-8929-44e4-b11e-3daebb9c7f54-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d7bd80a3-8929-44e4-b11e-3daebb9c7f54\") " pod="openstack/cinder-scheduler-0" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.331220 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.333813 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r886r\" (UniqueName: \"kubernetes.io/projected/d7bd80a3-8929-44e4-b11e-3daebb9c7f54-kube-api-access-r886r\") pod \"cinder-scheduler-0\" (UID: \"d7bd80a3-8929-44e4-b11e-3daebb9c7f54\") " pod="openstack/cinder-scheduler-0" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.361371 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/cinder-api-0"] Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.397649 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-59bbf6bdfc-r6m8d" podUID="23ceaea1-59f2-4be2-80bf-176368f401d7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.179:5353: connect: connection refused" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.401631 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.407774 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5f83c96-ea10-4ba7-a711-5d87f1bf412e-config\") pod \"dnsmasq-dns-79cdbcc745-rbcfg\" (UID: \"a5f83c96-ea10-4ba7-a711-5d87f1bf412e\") " pod="openstack/dnsmasq-dns-79cdbcc745-rbcfg" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.407813 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a004b575-3521-45fd-84d2-9c2c46cac69a-logs\") pod \"cinder-api-0\" (UID: \"a004b575-3521-45fd-84d2-9c2c46cac69a\") " pod="openstack/cinder-api-0" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.407845 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a5f83c96-ea10-4ba7-a711-5d87f1bf412e-dns-swift-storage-0\") pod \"dnsmasq-dns-79cdbcc745-rbcfg\" (UID: \"a5f83c96-ea10-4ba7-a711-5d87f1bf412e\") " pod="openstack/dnsmasq-dns-79cdbcc745-rbcfg" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.407867 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a5f83c96-ea10-4ba7-a711-5d87f1bf412e-ovsdbserver-nb\") pod \"dnsmasq-dns-79cdbcc745-rbcfg\" (UID: 
\"a5f83c96-ea10-4ba7-a711-5d87f1bf412e\") " pod="openstack/dnsmasq-dns-79cdbcc745-rbcfg" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.407893 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5f83c96-ea10-4ba7-a711-5d87f1bf412e-dns-svc\") pod \"dnsmasq-dns-79cdbcc745-rbcfg\" (UID: \"a5f83c96-ea10-4ba7-a711-5d87f1bf412e\") " pod="openstack/dnsmasq-dns-79cdbcc745-rbcfg" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.407914 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a004b575-3521-45fd-84d2-9c2c46cac69a-config-data-custom\") pod \"cinder-api-0\" (UID: \"a004b575-3521-45fd-84d2-9c2c46cac69a\") " pod="openstack/cinder-api-0" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.407929 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a004b575-3521-45fd-84d2-9c2c46cac69a-scripts\") pod \"cinder-api-0\" (UID: \"a004b575-3521-45fd-84d2-9c2c46cac69a\") " pod="openstack/cinder-api-0" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.407955 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a004b575-3521-45fd-84d2-9c2c46cac69a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a004b575-3521-45fd-84d2-9c2c46cac69a\") " pod="openstack/cinder-api-0" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.407985 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a004b575-3521-45fd-84d2-9c2c46cac69a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a004b575-3521-45fd-84d2-9c2c46cac69a\") " pod="openstack/cinder-api-0" Feb 28 13:38:44 crc 
kubenswrapper[4897]: I0228 13:38:44.408009 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqg2b\" (UniqueName: \"kubernetes.io/projected/a5f83c96-ea10-4ba7-a711-5d87f1bf412e-kube-api-access-kqg2b\") pod \"dnsmasq-dns-79cdbcc745-rbcfg\" (UID: \"a5f83c96-ea10-4ba7-a711-5d87f1bf412e\") " pod="openstack/dnsmasq-dns-79cdbcc745-rbcfg" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.408045 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a004b575-3521-45fd-84d2-9c2c46cac69a-config-data\") pod \"cinder-api-0\" (UID: \"a004b575-3521-45fd-84d2-9c2c46cac69a\") " pod="openstack/cinder-api-0" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.408062 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a5f83c96-ea10-4ba7-a711-5d87f1bf412e-ovsdbserver-sb\") pod \"dnsmasq-dns-79cdbcc745-rbcfg\" (UID: \"a5f83c96-ea10-4ba7-a711-5d87f1bf412e\") " pod="openstack/dnsmasq-dns-79cdbcc745-rbcfg" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.408121 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f7k6\" (UniqueName: \"kubernetes.io/projected/a004b575-3521-45fd-84d2-9c2c46cac69a-kube-api-access-4f7k6\") pod \"cinder-api-0\" (UID: \"a004b575-3521-45fd-84d2-9c2c46cac69a\") " pod="openstack/cinder-api-0" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.409041 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5f83c96-ea10-4ba7-a711-5d87f1bf412e-config\") pod \"dnsmasq-dns-79cdbcc745-rbcfg\" (UID: \"a5f83c96-ea10-4ba7-a711-5d87f1bf412e\") " pod="openstack/dnsmasq-dns-79cdbcc745-rbcfg" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.409129 4897 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a5f83c96-ea10-4ba7-a711-5d87f1bf412e-dns-swift-storage-0\") pod \"dnsmasq-dns-79cdbcc745-rbcfg\" (UID: \"a5f83c96-ea10-4ba7-a711-5d87f1bf412e\") " pod="openstack/dnsmasq-dns-79cdbcc745-rbcfg" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.410061 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a5f83c96-ea10-4ba7-a711-5d87f1bf412e-ovsdbserver-sb\") pod \"dnsmasq-dns-79cdbcc745-rbcfg\" (UID: \"a5f83c96-ea10-4ba7-a711-5d87f1bf412e\") " pod="openstack/dnsmasq-dns-79cdbcc745-rbcfg" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.410109 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a5f83c96-ea10-4ba7-a711-5d87f1bf412e-ovsdbserver-nb\") pod \"dnsmasq-dns-79cdbcc745-rbcfg\" (UID: \"a5f83c96-ea10-4ba7-a711-5d87f1bf412e\") " pod="openstack/dnsmasq-dns-79cdbcc745-rbcfg" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.411113 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5f83c96-ea10-4ba7-a711-5d87f1bf412e-dns-svc\") pod \"dnsmasq-dns-79cdbcc745-rbcfg\" (UID: \"a5f83c96-ea10-4ba7-a711-5d87f1bf412e\") " pod="openstack/dnsmasq-dns-79cdbcc745-rbcfg" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.479511 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqg2b\" (UniqueName: \"kubernetes.io/projected/a5f83c96-ea10-4ba7-a711-5d87f1bf412e-kube-api-access-kqg2b\") pod \"dnsmasq-dns-79cdbcc745-rbcfg\" (UID: \"a5f83c96-ea10-4ba7-a711-5d87f1bf412e\") " pod="openstack/dnsmasq-dns-79cdbcc745-rbcfg" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.509609 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/a004b575-3521-45fd-84d2-9c2c46cac69a-config-data-custom\") pod \"cinder-api-0\" (UID: \"a004b575-3521-45fd-84d2-9c2c46cac69a\") " pod="openstack/cinder-api-0" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.509643 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a004b575-3521-45fd-84d2-9c2c46cac69a-scripts\") pod \"cinder-api-0\" (UID: \"a004b575-3521-45fd-84d2-9c2c46cac69a\") " pod="openstack/cinder-api-0" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.509668 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a004b575-3521-45fd-84d2-9c2c46cac69a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a004b575-3521-45fd-84d2-9c2c46cac69a\") " pod="openstack/cinder-api-0" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.509714 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a004b575-3521-45fd-84d2-9c2c46cac69a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a004b575-3521-45fd-84d2-9c2c46cac69a\") " pod="openstack/cinder-api-0" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.509769 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a004b575-3521-45fd-84d2-9c2c46cac69a-config-data\") pod \"cinder-api-0\" (UID: \"a004b575-3521-45fd-84d2-9c2c46cac69a\") " pod="openstack/cinder-api-0" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.509840 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f7k6\" (UniqueName: \"kubernetes.io/projected/a004b575-3521-45fd-84d2-9c2c46cac69a-kube-api-access-4f7k6\") pod \"cinder-api-0\" (UID: \"a004b575-3521-45fd-84d2-9c2c46cac69a\") " pod="openstack/cinder-api-0" Feb 28 13:38:44 crc 
kubenswrapper[4897]: I0228 13:38:44.509890 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a004b575-3521-45fd-84d2-9c2c46cac69a-logs\") pod \"cinder-api-0\" (UID: \"a004b575-3521-45fd-84d2-9c2c46cac69a\") " pod="openstack/cinder-api-0" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.513519 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a004b575-3521-45fd-84d2-9c2c46cac69a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a004b575-3521-45fd-84d2-9c2c46cac69a\") " pod="openstack/cinder-api-0" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.517236 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a004b575-3521-45fd-84d2-9c2c46cac69a-logs\") pod \"cinder-api-0\" (UID: \"a004b575-3521-45fd-84d2-9c2c46cac69a\") " pod="openstack/cinder-api-0" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.528274 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a004b575-3521-45fd-84d2-9c2c46cac69a-config-data\") pod \"cinder-api-0\" (UID: \"a004b575-3521-45fd-84d2-9c2c46cac69a\") " pod="openstack/cinder-api-0" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.531396 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a004b575-3521-45fd-84d2-9c2c46cac69a-config-data-custom\") pod \"cinder-api-0\" (UID: \"a004b575-3521-45fd-84d2-9c2c46cac69a\") " pod="openstack/cinder-api-0" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.531807 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a004b575-3521-45fd-84d2-9c2c46cac69a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: 
\"a004b575-3521-45fd-84d2-9c2c46cac69a\") " pod="openstack/cinder-api-0" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.536480 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a004b575-3521-45fd-84d2-9c2c46cac69a-scripts\") pod \"cinder-api-0\" (UID: \"a004b575-3521-45fd-84d2-9c2c46cac69a\") " pod="openstack/cinder-api-0" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.539443 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f7k6\" (UniqueName: \"kubernetes.io/projected/a004b575-3521-45fd-84d2-9c2c46cac69a-kube-api-access-4f7k6\") pod \"cinder-api-0\" (UID: \"a004b575-3521-45fd-84d2-9c2c46cac69a\") " pod="openstack/cinder-api-0" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.707160 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79cdbcc745-rbcfg" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.724952 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.757839 4897 generic.go:334] "Generic (PLEG): container finished" podID="23ceaea1-59f2-4be2-80bf-176368f401d7" containerID="6ded0dcbb72dad3a71c10215f27757095134e6f52e9725f8c9030536abe854ec" exitCode=0 Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.757879 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59bbf6bdfc-r6m8d" event={"ID":"23ceaea1-59f2-4be2-80bf-176368f401d7","Type":"ContainerDied","Data":"6ded0dcbb72dad3a71c10215f27757095134e6f52e9725f8c9030536abe854ec"} Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.812461 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59bbf6bdfc-r6m8d" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.917989 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23ceaea1-59f2-4be2-80bf-176368f401d7-ovsdbserver-nb\") pod \"23ceaea1-59f2-4be2-80bf-176368f401d7\" (UID: \"23ceaea1-59f2-4be2-80bf-176368f401d7\") " Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.918335 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23ceaea1-59f2-4be2-80bf-176368f401d7-config\") pod \"23ceaea1-59f2-4be2-80bf-176368f401d7\" (UID: \"23ceaea1-59f2-4be2-80bf-176368f401d7\") " Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.918354 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23ceaea1-59f2-4be2-80bf-176368f401d7-ovsdbserver-sb\") pod \"23ceaea1-59f2-4be2-80bf-176368f401d7\" (UID: \"23ceaea1-59f2-4be2-80bf-176368f401d7\") " Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.918389 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23ceaea1-59f2-4be2-80bf-176368f401d7-dns-svc\") pod \"23ceaea1-59f2-4be2-80bf-176368f401d7\" (UID: \"23ceaea1-59f2-4be2-80bf-176368f401d7\") " Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.918415 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23ceaea1-59f2-4be2-80bf-176368f401d7-dns-swift-storage-0\") pod \"23ceaea1-59f2-4be2-80bf-176368f401d7\" (UID: \"23ceaea1-59f2-4be2-80bf-176368f401d7\") " Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.918432 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zp6bq\" 
(UniqueName: \"kubernetes.io/projected/23ceaea1-59f2-4be2-80bf-176368f401d7-kube-api-access-zp6bq\") pod \"23ceaea1-59f2-4be2-80bf-176368f401d7\" (UID: \"23ceaea1-59f2-4be2-80bf-176368f401d7\") " Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.971224 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6fb67c45d-s75qr" Feb 28 13:38:44 crc kubenswrapper[4897]: I0228 13:38:44.972029 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23ceaea1-59f2-4be2-80bf-176368f401d7-kube-api-access-zp6bq" (OuterVolumeSpecName: "kube-api-access-zp6bq") pod "23ceaea1-59f2-4be2-80bf-176368f401d7" (UID: "23ceaea1-59f2-4be2-80bf-176368f401d7"). InnerVolumeSpecName "kube-api-access-zp6bq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:38:45 crc kubenswrapper[4897]: I0228 13:38:45.020974 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zp6bq\" (UniqueName: \"kubernetes.io/projected/23ceaea1-59f2-4be2-80bf-176368f401d7-kube-api-access-zp6bq\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:45 crc kubenswrapper[4897]: I0228 13:38:45.031058 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23ceaea1-59f2-4be2-80bf-176368f401d7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "23ceaea1-59f2-4be2-80bf-176368f401d7" (UID: "23ceaea1-59f2-4be2-80bf-176368f401d7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:38:45 crc kubenswrapper[4897]: I0228 13:38:45.062861 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23ceaea1-59f2-4be2-80bf-176368f401d7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "23ceaea1-59f2-4be2-80bf-176368f401d7" (UID: "23ceaea1-59f2-4be2-80bf-176368f401d7"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:38:45 crc kubenswrapper[4897]: I0228 13:38:45.068290 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23ceaea1-59f2-4be2-80bf-176368f401d7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "23ceaea1-59f2-4be2-80bf-176368f401d7" (UID: "23ceaea1-59f2-4be2-80bf-176368f401d7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:38:45 crc kubenswrapper[4897]: I0228 13:38:45.071876 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7df779db98-ljwk8" Feb 28 13:38:45 crc kubenswrapper[4897]: W0228 13:38:45.116735 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7bd80a3_8929_44e4_b11e_3daebb9c7f54.slice/crio-e3028451b6910af1e00f4af7c9921d634304c1cf9ab9e150ad7c051bb6ddf046 WatchSource:0}: Error finding container e3028451b6910af1e00f4af7c9921d634304c1cf9ab9e150ad7c051bb6ddf046: Status 404 returned error can't find the container with id e3028451b6910af1e00f4af7c9921d634304c1cf9ab9e150ad7c051bb6ddf046 Feb 28 13:38:45 crc kubenswrapper[4897]: I0228 13:38:45.126290 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23ceaea1-59f2-4be2-80bf-176368f401d7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:45 crc kubenswrapper[4897]: I0228 13:38:45.126327 4897 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23ceaea1-59f2-4be2-80bf-176368f401d7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:45 crc kubenswrapper[4897]: I0228 13:38:45.126336 4897 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23ceaea1-59f2-4be2-80bf-176368f401d7-dns-swift-storage-0\") on node \"crc\" 
DevicePath \"\"" Feb 28 13:38:45 crc kubenswrapper[4897]: I0228 13:38:45.131183 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 28 13:38:45 crc kubenswrapper[4897]: I0228 13:38:45.137517 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23ceaea1-59f2-4be2-80bf-176368f401d7-config" (OuterVolumeSpecName: "config") pod "23ceaea1-59f2-4be2-80bf-176368f401d7" (UID: "23ceaea1-59f2-4be2-80bf-176368f401d7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:38:45 crc kubenswrapper[4897]: I0228 13:38:45.147658 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23ceaea1-59f2-4be2-80bf-176368f401d7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "23ceaea1-59f2-4be2-80bf-176368f401d7" (UID: "23ceaea1-59f2-4be2-80bf-176368f401d7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:38:45 crc kubenswrapper[4897]: I0228 13:38:45.227603 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23ceaea1-59f2-4be2-80bf-176368f401d7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:45 crc kubenswrapper[4897]: I0228 13:38:45.227639 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23ceaea1-59f2-4be2-80bf-176368f401d7-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:45 crc kubenswrapper[4897]: I0228 13:38:45.384030 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79cdbcc745-rbcfg"] Feb 28 13:38:45 crc kubenswrapper[4897]: I0228 13:38:45.634687 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 28 13:38:45 crc kubenswrapper[4897]: I0228 13:38:45.800687 4897 generic.go:334] "Generic (PLEG): container finished" 
podID="015cae83-dbd9-4d4b-84f6-e90aa405acf2" containerID="6dcf84bd9d647e4d07abd6c273a6af57add8f9bad79dad135081efbd28793f8e" exitCode=137 Feb 28 13:38:45 crc kubenswrapper[4897]: I0228 13:38:45.800955 4897 generic.go:334] "Generic (PLEG): container finished" podID="015cae83-dbd9-4d4b-84f6-e90aa405acf2" containerID="c99617d67e5f0d9c56affd27bad0e543044ee0bfb1d0e4b8f66e6706e8ea3ea1" exitCode=137 Feb 28 13:38:45 crc kubenswrapper[4897]: I0228 13:38:45.801015 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68bc5769f5-kt85c" event={"ID":"015cae83-dbd9-4d4b-84f6-e90aa405acf2","Type":"ContainerDied","Data":"6dcf84bd9d647e4d07abd6c273a6af57add8f9bad79dad135081efbd28793f8e"} Feb 28 13:38:45 crc kubenswrapper[4897]: I0228 13:38:45.801041 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68bc5769f5-kt85c" event={"ID":"015cae83-dbd9-4d4b-84f6-e90aa405acf2","Type":"ContainerDied","Data":"c99617d67e5f0d9c56affd27bad0e543044ee0bfb1d0e4b8f66e6706e8ea3ea1"} Feb 28 13:38:45 crc kubenswrapper[4897]: I0228 13:38:45.813334 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cdbcc745-rbcfg" event={"ID":"a5f83c96-ea10-4ba7-a711-5d87f1bf412e","Type":"ContainerStarted","Data":"9cf317770d72d1de71176453818a16568c95678c7d365435cf509e60450f057e"} Feb 28 13:38:45 crc kubenswrapper[4897]: I0228 13:38:45.859978 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d7bd80a3-8929-44e4-b11e-3daebb9c7f54","Type":"ContainerStarted","Data":"e3028451b6910af1e00f4af7c9921d634304c1cf9ab9e150ad7c051bb6ddf046"} Feb 28 13:38:45 crc kubenswrapper[4897]: I0228 13:38:45.862208 4897 generic.go:334] "Generic (PLEG): container finished" podID="cef93ead-4ac6-4a39-aa6f-17c85a2a9a81" containerID="1a5891b0efb61498c194c201cf1403fc4c3055b8cd8e3b3452b8a2cc45cd0d86" exitCode=137 Feb 28 13:38:45 crc kubenswrapper[4897]: I0228 13:38:45.862235 4897 generic.go:334] "Generic (PLEG): container 
finished" podID="cef93ead-4ac6-4a39-aa6f-17c85a2a9a81" containerID="9721bc594ef84e9aee28c0e39b2571bc5828856084c9ee6a6900df961e383587" exitCode=137 Feb 28 13:38:45 crc kubenswrapper[4897]: I0228 13:38:45.862292 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-85bccd86cc-mcgvg" event={"ID":"cef93ead-4ac6-4a39-aa6f-17c85a2a9a81","Type":"ContainerDied","Data":"1a5891b0efb61498c194c201cf1403fc4c3055b8cd8e3b3452b8a2cc45cd0d86"} Feb 28 13:38:45 crc kubenswrapper[4897]: I0228 13:38:45.862335 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-85bccd86cc-mcgvg" event={"ID":"cef93ead-4ac6-4a39-aa6f-17c85a2a9a81","Type":"ContainerDied","Data":"9721bc594ef84e9aee28c0e39b2571bc5828856084c9ee6a6900df961e383587"} Feb 28 13:38:45 crc kubenswrapper[4897]: I0228 13:38:45.869476 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a004b575-3521-45fd-84d2-9c2c46cac69a","Type":"ContainerStarted","Data":"d16823d2076a7b771a087eb82bc3bda5e2226615e2fcd1a4ace4e255e23d438b"} Feb 28 13:38:45 crc kubenswrapper[4897]: I0228 13:38:45.874658 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59bbf6bdfc-r6m8d" event={"ID":"23ceaea1-59f2-4be2-80bf-176368f401d7","Type":"ContainerDied","Data":"0d0aaf491338cf337bd6df6d1ff4989762cee896d840f67e40141aa8e66029f2"} Feb 28 13:38:45 crc kubenswrapper[4897]: I0228 13:38:45.874690 4897 scope.go:117] "RemoveContainer" containerID="6ded0dcbb72dad3a71c10215f27757095134e6f52e9725f8c9030536abe854ec" Feb 28 13:38:45 crc kubenswrapper[4897]: I0228 13:38:45.874839 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59bbf6bdfc-r6m8d" Feb 28 13:38:45 crc kubenswrapper[4897]: I0228 13:38:45.925272 4897 scope.go:117] "RemoveContainer" containerID="30f7823fb2f475aa35a3176b59274a60682420c062201452f41773c3ec105492" Feb 28 13:38:45 crc kubenswrapper[4897]: I0228 13:38:45.957547 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59bbf6bdfc-r6m8d"] Feb 28 13:38:45 crc kubenswrapper[4897]: I0228 13:38:45.966152 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59bbf6bdfc-r6m8d"] Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.034472 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-85bccd86cc-mcgvg" Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.061940 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cef93ead-4ac6-4a39-aa6f-17c85a2a9a81-scripts\") pod \"cef93ead-4ac6-4a39-aa6f-17c85a2a9a81\" (UID: \"cef93ead-4ac6-4a39-aa6f-17c85a2a9a81\") " Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.062010 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cef93ead-4ac6-4a39-aa6f-17c85a2a9a81-config-data\") pod \"cef93ead-4ac6-4a39-aa6f-17c85a2a9a81\" (UID: \"cef93ead-4ac6-4a39-aa6f-17c85a2a9a81\") " Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.062032 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cef93ead-4ac6-4a39-aa6f-17c85a2a9a81-logs\") pod \"cef93ead-4ac6-4a39-aa6f-17c85a2a9a81\" (UID: \"cef93ead-4ac6-4a39-aa6f-17c85a2a9a81\") " Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.062057 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/cef93ead-4ac6-4a39-aa6f-17c85a2a9a81-horizon-secret-key\") pod \"cef93ead-4ac6-4a39-aa6f-17c85a2a9a81\" (UID: \"cef93ead-4ac6-4a39-aa6f-17c85a2a9a81\") " Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.062289 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bms8x\" (UniqueName: \"kubernetes.io/projected/cef93ead-4ac6-4a39-aa6f-17c85a2a9a81-kube-api-access-bms8x\") pod \"cef93ead-4ac6-4a39-aa6f-17c85a2a9a81\" (UID: \"cef93ead-4ac6-4a39-aa6f-17c85a2a9a81\") " Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.067085 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cef93ead-4ac6-4a39-aa6f-17c85a2a9a81-logs" (OuterVolumeSpecName: "logs") pod "cef93ead-4ac6-4a39-aa6f-17c85a2a9a81" (UID: "cef93ead-4ac6-4a39-aa6f-17c85a2a9a81"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.074480 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cef93ead-4ac6-4a39-aa6f-17c85a2a9a81-kube-api-access-bms8x" (OuterVolumeSpecName: "kube-api-access-bms8x") pod "cef93ead-4ac6-4a39-aa6f-17c85a2a9a81" (UID: "cef93ead-4ac6-4a39-aa6f-17c85a2a9a81"). InnerVolumeSpecName "kube-api-access-bms8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.076407 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cef93ead-4ac6-4a39-aa6f-17c85a2a9a81-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "cef93ead-4ac6-4a39-aa6f-17c85a2a9a81" (UID: "cef93ead-4ac6-4a39-aa6f-17c85a2a9a81"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.115836 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cef93ead-4ac6-4a39-aa6f-17c85a2a9a81-config-data" (OuterVolumeSpecName: "config-data") pod "cef93ead-4ac6-4a39-aa6f-17c85a2a9a81" (UID: "cef93ead-4ac6-4a39-aa6f-17c85a2a9a81"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.120850 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-68bc5769f5-kt85c" Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.127667 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.129940 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cef93ead-4ac6-4a39-aa6f-17c85a2a9a81-scripts" (OuterVolumeSpecName: "scripts") pod "cef93ead-4ac6-4a39-aa6f-17c85a2a9a81" (UID: "cef93ead-4ac6-4a39-aa6f-17c85a2a9a81"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.131722 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.161401 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.166633 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/015cae83-dbd9-4d4b-84f6-e90aa405acf2-scripts\") pod \"015cae83-dbd9-4d4b-84f6-e90aa405acf2\" (UID: \"015cae83-dbd9-4d4b-84f6-e90aa405acf2\") " Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.166699 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/015cae83-dbd9-4d4b-84f6-e90aa405acf2-horizon-secret-key\") pod \"015cae83-dbd9-4d4b-84f6-e90aa405acf2\" (UID: \"015cae83-dbd9-4d4b-84f6-e90aa405acf2\") " Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.166754 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9k579\" (UniqueName: \"kubernetes.io/projected/015cae83-dbd9-4d4b-84f6-e90aa405acf2-kube-api-access-9k579\") pod \"015cae83-dbd9-4d4b-84f6-e90aa405acf2\" (UID: \"015cae83-dbd9-4d4b-84f6-e90aa405acf2\") " Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.166876 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/015cae83-dbd9-4d4b-84f6-e90aa405acf2-config-data\") pod \"015cae83-dbd9-4d4b-84f6-e90aa405acf2\" (UID: \"015cae83-dbd9-4d4b-84f6-e90aa405acf2\") " Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.166988 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/015cae83-dbd9-4d4b-84f6-e90aa405acf2-logs\") pod \"015cae83-dbd9-4d4b-84f6-e90aa405acf2\" (UID: \"015cae83-dbd9-4d4b-84f6-e90aa405acf2\") " Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.173668 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/015cae83-dbd9-4d4b-84f6-e90aa405acf2-logs" (OuterVolumeSpecName: "logs") pod "015cae83-dbd9-4d4b-84f6-e90aa405acf2" (UID: "015cae83-dbd9-4d4b-84f6-e90aa405acf2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.183464 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bms8x\" (UniqueName: \"kubernetes.io/projected/cef93ead-4ac6-4a39-aa6f-17c85a2a9a81-kube-api-access-bms8x\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.183507 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cef93ead-4ac6-4a39-aa6f-17c85a2a9a81-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.183518 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cef93ead-4ac6-4a39-aa6f-17c85a2a9a81-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.183534 4897 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cef93ead-4ac6-4a39-aa6f-17c85a2a9a81-logs\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.183546 4897 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cef93ead-4ac6-4a39-aa6f-17c85a2a9a81-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.183555 4897 reconciler_common.go:293] "Volume 
detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/015cae83-dbd9-4d4b-84f6-e90aa405acf2-logs\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.184738 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/015cae83-dbd9-4d4b-84f6-e90aa405acf2-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "015cae83-dbd9-4d4b-84f6-e90aa405acf2" (UID: "015cae83-dbd9-4d4b-84f6-e90aa405acf2"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.185500 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/015cae83-dbd9-4d4b-84f6-e90aa405acf2-kube-api-access-9k579" (OuterVolumeSpecName: "kube-api-access-9k579") pod "015cae83-dbd9-4d4b-84f6-e90aa405acf2" (UID: "015cae83-dbd9-4d4b-84f6-e90aa405acf2"). InnerVolumeSpecName "kube-api-access-9k579". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.215966 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/015cae83-dbd9-4d4b-84f6-e90aa405acf2-scripts" (OuterVolumeSpecName: "scripts") pod "015cae83-dbd9-4d4b-84f6-e90aa405acf2" (UID: "015cae83-dbd9-4d4b-84f6-e90aa405acf2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.260046 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/015cae83-dbd9-4d4b-84f6-e90aa405acf2-config-data" (OuterVolumeSpecName: "config-data") pod "015cae83-dbd9-4d4b-84f6-e90aa405acf2" (UID: "015cae83-dbd9-4d4b-84f6-e90aa405acf2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.300576 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/015cae83-dbd9-4d4b-84f6-e90aa405acf2-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.300610 4897 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/015cae83-dbd9-4d4b-84f6-e90aa405acf2-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.300624 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9k579\" (UniqueName: \"kubernetes.io/projected/015cae83-dbd9-4d4b-84f6-e90aa405acf2-kube-api-access-9k579\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.300633 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/015cae83-dbd9-4d4b-84f6-e90aa405acf2-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.570415 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23ceaea1-59f2-4be2-80bf-176368f401d7" path="/var/lib/kubelet/pods/23ceaea1-59f2-4be2-80bf-176368f401d7/volumes" Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.856957 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-74cc48945b-m8vv6" Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.896632 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-68bc5769f5-kt85c" Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.896622 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68bc5769f5-kt85c" event={"ID":"015cae83-dbd9-4d4b-84f6-e90aa405acf2","Type":"ContainerDied","Data":"41b0470c98c47a27ba6690861025ce0d0ba09118d48ccc3568a808d9acb60781"} Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.897107 4897 scope.go:117] "RemoveContainer" containerID="6dcf84bd9d647e4d07abd6c273a6af57add8f9bad79dad135081efbd28793f8e" Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.898219 4897 generic.go:334] "Generic (PLEG): container finished" podID="a5f83c96-ea10-4ba7-a711-5d87f1bf412e" containerID="7c8672d83edb549270607a5be029baed438ce93ed4af26bc194443e6b3d4cecd" exitCode=0 Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.898292 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cdbcc745-rbcfg" event={"ID":"a5f83c96-ea10-4ba7-a711-5d87f1bf412e","Type":"ContainerDied","Data":"7c8672d83edb549270607a5be029baed438ce93ed4af26bc194443e6b3d4cecd"} Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.905098 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-85bccd86cc-mcgvg" event={"ID":"cef93ead-4ac6-4a39-aa6f-17c85a2a9a81","Type":"ContainerDied","Data":"a9fb18997cd0b01a40bc1484e2080e23d330ce4bc6052bf70654c97adab052d0"} Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.905224 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-85bccd86cc-mcgvg" Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.922469 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.940213 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-68bc5769f5-kt85c"] Feb 28 13:38:46 crc kubenswrapper[4897]: I0228 13:38:46.949766 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-68bc5769f5-kt85c"] Feb 28 13:38:47 crc kubenswrapper[4897]: I0228 13:38:47.001937 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-74cc48945b-m8vv6" Feb 28 13:38:47 crc kubenswrapper[4897]: I0228 13:38:47.096373 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-85bccd86cc-mcgvg"] Feb 28 13:38:47 crc kubenswrapper[4897]: I0228 13:38:47.112073 4897 scope.go:117] "RemoveContainer" containerID="c99617d67e5f0d9c56affd27bad0e543044ee0bfb1d0e4b8f66e6706e8ea3ea1" Feb 28 13:38:47 crc kubenswrapper[4897]: I0228 13:38:47.120859 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-85bccd86cc-mcgvg"] Feb 28 13:38:47 crc kubenswrapper[4897]: I0228 13:38:47.160432 4897 scope.go:117] "RemoveContainer" containerID="1a5891b0efb61498c194c201cf1403fc4c3055b8cd8e3b3452b8a2cc45cd0d86" Feb 28 13:38:47 crc kubenswrapper[4897]: I0228 13:38:47.459475 4897 scope.go:117] "RemoveContainer" containerID="9721bc594ef84e9aee28c0e39b2571bc5828856084c9ee6a6900df961e383587" Feb 28 13:38:47 crc kubenswrapper[4897]: I0228 13:38:47.643070 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7df779db98-ljwk8" Feb 28 13:38:47 crc kubenswrapper[4897]: I0228 13:38:47.792660 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6fb67c45d-s75qr"] Feb 28 13:38:47 crc kubenswrapper[4897]: I0228 13:38:47.793157 4897 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6fb67c45d-s75qr" podUID="6102738c-6c77-48c6-87e1-67853cf8ce43" containerName="horizon-log" containerID="cri-o://ac909c528d6a37a6661858d1af31fcfccc18f72e2110aae3aef663642f29b3a7" gracePeriod=30 Feb 28 13:38:47 crc kubenswrapper[4897]: I0228 13:38:47.793617 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6fb67c45d-s75qr" podUID="6102738c-6c77-48c6-87e1-67853cf8ce43" containerName="horizon" containerID="cri-o://2fc3fb7a660268704953fa4bd24b93db1256492df8f5818ec8132b76f2ceb191" gracePeriod=30 Feb 28 13:38:47 crc kubenswrapper[4897]: I0228 13:38:47.809358 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6fb67c45d-s75qr" podUID="6102738c-6c77-48c6-87e1-67853cf8ce43" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.163:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Feb 28 13:38:47 crc kubenswrapper[4897]: I0228 13:38:47.948890 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a004b575-3521-45fd-84d2-9c2c46cac69a","Type":"ContainerStarted","Data":"3eae7b54b2b78ca9c900381aa9c5e174ad401ba8a9ba9a8bf6497efa5fc18009"} Feb 28 13:38:47 crc kubenswrapper[4897]: I0228 13:38:47.971558 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cdbcc745-rbcfg" event={"ID":"a5f83c96-ea10-4ba7-a711-5d87f1bf412e","Type":"ContainerStarted","Data":"6d63fe3b3d290fedab1c117f6f6e4c6410336b82bf87d96cef42f9946b8a4b81"} Feb 28 13:38:47 crc kubenswrapper[4897]: I0228 13:38:47.972421 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79cdbcc745-rbcfg" Feb 28 13:38:47 crc kubenswrapper[4897]: I0228 13:38:47.979069 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"d7bd80a3-8929-44e4-b11e-3daebb9c7f54","Type":"ContainerStarted","Data":"955e4ca8ce72b283046730e5a7fd28286923110a12bdffd80e5b5e5fdaa2f13d"} Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.244894 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-844df98d6-6ncv9" Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.279022 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79cdbcc745-rbcfg" podStartSLOduration=4.2790045 podStartE2EDuration="4.2790045s" podCreationTimestamp="2026-02-28 13:38:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:38:47.99560522 +0000 UTC m=+1342.237925877" watchObservedRunningTime="2026-02-28 13:38:48.2790045 +0000 UTC m=+1342.521325157" Feb 28 13:38:48 crc kubenswrapper[4897]: E0228 13:38:48.461653 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.470078 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="015cae83-dbd9-4d4b-84f6-e90aa405acf2" path="/var/lib/kubelet/pods/015cae83-dbd9-4d4b-84f6-e90aa405acf2/volumes" Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.470964 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cef93ead-4ac6-4a39-aa6f-17c85a2a9a81" path="/var/lib/kubelet/pods/cef93ead-4ac6-4a39-aa6f-17c85a2a9a81/volumes" Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.598093 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-f69d796b5-nrscn"] Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 
13:38:48.598382 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-f69d796b5-nrscn" podUID="5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5" containerName="neutron-api" containerID="cri-o://3eebac88a33785fae6a02302def0e90cc3817137ba470bb437e8d35d226997de" gracePeriod=30 Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.598626 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-f69d796b5-nrscn" podUID="5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5" containerName="neutron-httpd" containerID="cri-o://23603a0483aa0055c1819b9fe7b8a0e06443ac977b7119323f59b08475d6dea8" gracePeriod=30 Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.607658 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-f69d796b5-nrscn" podUID="5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.174:9696/\": read tcp 10.217.0.2:37764->10.217.0.174:9696: read: connection reset by peer" Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.629625 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-59b7cd74f9-xphhh"] Feb 28 13:38:48 crc kubenswrapper[4897]: E0228 13:38:48.630045 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="015cae83-dbd9-4d4b-84f6-e90aa405acf2" containerName="horizon" Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.630065 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="015cae83-dbd9-4d4b-84f6-e90aa405acf2" containerName="horizon" Feb 28 13:38:48 crc kubenswrapper[4897]: E0228 13:38:48.630113 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cef93ead-4ac6-4a39-aa6f-17c85a2a9a81" containerName="horizon" Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.630121 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="cef93ead-4ac6-4a39-aa6f-17c85a2a9a81" containerName="horizon" Feb 28 13:38:48 crc 
kubenswrapper[4897]: E0228 13:38:48.630132 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23ceaea1-59f2-4be2-80bf-176368f401d7" containerName="dnsmasq-dns" Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.630139 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="23ceaea1-59f2-4be2-80bf-176368f401d7" containerName="dnsmasq-dns" Feb 28 13:38:48 crc kubenswrapper[4897]: E0228 13:38:48.630154 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23ceaea1-59f2-4be2-80bf-176368f401d7" containerName="init" Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.630160 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="23ceaea1-59f2-4be2-80bf-176368f401d7" containerName="init" Feb 28 13:38:48 crc kubenswrapper[4897]: E0228 13:38:48.630169 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="015cae83-dbd9-4d4b-84f6-e90aa405acf2" containerName="horizon-log" Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.630175 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="015cae83-dbd9-4d4b-84f6-e90aa405acf2" containerName="horizon-log" Feb 28 13:38:48 crc kubenswrapper[4897]: E0228 13:38:48.630192 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cef93ead-4ac6-4a39-aa6f-17c85a2a9a81" containerName="horizon-log" Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.630198 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="cef93ead-4ac6-4a39-aa6f-17c85a2a9a81" containerName="horizon-log" Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.630442 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="23ceaea1-59f2-4be2-80bf-176368f401d7" containerName="dnsmasq-dns" Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.630454 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="cef93ead-4ac6-4a39-aa6f-17c85a2a9a81" containerName="horizon" Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.630465 4897 
memory_manager.go:354] "RemoveStaleState removing state" podUID="015cae83-dbd9-4d4b-84f6-e90aa405acf2" containerName="horizon-log" Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.630476 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="cef93ead-4ac6-4a39-aa6f-17c85a2a9a81" containerName="horizon-log" Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.630490 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="015cae83-dbd9-4d4b-84f6-e90aa405acf2" containerName="horizon" Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.631928 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59b7cd74f9-xphhh" Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.647769 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59b7cd74f9-xphhh"] Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.684667 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cfe88e43-2315-4773-85fa-459dab7fb23d-config\") pod \"neutron-59b7cd74f9-xphhh\" (UID: \"cfe88e43-2315-4773-85fa-459dab7fb23d\") " pod="openstack/neutron-59b7cd74f9-xphhh" Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.684708 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfe88e43-2315-4773-85fa-459dab7fb23d-public-tls-certs\") pod \"neutron-59b7cd74f9-xphhh\" (UID: \"cfe88e43-2315-4773-85fa-459dab7fb23d\") " pod="openstack/neutron-59b7cd74f9-xphhh" Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.684781 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfe88e43-2315-4773-85fa-459dab7fb23d-internal-tls-certs\") pod \"neutron-59b7cd74f9-xphhh\" (UID: 
\"cfe88e43-2315-4773-85fa-459dab7fb23d\") " pod="openstack/neutron-59b7cd74f9-xphhh" Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.684869 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe88e43-2315-4773-85fa-459dab7fb23d-combined-ca-bundle\") pod \"neutron-59b7cd74f9-xphhh\" (UID: \"cfe88e43-2315-4773-85fa-459dab7fb23d\") " pod="openstack/neutron-59b7cd74f9-xphhh" Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.684895 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cfe88e43-2315-4773-85fa-459dab7fb23d-httpd-config\") pod \"neutron-59b7cd74f9-xphhh\" (UID: \"cfe88e43-2315-4773-85fa-459dab7fb23d\") " pod="openstack/neutron-59b7cd74f9-xphhh" Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.684923 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfe88e43-2315-4773-85fa-459dab7fb23d-ovndb-tls-certs\") pod \"neutron-59b7cd74f9-xphhh\" (UID: \"cfe88e43-2315-4773-85fa-459dab7fb23d\") " pod="openstack/neutron-59b7cd74f9-xphhh" Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.685058 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss2qj\" (UniqueName: \"kubernetes.io/projected/cfe88e43-2315-4773-85fa-459dab7fb23d-kube-api-access-ss2qj\") pod \"neutron-59b7cd74f9-xphhh\" (UID: \"cfe88e43-2315-4773-85fa-459dab7fb23d\") " pod="openstack/neutron-59b7cd74f9-xphhh" Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.786832 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cfe88e43-2315-4773-85fa-459dab7fb23d-config\") pod \"neutron-59b7cd74f9-xphhh\" (UID: 
\"cfe88e43-2315-4773-85fa-459dab7fb23d\") " pod="openstack/neutron-59b7cd74f9-xphhh" Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.786876 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfe88e43-2315-4773-85fa-459dab7fb23d-public-tls-certs\") pod \"neutron-59b7cd74f9-xphhh\" (UID: \"cfe88e43-2315-4773-85fa-459dab7fb23d\") " pod="openstack/neutron-59b7cd74f9-xphhh" Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.786914 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfe88e43-2315-4773-85fa-459dab7fb23d-internal-tls-certs\") pod \"neutron-59b7cd74f9-xphhh\" (UID: \"cfe88e43-2315-4773-85fa-459dab7fb23d\") " pod="openstack/neutron-59b7cd74f9-xphhh" Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.786957 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe88e43-2315-4773-85fa-459dab7fb23d-combined-ca-bundle\") pod \"neutron-59b7cd74f9-xphhh\" (UID: \"cfe88e43-2315-4773-85fa-459dab7fb23d\") " pod="openstack/neutron-59b7cd74f9-xphhh" Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.786983 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cfe88e43-2315-4773-85fa-459dab7fb23d-httpd-config\") pod \"neutron-59b7cd74f9-xphhh\" (UID: \"cfe88e43-2315-4773-85fa-459dab7fb23d\") " pod="openstack/neutron-59b7cd74f9-xphhh" Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.787007 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfe88e43-2315-4773-85fa-459dab7fb23d-ovndb-tls-certs\") pod \"neutron-59b7cd74f9-xphhh\" (UID: \"cfe88e43-2315-4773-85fa-459dab7fb23d\") " pod="openstack/neutron-59b7cd74f9-xphhh" Feb 28 
13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.787040 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss2qj\" (UniqueName: \"kubernetes.io/projected/cfe88e43-2315-4773-85fa-459dab7fb23d-kube-api-access-ss2qj\") pod \"neutron-59b7cd74f9-xphhh\" (UID: \"cfe88e43-2315-4773-85fa-459dab7fb23d\") " pod="openstack/neutron-59b7cd74f9-xphhh" Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.799144 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfe88e43-2315-4773-85fa-459dab7fb23d-public-tls-certs\") pod \"neutron-59b7cd74f9-xphhh\" (UID: \"cfe88e43-2315-4773-85fa-459dab7fb23d\") " pod="openstack/neutron-59b7cd74f9-xphhh" Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.801506 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfe88e43-2315-4773-85fa-459dab7fb23d-ovndb-tls-certs\") pod \"neutron-59b7cd74f9-xphhh\" (UID: \"cfe88e43-2315-4773-85fa-459dab7fb23d\") " pod="openstack/neutron-59b7cd74f9-xphhh" Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.801785 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cfe88e43-2315-4773-85fa-459dab7fb23d-httpd-config\") pod \"neutron-59b7cd74f9-xphhh\" (UID: \"cfe88e43-2315-4773-85fa-459dab7fb23d\") " pod="openstack/neutron-59b7cd74f9-xphhh" Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.801852 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe88e43-2315-4773-85fa-459dab7fb23d-combined-ca-bundle\") pod \"neutron-59b7cd74f9-xphhh\" (UID: \"cfe88e43-2315-4773-85fa-459dab7fb23d\") " pod="openstack/neutron-59b7cd74f9-xphhh" Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.804710 4897 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfe88e43-2315-4773-85fa-459dab7fb23d-internal-tls-certs\") pod \"neutron-59b7cd74f9-xphhh\" (UID: \"cfe88e43-2315-4773-85fa-459dab7fb23d\") " pod="openstack/neutron-59b7cd74f9-xphhh" Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.816075 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/cfe88e43-2315-4773-85fa-459dab7fb23d-config\") pod \"neutron-59b7cd74f9-xphhh\" (UID: \"cfe88e43-2315-4773-85fa-459dab7fb23d\") " pod="openstack/neutron-59b7cd74f9-xphhh" Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.834604 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss2qj\" (UniqueName: \"kubernetes.io/projected/cfe88e43-2315-4773-85fa-459dab7fb23d-kube-api-access-ss2qj\") pod \"neutron-59b7cd74f9-xphhh\" (UID: \"cfe88e43-2315-4773-85fa-459dab7fb23d\") " pod="openstack/neutron-59b7cd74f9-xphhh" Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.949665 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-59b7cd74f9-xphhh" Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.988963 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a004b575-3521-45fd-84d2-9c2c46cac69a","Type":"ContainerStarted","Data":"afeeb25a0c7026d1fd79cbb33761bbfe81ade4b2eaf5756d2dcf8f7dc70051fe"} Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.989426 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="a004b575-3521-45fd-84d2-9c2c46cac69a" containerName="cinder-api-log" containerID="cri-o://3eae7b54b2b78ca9c900381aa9c5e174ad401ba8a9ba9a8bf6497efa5fc18009" gracePeriod=30 Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.989612 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 28 13:38:48 crc kubenswrapper[4897]: I0228 13:38:48.990004 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="a004b575-3521-45fd-84d2-9c2c46cac69a" containerName="cinder-api" containerID="cri-o://afeeb25a0c7026d1fd79cbb33761bbfe81ade4b2eaf5756d2dcf8f7dc70051fe" gracePeriod=30 Feb 28 13:38:49 crc kubenswrapper[4897]: I0228 13:38:49.016677 4897 generic.go:334] "Generic (PLEG): container finished" podID="5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5" containerID="23603a0483aa0055c1819b9fe7b8a0e06443ac977b7119323f59b08475d6dea8" exitCode=0 Feb 28 13:38:49 crc kubenswrapper[4897]: I0228 13:38:49.016927 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f69d796b5-nrscn" event={"ID":"5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5","Type":"ContainerDied","Data":"23603a0483aa0055c1819b9fe7b8a0e06443ac977b7119323f59b08475d6dea8"} Feb 28 13:38:49 crc kubenswrapper[4897]: I0228 13:38:49.028101 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"d7bd80a3-8929-44e4-b11e-3daebb9c7f54","Type":"ContainerStarted","Data":"8eb98b688eba08622814e0628eccdda63eb0cd65fe57b425086596816f09cba6"} Feb 28 13:38:49 crc kubenswrapper[4897]: I0228 13:38:49.029167 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.02915645 podStartE2EDuration="5.02915645s" podCreationTimestamp="2026-02-28 13:38:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:38:49.017241153 +0000 UTC m=+1343.259561810" watchObservedRunningTime="2026-02-28 13:38:49.02915645 +0000 UTC m=+1343.271477107" Feb 28 13:38:49 crc kubenswrapper[4897]: I0228 13:38:49.052348 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.466276841 podStartE2EDuration="6.052331995s" podCreationTimestamp="2026-02-28 13:38:43 +0000 UTC" firstStartedPulling="2026-02-28 13:38:45.134397042 +0000 UTC m=+1339.376717699" lastFinishedPulling="2026-02-28 13:38:45.720452196 +0000 UTC m=+1339.962772853" observedRunningTime="2026-02-28 13:38:49.048266461 +0000 UTC m=+1343.290587118" watchObservedRunningTime="2026-02-28 13:38:49.052331995 +0000 UTC m=+1343.294652652" Feb 28 13:38:49 crc kubenswrapper[4897]: I0228 13:38:49.404402 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 28 13:38:49 crc kubenswrapper[4897]: I0228 13:38:49.675796 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59b7cd74f9-xphhh"] Feb 28 13:38:49 crc kubenswrapper[4897]: W0228 13:38:49.698713 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcfe88e43_2315_4773_85fa_459dab7fb23d.slice/crio-dfbb48b09d34ffb456517cd109d97c1dcbf9c911c2905a8db6c041b8f426e89a WatchSource:0}: Error finding 
container dfbb48b09d34ffb456517cd109d97c1dcbf9c911c2905a8db6c041b8f426e89a: Status 404 returned error can't find the container with id dfbb48b09d34ffb456517cd109d97c1dcbf9c911c2905a8db6c041b8f426e89a Feb 28 13:38:49 crc kubenswrapper[4897]: I0228 13:38:49.747145 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 28 13:38:49 crc kubenswrapper[4897]: I0228 13:38:49.819812 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a004b575-3521-45fd-84d2-9c2c46cac69a-logs\") pod \"a004b575-3521-45fd-84d2-9c2c46cac69a\" (UID: \"a004b575-3521-45fd-84d2-9c2c46cac69a\") " Feb 28 13:38:49 crc kubenswrapper[4897]: I0228 13:38:49.819888 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a004b575-3521-45fd-84d2-9c2c46cac69a-config-data\") pod \"a004b575-3521-45fd-84d2-9c2c46cac69a\" (UID: \"a004b575-3521-45fd-84d2-9c2c46cac69a\") " Feb 28 13:38:49 crc kubenswrapper[4897]: I0228 13:38:49.819970 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a004b575-3521-45fd-84d2-9c2c46cac69a-scripts\") pod \"a004b575-3521-45fd-84d2-9c2c46cac69a\" (UID: \"a004b575-3521-45fd-84d2-9c2c46cac69a\") " Feb 28 13:38:49 crc kubenswrapper[4897]: I0228 13:38:49.820005 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a004b575-3521-45fd-84d2-9c2c46cac69a-etc-machine-id\") pod \"a004b575-3521-45fd-84d2-9c2c46cac69a\" (UID: \"a004b575-3521-45fd-84d2-9c2c46cac69a\") " Feb 28 13:38:49 crc kubenswrapper[4897]: I0228 13:38:49.820026 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/a004b575-3521-45fd-84d2-9c2c46cac69a-config-data-custom\") pod \"a004b575-3521-45fd-84d2-9c2c46cac69a\" (UID: \"a004b575-3521-45fd-84d2-9c2c46cac69a\") " Feb 28 13:38:49 crc kubenswrapper[4897]: I0228 13:38:49.820119 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4f7k6\" (UniqueName: \"kubernetes.io/projected/a004b575-3521-45fd-84d2-9c2c46cac69a-kube-api-access-4f7k6\") pod \"a004b575-3521-45fd-84d2-9c2c46cac69a\" (UID: \"a004b575-3521-45fd-84d2-9c2c46cac69a\") " Feb 28 13:38:49 crc kubenswrapper[4897]: I0228 13:38:49.820138 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a004b575-3521-45fd-84d2-9c2c46cac69a-combined-ca-bundle\") pod \"a004b575-3521-45fd-84d2-9c2c46cac69a\" (UID: \"a004b575-3521-45fd-84d2-9c2c46cac69a\") " Feb 28 13:38:49 crc kubenswrapper[4897]: I0228 13:38:49.824457 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a004b575-3521-45fd-84d2-9c2c46cac69a-logs" (OuterVolumeSpecName: "logs") pod "a004b575-3521-45fd-84d2-9c2c46cac69a" (UID: "a004b575-3521-45fd-84d2-9c2c46cac69a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:38:49 crc kubenswrapper[4897]: I0228 13:38:49.824804 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a004b575-3521-45fd-84d2-9c2c46cac69a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a004b575-3521-45fd-84d2-9c2c46cac69a" (UID: "a004b575-3521-45fd-84d2-9c2c46cac69a"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 13:38:49 crc kubenswrapper[4897]: I0228 13:38:49.828466 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a004b575-3521-45fd-84d2-9c2c46cac69a-scripts" (OuterVolumeSpecName: "scripts") pod "a004b575-3521-45fd-84d2-9c2c46cac69a" (UID: "a004b575-3521-45fd-84d2-9c2c46cac69a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:49 crc kubenswrapper[4897]: I0228 13:38:49.828941 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a004b575-3521-45fd-84d2-9c2c46cac69a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a004b575-3521-45fd-84d2-9c2c46cac69a" (UID: "a004b575-3521-45fd-84d2-9c2c46cac69a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:49 crc kubenswrapper[4897]: I0228 13:38:49.830778 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a004b575-3521-45fd-84d2-9c2c46cac69a-kube-api-access-4f7k6" (OuterVolumeSpecName: "kube-api-access-4f7k6") pod "a004b575-3521-45fd-84d2-9c2c46cac69a" (UID: "a004b575-3521-45fd-84d2-9c2c46cac69a"). InnerVolumeSpecName "kube-api-access-4f7k6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:38:49 crc kubenswrapper[4897]: I0228 13:38:49.860494 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6cc5d7cb8-nws5v" Feb 28 13:38:49 crc kubenswrapper[4897]: I0228 13:38:49.902759 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a004b575-3521-45fd-84d2-9c2c46cac69a-config-data" (OuterVolumeSpecName: "config-data") pod "a004b575-3521-45fd-84d2-9c2c46cac69a" (UID: "a004b575-3521-45fd-84d2-9c2c46cac69a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:49 crc kubenswrapper[4897]: I0228 13:38:49.903561 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a004b575-3521-45fd-84d2-9c2c46cac69a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a004b575-3521-45fd-84d2-9c2c46cac69a" (UID: "a004b575-3521-45fd-84d2-9c2c46cac69a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:49 crc kubenswrapper[4897]: I0228 13:38:49.922871 4897 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a004b575-3521-45fd-84d2-9c2c46cac69a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:49 crc kubenswrapper[4897]: I0228 13:38:49.922898 4897 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a004b575-3521-45fd-84d2-9c2c46cac69a-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:49 crc kubenswrapper[4897]: I0228 13:38:49.922907 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4f7k6\" (UniqueName: \"kubernetes.io/projected/a004b575-3521-45fd-84d2-9c2c46cac69a-kube-api-access-4f7k6\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:49 crc kubenswrapper[4897]: I0228 13:38:49.922919 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a004b575-3521-45fd-84d2-9c2c46cac69a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:49 crc kubenswrapper[4897]: I0228 13:38:49.922931 4897 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a004b575-3521-45fd-84d2-9c2c46cac69a-logs\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:49 crc kubenswrapper[4897]: I0228 13:38:49.922943 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a004b575-3521-45fd-84d2-9c2c46cac69a-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:49 crc kubenswrapper[4897]: I0228 13:38:49.922953 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a004b575-3521-45fd-84d2-9c2c46cac69a-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:49 crc kubenswrapper[4897]: I0228 13:38:49.923796 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6fb67c45d-s75qr" podUID="6102738c-6c77-48c6-87e1-67853cf8ce43" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.163:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:41800->10.217.0.163:8443: read: connection reset by peer" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.040415 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6cc5d7cb8-nws5v" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.045046 4897 generic.go:334] "Generic (PLEG): container finished" podID="a004b575-3521-45fd-84d2-9c2c46cac69a" containerID="afeeb25a0c7026d1fd79cbb33761bbfe81ade4b2eaf5756d2dcf8f7dc70051fe" exitCode=0 Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.045181 4897 generic.go:334] "Generic (PLEG): container finished" podID="a004b575-3521-45fd-84d2-9c2c46cac69a" containerID="3eae7b54b2b78ca9c900381aa9c5e174ad401ba8a9ba9a8bf6497efa5fc18009" exitCode=143 Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.045133 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.045154 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a004b575-3521-45fd-84d2-9c2c46cac69a","Type":"ContainerDied","Data":"afeeb25a0c7026d1fd79cbb33761bbfe81ade4b2eaf5756d2dcf8f7dc70051fe"} Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.046040 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a004b575-3521-45fd-84d2-9c2c46cac69a","Type":"ContainerDied","Data":"3eae7b54b2b78ca9c900381aa9c5e174ad401ba8a9ba9a8bf6497efa5fc18009"} Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.046056 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a004b575-3521-45fd-84d2-9c2c46cac69a","Type":"ContainerDied","Data":"d16823d2076a7b771a087eb82bc3bda5e2226615e2fcd1a4ace4e255e23d438b"} Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.046080 4897 scope.go:117] "RemoveContainer" containerID="afeeb25a0c7026d1fd79cbb33761bbfe81ade4b2eaf5756d2dcf8f7dc70051fe" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.047041 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59b7cd74f9-xphhh" event={"ID":"cfe88e43-2315-4773-85fa-459dab7fb23d","Type":"ContainerStarted","Data":"dfbb48b09d34ffb456517cd109d97c1dcbf9c911c2905a8db6c041b8f426e89a"} Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.049179 4897 generic.go:334] "Generic (PLEG): container finished" podID="6102738c-6c77-48c6-87e1-67853cf8ce43" containerID="2fc3fb7a660268704953fa4bd24b93db1256492df8f5818ec8132b76f2ceb191" exitCode=0 Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.050590 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6fb67c45d-s75qr" event={"ID":"6102738c-6c77-48c6-87e1-67853cf8ce43","Type":"ContainerDied","Data":"2fc3fb7a660268704953fa4bd24b93db1256492df8f5818ec8132b76f2ceb191"} Feb 
28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.115410 4897 scope.go:117] "RemoveContainer" containerID="3eae7b54b2b78ca9c900381aa9c5e174ad401ba8a9ba9a8bf6497efa5fc18009" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.126736 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-74cc48945b-m8vv6"] Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.127025 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-74cc48945b-m8vv6" podUID="cff2212b-2ce8-42ab-85b4-6d4d9789c14b" containerName="barbican-api-log" containerID="cri-o://79d0ded566fcb10c322d4dd8298598418e4b7ed3acf7ac070dbd3e2c2e3592b1" gracePeriod=30 Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.127192 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-74cc48945b-m8vv6" podUID="cff2212b-2ce8-42ab-85b4-6d4d9789c14b" containerName="barbican-api" containerID="cri-o://f28af7de6664087fc94324df7fe2eb399dbad90a716dbe5db62639a3715c0f1b" gracePeriod=30 Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.156777 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.189566 4897 scope.go:117] "RemoveContainer" containerID="afeeb25a0c7026d1fd79cbb33761bbfe81ade4b2eaf5756d2dcf8f7dc70051fe" Feb 28 13:38:50 crc kubenswrapper[4897]: E0228 13:38:50.192466 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afeeb25a0c7026d1fd79cbb33761bbfe81ade4b2eaf5756d2dcf8f7dc70051fe\": container with ID starting with afeeb25a0c7026d1fd79cbb33761bbfe81ade4b2eaf5756d2dcf8f7dc70051fe not found: ID does not exist" containerID="afeeb25a0c7026d1fd79cbb33761bbfe81ade4b2eaf5756d2dcf8f7dc70051fe" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.192521 4897 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"afeeb25a0c7026d1fd79cbb33761bbfe81ade4b2eaf5756d2dcf8f7dc70051fe"} err="failed to get container status \"afeeb25a0c7026d1fd79cbb33761bbfe81ade4b2eaf5756d2dcf8f7dc70051fe\": rpc error: code = NotFound desc = could not find container \"afeeb25a0c7026d1fd79cbb33761bbfe81ade4b2eaf5756d2dcf8f7dc70051fe\": container with ID starting with afeeb25a0c7026d1fd79cbb33761bbfe81ade4b2eaf5756d2dcf8f7dc70051fe not found: ID does not exist" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.192551 4897 scope.go:117] "RemoveContainer" containerID="3eae7b54b2b78ca9c900381aa9c5e174ad401ba8a9ba9a8bf6497efa5fc18009" Feb 28 13:38:50 crc kubenswrapper[4897]: E0228 13:38:50.195776 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3eae7b54b2b78ca9c900381aa9c5e174ad401ba8a9ba9a8bf6497efa5fc18009\": container with ID starting with 3eae7b54b2b78ca9c900381aa9c5e174ad401ba8a9ba9a8bf6497efa5fc18009 not found: ID does not exist" containerID="3eae7b54b2b78ca9c900381aa9c5e174ad401ba8a9ba9a8bf6497efa5fc18009" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.195826 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eae7b54b2b78ca9c900381aa9c5e174ad401ba8a9ba9a8bf6497efa5fc18009"} err="failed to get container status \"3eae7b54b2b78ca9c900381aa9c5e174ad401ba8a9ba9a8bf6497efa5fc18009\": rpc error: code = NotFound desc = could not find container \"3eae7b54b2b78ca9c900381aa9c5e174ad401ba8a9ba9a8bf6497efa5fc18009\": container with ID starting with 3eae7b54b2b78ca9c900381aa9c5e174ad401ba8a9ba9a8bf6497efa5fc18009 not found: ID does not exist" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.195855 4897 scope.go:117] "RemoveContainer" containerID="afeeb25a0c7026d1fd79cbb33761bbfe81ade4b2eaf5756d2dcf8f7dc70051fe" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.199496 4897 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"afeeb25a0c7026d1fd79cbb33761bbfe81ade4b2eaf5756d2dcf8f7dc70051fe"} err="failed to get container status \"afeeb25a0c7026d1fd79cbb33761bbfe81ade4b2eaf5756d2dcf8f7dc70051fe\": rpc error: code = NotFound desc = could not find container \"afeeb25a0c7026d1fd79cbb33761bbfe81ade4b2eaf5756d2dcf8f7dc70051fe\": container with ID starting with afeeb25a0c7026d1fd79cbb33761bbfe81ade4b2eaf5756d2dcf8f7dc70051fe not found: ID does not exist" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.199538 4897 scope.go:117] "RemoveContainer" containerID="3eae7b54b2b78ca9c900381aa9c5e174ad401ba8a9ba9a8bf6497efa5fc18009" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.199632 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.203079 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eae7b54b2b78ca9c900381aa9c5e174ad401ba8a9ba9a8bf6497efa5fc18009"} err="failed to get container status \"3eae7b54b2b78ca9c900381aa9c5e174ad401ba8a9ba9a8bf6497efa5fc18009\": rpc error: code = NotFound desc = could not find container \"3eae7b54b2b78ca9c900381aa9c5e174ad401ba8a9ba9a8bf6497efa5fc18009\": container with ID starting with 3eae7b54b2b78ca9c900381aa9c5e174ad401ba8a9ba9a8bf6497efa5fc18009 not found: ID does not exist" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.227905 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 28 13:38:50 crc kubenswrapper[4897]: E0228 13:38:50.228365 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a004b575-3521-45fd-84d2-9c2c46cac69a" containerName="cinder-api" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.228378 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="a004b575-3521-45fd-84d2-9c2c46cac69a" containerName="cinder-api" Feb 28 13:38:50 crc kubenswrapper[4897]: E0228 13:38:50.228387 4897 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a004b575-3521-45fd-84d2-9c2c46cac69a" containerName="cinder-api-log" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.228393 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="a004b575-3521-45fd-84d2-9c2c46cac69a" containerName="cinder-api-log" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.228587 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="a004b575-3521-45fd-84d2-9c2c46cac69a" containerName="cinder-api" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.228605 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="a004b575-3521-45fd-84d2-9c2c46cac69a" containerName="cinder-api-log" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.229203 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-f69d796b5-nrscn" podUID="5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.174:9696/\": dial tcp 10.217.0.174:9696: connect: connection refused" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.229680 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.233955 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.234032 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.234108 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.240961 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.332370 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/500bdde3-9ae3-4829-8cee-5e85a7c218a9-config-data\") pod \"cinder-api-0\" (UID: \"500bdde3-9ae3-4829-8cee-5e85a7c218a9\") " pod="openstack/cinder-api-0" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.332456 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdkqh\" (UniqueName: \"kubernetes.io/projected/500bdde3-9ae3-4829-8cee-5e85a7c218a9-kube-api-access-vdkqh\") pod \"cinder-api-0\" (UID: \"500bdde3-9ae3-4829-8cee-5e85a7c218a9\") " pod="openstack/cinder-api-0" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.332479 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/500bdde3-9ae3-4829-8cee-5e85a7c218a9-public-tls-certs\") pod \"cinder-api-0\" (UID: \"500bdde3-9ae3-4829-8cee-5e85a7c218a9\") " pod="openstack/cinder-api-0" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.332496 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/500bdde3-9ae3-4829-8cee-5e85a7c218a9-scripts\") pod \"cinder-api-0\" (UID: \"500bdde3-9ae3-4829-8cee-5e85a7c218a9\") " pod="openstack/cinder-api-0" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.332515 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/500bdde3-9ae3-4829-8cee-5e85a7c218a9-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"500bdde3-9ae3-4829-8cee-5e85a7c218a9\") " pod="openstack/cinder-api-0" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.332530 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/500bdde3-9ae3-4829-8cee-5e85a7c218a9-config-data-custom\") pod \"cinder-api-0\" (UID: \"500bdde3-9ae3-4829-8cee-5e85a7c218a9\") " pod="openstack/cinder-api-0" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.332556 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/500bdde3-9ae3-4829-8cee-5e85a7c218a9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"500bdde3-9ae3-4829-8cee-5e85a7c218a9\") " pod="openstack/cinder-api-0" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.332576 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/500bdde3-9ae3-4829-8cee-5e85a7c218a9-logs\") pod \"cinder-api-0\" (UID: \"500bdde3-9ae3-4829-8cee-5e85a7c218a9\") " pod="openstack/cinder-api-0" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.332608 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/500bdde3-9ae3-4829-8cee-5e85a7c218a9-etc-machine-id\") pod \"cinder-api-0\" 
(UID: \"500bdde3-9ae3-4829-8cee-5e85a7c218a9\") " pod="openstack/cinder-api-0" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.439374 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/500bdde3-9ae3-4829-8cee-5e85a7c218a9-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"500bdde3-9ae3-4829-8cee-5e85a7c218a9\") " pod="openstack/cinder-api-0" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.439413 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/500bdde3-9ae3-4829-8cee-5e85a7c218a9-config-data-custom\") pod \"cinder-api-0\" (UID: \"500bdde3-9ae3-4829-8cee-5e85a7c218a9\") " pod="openstack/cinder-api-0" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.439446 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/500bdde3-9ae3-4829-8cee-5e85a7c218a9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"500bdde3-9ae3-4829-8cee-5e85a7c218a9\") " pod="openstack/cinder-api-0" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.439471 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/500bdde3-9ae3-4829-8cee-5e85a7c218a9-logs\") pod \"cinder-api-0\" (UID: \"500bdde3-9ae3-4829-8cee-5e85a7c218a9\") " pod="openstack/cinder-api-0" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.439507 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/500bdde3-9ae3-4829-8cee-5e85a7c218a9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"500bdde3-9ae3-4829-8cee-5e85a7c218a9\") " pod="openstack/cinder-api-0" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.439575 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/500bdde3-9ae3-4829-8cee-5e85a7c218a9-config-data\") pod \"cinder-api-0\" (UID: \"500bdde3-9ae3-4829-8cee-5e85a7c218a9\") " pod="openstack/cinder-api-0" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.439634 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdkqh\" (UniqueName: \"kubernetes.io/projected/500bdde3-9ae3-4829-8cee-5e85a7c218a9-kube-api-access-vdkqh\") pod \"cinder-api-0\" (UID: \"500bdde3-9ae3-4829-8cee-5e85a7c218a9\") " pod="openstack/cinder-api-0" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.439656 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/500bdde3-9ae3-4829-8cee-5e85a7c218a9-public-tls-certs\") pod \"cinder-api-0\" (UID: \"500bdde3-9ae3-4829-8cee-5e85a7c218a9\") " pod="openstack/cinder-api-0" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.439671 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/500bdde3-9ae3-4829-8cee-5e85a7c218a9-scripts\") pod \"cinder-api-0\" (UID: \"500bdde3-9ae3-4829-8cee-5e85a7c218a9\") " pod="openstack/cinder-api-0" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.443883 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/500bdde3-9ae3-4829-8cee-5e85a7c218a9-logs\") pod \"cinder-api-0\" (UID: \"500bdde3-9ae3-4829-8cee-5e85a7c218a9\") " pod="openstack/cinder-api-0" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.443940 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/500bdde3-9ae3-4829-8cee-5e85a7c218a9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"500bdde3-9ae3-4829-8cee-5e85a7c218a9\") " pod="openstack/cinder-api-0" Feb 28 13:38:50 crc kubenswrapper[4897]: 
I0228 13:38:50.450688 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/500bdde3-9ae3-4829-8cee-5e85a7c218a9-public-tls-certs\") pod \"cinder-api-0\" (UID: \"500bdde3-9ae3-4829-8cee-5e85a7c218a9\") " pod="openstack/cinder-api-0" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.450903 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/500bdde3-9ae3-4829-8cee-5e85a7c218a9-config-data-custom\") pod \"cinder-api-0\" (UID: \"500bdde3-9ae3-4829-8cee-5e85a7c218a9\") " pod="openstack/cinder-api-0" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.454061 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/500bdde3-9ae3-4829-8cee-5e85a7c218a9-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"500bdde3-9ae3-4829-8cee-5e85a7c218a9\") " pod="openstack/cinder-api-0" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.454535 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/500bdde3-9ae3-4829-8cee-5e85a7c218a9-scripts\") pod \"cinder-api-0\" (UID: \"500bdde3-9ae3-4829-8cee-5e85a7c218a9\") " pod="openstack/cinder-api-0" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.455144 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/500bdde3-9ae3-4829-8cee-5e85a7c218a9-config-data\") pod \"cinder-api-0\" (UID: \"500bdde3-9ae3-4829-8cee-5e85a7c218a9\") " pod="openstack/cinder-api-0" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.455835 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/500bdde3-9ae3-4829-8cee-5e85a7c218a9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: 
\"500bdde3-9ae3-4829-8cee-5e85a7c218a9\") " pod="openstack/cinder-api-0" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.472935 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdkqh\" (UniqueName: \"kubernetes.io/projected/500bdde3-9ae3-4829-8cee-5e85a7c218a9-kube-api-access-vdkqh\") pod \"cinder-api-0\" (UID: \"500bdde3-9ae3-4829-8cee-5e85a7c218a9\") " pod="openstack/cinder-api-0" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.478616 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a004b575-3521-45fd-84d2-9c2c46cac69a" path="/var/lib/kubelet/pods/a004b575-3521-45fd-84d2-9c2c46cac69a/volumes" Feb 28 13:38:50 crc kubenswrapper[4897]: I0228 13:38:50.575831 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 28 13:38:51 crc kubenswrapper[4897]: I0228 13:38:51.061624 4897 generic.go:334] "Generic (PLEG): container finished" podID="cff2212b-2ce8-42ab-85b4-6d4d9789c14b" containerID="79d0ded566fcb10c322d4dd8298598418e4b7ed3acf7ac070dbd3e2c2e3592b1" exitCode=143 Feb 28 13:38:51 crc kubenswrapper[4897]: I0228 13:38:51.061713 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74cc48945b-m8vv6" event={"ID":"cff2212b-2ce8-42ab-85b4-6d4d9789c14b","Type":"ContainerDied","Data":"79d0ded566fcb10c322d4dd8298598418e4b7ed3acf7ac070dbd3e2c2e3592b1"} Feb 28 13:38:51 crc kubenswrapper[4897]: I0228 13:38:51.067578 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59b7cd74f9-xphhh" event={"ID":"cfe88e43-2315-4773-85fa-459dab7fb23d","Type":"ContainerStarted","Data":"24bf353a043cc7ba7646a0bd8094352710ebebf868ae110eb5abb5ccba8e3ab5"} Feb 28 13:38:51 crc kubenswrapper[4897]: I0228 13:38:51.067767 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59b7cd74f9-xphhh" 
event={"ID":"cfe88e43-2315-4773-85fa-459dab7fb23d","Type":"ContainerStarted","Data":"762fdb941fe2d660d1afe3cbd3ca894c9d2c3199733b3353a842672c985e5482"} Feb 28 13:38:51 crc kubenswrapper[4897]: I0228 13:38:51.067817 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-59b7cd74f9-xphhh" Feb 28 13:38:51 crc kubenswrapper[4897]: I0228 13:38:51.092555 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-59b7cd74f9-xphhh" podStartSLOduration=3.092535373 podStartE2EDuration="3.092535373s" podCreationTimestamp="2026-02-28 13:38:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:38:51.090232903 +0000 UTC m=+1345.332553560" watchObservedRunningTime="2026-02-28 13:38:51.092535373 +0000 UTC m=+1345.334856030" Feb 28 13:38:51 crc kubenswrapper[4897]: I0228 13:38:51.233154 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 28 13:38:51 crc kubenswrapper[4897]: I0228 13:38:51.634884 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-f69d796b5-nrscn" Feb 28 13:38:51 crc kubenswrapper[4897]: I0228 13:38:51.675085 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5-combined-ca-bundle\") pod \"5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5\" (UID: \"5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5\") " Feb 28 13:38:51 crc kubenswrapper[4897]: I0228 13:38:51.675149 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dc9gl\" (UniqueName: \"kubernetes.io/projected/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5-kube-api-access-dc9gl\") pod \"5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5\" (UID: \"5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5\") " Feb 28 13:38:51 crc kubenswrapper[4897]: I0228 13:38:51.675176 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5-internal-tls-certs\") pod \"5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5\" (UID: \"5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5\") " Feb 28 13:38:51 crc kubenswrapper[4897]: I0228 13:38:51.675260 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5-config\") pod \"5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5\" (UID: \"5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5\") " Feb 28 13:38:51 crc kubenswrapper[4897]: I0228 13:38:51.675383 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5-ovndb-tls-certs\") pod \"5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5\" (UID: \"5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5\") " Feb 28 13:38:51 crc kubenswrapper[4897]: I0228 13:38:51.675401 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"httpd-config\" (UniqueName: \"kubernetes.io/secret/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5-httpd-config\") pod \"5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5\" (UID: \"5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5\") " Feb 28 13:38:51 crc kubenswrapper[4897]: I0228 13:38:51.675485 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5-public-tls-certs\") pod \"5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5\" (UID: \"5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5\") " Feb 28 13:38:51 crc kubenswrapper[4897]: I0228 13:38:51.685570 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5-kube-api-access-dc9gl" (OuterVolumeSpecName: "kube-api-access-dc9gl") pod "5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5" (UID: "5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5"). InnerVolumeSpecName "kube-api-access-dc9gl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:38:51 crc kubenswrapper[4897]: I0228 13:38:51.685883 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5" (UID: "5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:51 crc kubenswrapper[4897]: I0228 13:38:51.734180 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5" (UID: "5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:51 crc kubenswrapper[4897]: I0228 13:38:51.760364 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5" (UID: "5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:51 crc kubenswrapper[4897]: I0228 13:38:51.770316 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5" (UID: "5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:51 crc kubenswrapper[4897]: I0228 13:38:51.775619 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5-config" (OuterVolumeSpecName: "config") pod "5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5" (UID: "5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:51 crc kubenswrapper[4897]: I0228 13:38:51.777948 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:51 crc kubenswrapper[4897]: I0228 13:38:51.777989 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dc9gl\" (UniqueName: \"kubernetes.io/projected/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5-kube-api-access-dc9gl\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:51 crc kubenswrapper[4897]: I0228 13:38:51.778005 4897 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:51 crc kubenswrapper[4897]: I0228 13:38:51.778016 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:51 crc kubenswrapper[4897]: I0228 13:38:51.778028 4897 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:51 crc kubenswrapper[4897]: I0228 13:38:51.778038 4897 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:51 crc kubenswrapper[4897]: I0228 13:38:51.787039 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5" (UID: 
"5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:51 crc kubenswrapper[4897]: I0228 13:38:51.880065 4897 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:52 crc kubenswrapper[4897]: I0228 13:38:52.083628 4897 generic.go:334] "Generic (PLEG): container finished" podID="cff2212b-2ce8-42ab-85b4-6d4d9789c14b" containerID="f28af7de6664087fc94324df7fe2eb399dbad90a716dbe5db62639a3715c0f1b" exitCode=0 Feb 28 13:38:52 crc kubenswrapper[4897]: I0228 13:38:52.083684 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74cc48945b-m8vv6" event={"ID":"cff2212b-2ce8-42ab-85b4-6d4d9789c14b","Type":"ContainerDied","Data":"f28af7de6664087fc94324df7fe2eb399dbad90a716dbe5db62639a3715c0f1b"} Feb 28 13:38:52 crc kubenswrapper[4897]: I0228 13:38:52.088202 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"500bdde3-9ae3-4829-8cee-5e85a7c218a9","Type":"ContainerStarted","Data":"60870406bd1cf8e6b5c6971cfff46031e08e489bece39c508da9bdd2a758d52e"} Feb 28 13:38:52 crc kubenswrapper[4897]: I0228 13:38:52.103688 4897 generic.go:334] "Generic (PLEG): container finished" podID="5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5" containerID="3eebac88a33785fae6a02302def0e90cc3817137ba470bb437e8d35d226997de" exitCode=0 Feb 28 13:38:52 crc kubenswrapper[4897]: I0228 13:38:52.103755 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f69d796b5-nrscn" event={"ID":"5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5","Type":"ContainerDied","Data":"3eebac88a33785fae6a02302def0e90cc3817137ba470bb437e8d35d226997de"} Feb 28 13:38:52 crc kubenswrapper[4897]: I0228 13:38:52.103775 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-f69d796b5-nrscn" Feb 28 13:38:52 crc kubenswrapper[4897]: I0228 13:38:52.103809 4897 scope.go:117] "RemoveContainer" containerID="23603a0483aa0055c1819b9fe7b8a0e06443ac977b7119323f59b08475d6dea8" Feb 28 13:38:52 crc kubenswrapper[4897]: I0228 13:38:52.103795 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f69d796b5-nrscn" event={"ID":"5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5","Type":"ContainerDied","Data":"82ec60af186739b6a58d2b22c114c80d8ee2a0bbf4d170076bb8711d3be64303"} Feb 28 13:38:52 crc kubenswrapper[4897]: I0228 13:38:52.118147 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6fb67c45d-s75qr" podUID="6102738c-6c77-48c6-87e1-67853cf8ce43" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.163:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.163:8443: connect: connection refused" Feb 28 13:38:52 crc kubenswrapper[4897]: I0228 13:38:52.274302 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-74cc48945b-m8vv6" Feb 28 13:38:52 crc kubenswrapper[4897]: I0228 13:38:52.284673 4897 scope.go:117] "RemoveContainer" containerID="3eebac88a33785fae6a02302def0e90cc3817137ba470bb437e8d35d226997de" Feb 28 13:38:52 crc kubenswrapper[4897]: I0228 13:38:52.292083 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-f69d796b5-nrscn"] Feb 28 13:38:52 crc kubenswrapper[4897]: I0228 13:38:52.305879 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-f69d796b5-nrscn"] Feb 28 13:38:52 crc kubenswrapper[4897]: I0228 13:38:52.309750 4897 scope.go:117] "RemoveContainer" containerID="23603a0483aa0055c1819b9fe7b8a0e06443ac977b7119323f59b08475d6dea8" Feb 28 13:38:52 crc kubenswrapper[4897]: E0228 13:38:52.310133 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23603a0483aa0055c1819b9fe7b8a0e06443ac977b7119323f59b08475d6dea8\": container with ID starting with 23603a0483aa0055c1819b9fe7b8a0e06443ac977b7119323f59b08475d6dea8 not found: ID does not exist" containerID="23603a0483aa0055c1819b9fe7b8a0e06443ac977b7119323f59b08475d6dea8" Feb 28 13:38:52 crc kubenswrapper[4897]: I0228 13:38:52.310166 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23603a0483aa0055c1819b9fe7b8a0e06443ac977b7119323f59b08475d6dea8"} err="failed to get container status \"23603a0483aa0055c1819b9fe7b8a0e06443ac977b7119323f59b08475d6dea8\": rpc error: code = NotFound desc = could not find container \"23603a0483aa0055c1819b9fe7b8a0e06443ac977b7119323f59b08475d6dea8\": container with ID starting with 23603a0483aa0055c1819b9fe7b8a0e06443ac977b7119323f59b08475d6dea8 not found: ID does not exist" Feb 28 13:38:52 crc kubenswrapper[4897]: I0228 13:38:52.310184 4897 scope.go:117] "RemoveContainer" containerID="3eebac88a33785fae6a02302def0e90cc3817137ba470bb437e8d35d226997de" Feb 28 13:38:52 
crc kubenswrapper[4897]: E0228 13:38:52.310565 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3eebac88a33785fae6a02302def0e90cc3817137ba470bb437e8d35d226997de\": container with ID starting with 3eebac88a33785fae6a02302def0e90cc3817137ba470bb437e8d35d226997de not found: ID does not exist" containerID="3eebac88a33785fae6a02302def0e90cc3817137ba470bb437e8d35d226997de" Feb 28 13:38:52 crc kubenswrapper[4897]: I0228 13:38:52.310584 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eebac88a33785fae6a02302def0e90cc3817137ba470bb437e8d35d226997de"} err="failed to get container status \"3eebac88a33785fae6a02302def0e90cc3817137ba470bb437e8d35d226997de\": rpc error: code = NotFound desc = could not find container \"3eebac88a33785fae6a02302def0e90cc3817137ba470bb437e8d35d226997de\": container with ID starting with 3eebac88a33785fae6a02302def0e90cc3817137ba470bb437e8d35d226997de not found: ID does not exist" Feb 28 13:38:52 crc kubenswrapper[4897]: I0228 13:38:52.388728 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cff2212b-2ce8-42ab-85b4-6d4d9789c14b-logs\") pod \"cff2212b-2ce8-42ab-85b4-6d4d9789c14b\" (UID: \"cff2212b-2ce8-42ab-85b4-6d4d9789c14b\") " Feb 28 13:38:52 crc kubenswrapper[4897]: I0228 13:38:52.388930 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2c2w\" (UniqueName: \"kubernetes.io/projected/cff2212b-2ce8-42ab-85b4-6d4d9789c14b-kube-api-access-n2c2w\") pod \"cff2212b-2ce8-42ab-85b4-6d4d9789c14b\" (UID: \"cff2212b-2ce8-42ab-85b4-6d4d9789c14b\") " Feb 28 13:38:52 crc kubenswrapper[4897]: I0228 13:38:52.388983 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cff2212b-2ce8-42ab-85b4-6d4d9789c14b-combined-ca-bundle\") pod \"cff2212b-2ce8-42ab-85b4-6d4d9789c14b\" (UID: \"cff2212b-2ce8-42ab-85b4-6d4d9789c14b\") " Feb 28 13:38:52 crc kubenswrapper[4897]: I0228 13:38:52.389009 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cff2212b-2ce8-42ab-85b4-6d4d9789c14b-config-data\") pod \"cff2212b-2ce8-42ab-85b4-6d4d9789c14b\" (UID: \"cff2212b-2ce8-42ab-85b4-6d4d9789c14b\") " Feb 28 13:38:52 crc kubenswrapper[4897]: I0228 13:38:52.389068 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cff2212b-2ce8-42ab-85b4-6d4d9789c14b-config-data-custom\") pod \"cff2212b-2ce8-42ab-85b4-6d4d9789c14b\" (UID: \"cff2212b-2ce8-42ab-85b4-6d4d9789c14b\") " Feb 28 13:38:52 crc kubenswrapper[4897]: I0228 13:38:52.390074 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cff2212b-2ce8-42ab-85b4-6d4d9789c14b-logs" (OuterVolumeSpecName: "logs") pod "cff2212b-2ce8-42ab-85b4-6d4d9789c14b" (UID: "cff2212b-2ce8-42ab-85b4-6d4d9789c14b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:38:52 crc kubenswrapper[4897]: I0228 13:38:52.393456 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cff2212b-2ce8-42ab-85b4-6d4d9789c14b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cff2212b-2ce8-42ab-85b4-6d4d9789c14b" (UID: "cff2212b-2ce8-42ab-85b4-6d4d9789c14b"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:52 crc kubenswrapper[4897]: I0228 13:38:52.393944 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cff2212b-2ce8-42ab-85b4-6d4d9789c14b-kube-api-access-n2c2w" (OuterVolumeSpecName: "kube-api-access-n2c2w") pod "cff2212b-2ce8-42ab-85b4-6d4d9789c14b" (UID: "cff2212b-2ce8-42ab-85b4-6d4d9789c14b"). InnerVolumeSpecName "kube-api-access-n2c2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:38:52 crc kubenswrapper[4897]: I0228 13:38:52.417766 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cff2212b-2ce8-42ab-85b4-6d4d9789c14b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cff2212b-2ce8-42ab-85b4-6d4d9789c14b" (UID: "cff2212b-2ce8-42ab-85b4-6d4d9789c14b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:52 crc kubenswrapper[4897]: I0228 13:38:52.453893 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cff2212b-2ce8-42ab-85b4-6d4d9789c14b-config-data" (OuterVolumeSpecName: "config-data") pod "cff2212b-2ce8-42ab-85b4-6d4d9789c14b" (UID: "cff2212b-2ce8-42ab-85b4-6d4d9789c14b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:52 crc kubenswrapper[4897]: I0228 13:38:52.456956 4897 scope.go:117] "RemoveContainer" containerID="0dc18714380303e1cafd477176a5042930839874559895094e8ed71f336ecd95" Feb 28 13:38:52 crc kubenswrapper[4897]: I0228 13:38:52.483562 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5" path="/var/lib/kubelet/pods/5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5/volumes" Feb 28 13:38:52 crc kubenswrapper[4897]: I0228 13:38:52.491060 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2c2w\" (UniqueName: \"kubernetes.io/projected/cff2212b-2ce8-42ab-85b4-6d4d9789c14b-kube-api-access-n2c2w\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:52 crc kubenswrapper[4897]: I0228 13:38:52.491086 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cff2212b-2ce8-42ab-85b4-6d4d9789c14b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:52 crc kubenswrapper[4897]: I0228 13:38:52.491096 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cff2212b-2ce8-42ab-85b4-6d4d9789c14b-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:52 crc kubenswrapper[4897]: I0228 13:38:52.491104 4897 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cff2212b-2ce8-42ab-85b4-6d4d9789c14b-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:52 crc kubenswrapper[4897]: I0228 13:38:52.491112 4897 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cff2212b-2ce8-42ab-85b4-6d4d9789c14b-logs\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:53 crc kubenswrapper[4897]: I0228 13:38:53.421484 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" 
event={"ID":"2b88f822-8f2a-473a-b388-b144a37ba4f0","Type":"ContainerStarted","Data":"02bf3de735905cac84484f7527ca357452ce3c0a3a6cd83376b0ef258ed0c530"} Feb 28 13:38:53 crc kubenswrapper[4897]: I0228 13:38:53.433212 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74cc48945b-m8vv6" event={"ID":"cff2212b-2ce8-42ab-85b4-6d4d9789c14b","Type":"ContainerDied","Data":"28d009f1c31acbfcc0baa2d910ef19bdaaee9f6c8c556c2ddd196290c50e6d17"} Feb 28 13:38:53 crc kubenswrapper[4897]: I0228 13:38:53.433271 4897 scope.go:117] "RemoveContainer" containerID="f28af7de6664087fc94324df7fe2eb399dbad90a716dbe5db62639a3715c0f1b" Feb 28 13:38:53 crc kubenswrapper[4897]: I0228 13:38:53.433466 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-74cc48945b-m8vv6" Feb 28 13:38:53 crc kubenswrapper[4897]: I0228 13:38:53.439683 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"500bdde3-9ae3-4829-8cee-5e85a7c218a9","Type":"ContainerStarted","Data":"13f0f109569636084491b440d09ad3402ede458e792ce3fe003833e9a09665a2"} Feb 28 13:38:53 crc kubenswrapper[4897]: I0228 13:38:53.439732 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 28 13:38:53 crc kubenswrapper[4897]: I0228 13:38:53.464418 4897 scope.go:117] "RemoveContainer" containerID="79d0ded566fcb10c322d4dd8298598418e4b7ed3acf7ac070dbd3e2c2e3592b1" Feb 28 13:38:53 crc kubenswrapper[4897]: I0228 13:38:53.484002 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-74cc48945b-m8vv6"] Feb 28 13:38:53 crc kubenswrapper[4897]: I0228 13:38:53.497189 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.497172943 podStartE2EDuration="3.497172943s" podCreationTimestamp="2026-02-28 13:38:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:38:53.483885242 +0000 UTC m=+1347.726205909" watchObservedRunningTime="2026-02-28 13:38:53.497172943 +0000 UTC m=+1347.739493600" Feb 28 13:38:53 crc kubenswrapper[4897]: I0228 13:38:53.499156 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-74cc48945b-m8vv6"] Feb 28 13:38:54 crc kubenswrapper[4897]: I0228 13:38:54.490026 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cff2212b-2ce8-42ab-85b4-6d4d9789c14b" path="/var/lib/kubelet/pods/cff2212b-2ce8-42ab-85b4-6d4d9789c14b/volumes" Feb 28 13:38:54 crc kubenswrapper[4897]: I0228 13:38:54.491663 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"500bdde3-9ae3-4829-8cee-5e85a7c218a9","Type":"ContainerStarted","Data":"dcbe0801d264c1a90143e96114b349d034a208c211119d2c22bfaab12665ad8b"} Feb 28 13:38:54 crc kubenswrapper[4897]: I0228 13:38:54.595026 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 28 13:38:54 crc kubenswrapper[4897]: I0228 13:38:54.666739 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 28 13:38:54 crc kubenswrapper[4897]: I0228 13:38:54.709562 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79cdbcc745-rbcfg" Feb 28 13:38:54 crc kubenswrapper[4897]: I0228 13:38:54.778158 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b6ff458d7-pwrsk"] Feb 28 13:38:54 crc kubenswrapper[4897]: I0228 13:38:54.778420 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b6ff458d7-pwrsk" podUID="5b191fe6-5a13-4d4b-a98e-77e8cc5f5229" containerName="dnsmasq-dns" containerID="cri-o://c369d48695251a12a1e4226d870836833d3a2baec53e33d84158811f1de518f0" gracePeriod=10 Feb 28 13:38:55 crc 
kubenswrapper[4897]: I0228 13:38:55.347694 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6ff458d7-pwrsk" Feb 28 13:38:55 crc kubenswrapper[4897]: I0228 13:38:55.469492 4897 generic.go:334] "Generic (PLEG): container finished" podID="5b191fe6-5a13-4d4b-a98e-77e8cc5f5229" containerID="c369d48695251a12a1e4226d870836833d3a2baec53e33d84158811f1de518f0" exitCode=0 Feb 28 13:38:55 crc kubenswrapper[4897]: I0228 13:38:55.469573 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6ff458d7-pwrsk" Feb 28 13:38:55 crc kubenswrapper[4897]: I0228 13:38:55.469576 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6ff458d7-pwrsk" event={"ID":"5b191fe6-5a13-4d4b-a98e-77e8cc5f5229","Type":"ContainerDied","Data":"c369d48695251a12a1e4226d870836833d3a2baec53e33d84158811f1de518f0"} Feb 28 13:38:55 crc kubenswrapper[4897]: I0228 13:38:55.469652 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6ff458d7-pwrsk" event={"ID":"5b191fe6-5a13-4d4b-a98e-77e8cc5f5229","Type":"ContainerDied","Data":"1d002e1ba3bec9680784a9894beba297ff838aee2b3039214c799b9cd01fb818"} Feb 28 13:38:55 crc kubenswrapper[4897]: I0228 13:38:55.469669 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d7bd80a3-8929-44e4-b11e-3daebb9c7f54" containerName="cinder-scheduler" containerID="cri-o://955e4ca8ce72b283046730e5a7fd28286923110a12bdffd80e5b5e5fdaa2f13d" gracePeriod=30 Feb 28 13:38:55 crc kubenswrapper[4897]: I0228 13:38:55.469682 4897 scope.go:117] "RemoveContainer" containerID="c369d48695251a12a1e4226d870836833d3a2baec53e33d84158811f1de518f0" Feb 28 13:38:55 crc kubenswrapper[4897]: I0228 13:38:55.469746 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d7bd80a3-8929-44e4-b11e-3daebb9c7f54" 
containerName="probe" containerID="cri-o://8eb98b688eba08622814e0628eccdda63eb0cd65fe57b425086596816f09cba6" gracePeriod=30 Feb 28 13:38:55 crc kubenswrapper[4897]: I0228 13:38:55.494304 4897 scope.go:117] "RemoveContainer" containerID="79879f7d23f947cf3f1cecd5f161ce9440e00a31303207fb4091bd4773903cbb" Feb 28 13:38:55 crc kubenswrapper[4897]: I0228 13:38:55.517996 4897 scope.go:117] "RemoveContainer" containerID="c369d48695251a12a1e4226d870836833d3a2baec53e33d84158811f1de518f0" Feb 28 13:38:55 crc kubenswrapper[4897]: E0228 13:38:55.518433 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c369d48695251a12a1e4226d870836833d3a2baec53e33d84158811f1de518f0\": container with ID starting with c369d48695251a12a1e4226d870836833d3a2baec53e33d84158811f1de518f0 not found: ID does not exist" containerID="c369d48695251a12a1e4226d870836833d3a2baec53e33d84158811f1de518f0" Feb 28 13:38:55 crc kubenswrapper[4897]: I0228 13:38:55.518468 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c369d48695251a12a1e4226d870836833d3a2baec53e33d84158811f1de518f0"} err="failed to get container status \"c369d48695251a12a1e4226d870836833d3a2baec53e33d84158811f1de518f0\": rpc error: code = NotFound desc = could not find container \"c369d48695251a12a1e4226d870836833d3a2baec53e33d84158811f1de518f0\": container with ID starting with c369d48695251a12a1e4226d870836833d3a2baec53e33d84158811f1de518f0 not found: ID does not exist" Feb 28 13:38:55 crc kubenswrapper[4897]: I0228 13:38:55.518494 4897 scope.go:117] "RemoveContainer" containerID="79879f7d23f947cf3f1cecd5f161ce9440e00a31303207fb4091bd4773903cbb" Feb 28 13:38:55 crc kubenswrapper[4897]: E0228 13:38:55.518861 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79879f7d23f947cf3f1cecd5f161ce9440e00a31303207fb4091bd4773903cbb\": container with ID 
starting with 79879f7d23f947cf3f1cecd5f161ce9440e00a31303207fb4091bd4773903cbb not found: ID does not exist" containerID="79879f7d23f947cf3f1cecd5f161ce9440e00a31303207fb4091bd4773903cbb" Feb 28 13:38:55 crc kubenswrapper[4897]: I0228 13:38:55.518893 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79879f7d23f947cf3f1cecd5f161ce9440e00a31303207fb4091bd4773903cbb"} err="failed to get container status \"79879f7d23f947cf3f1cecd5f161ce9440e00a31303207fb4091bd4773903cbb\": rpc error: code = NotFound desc = could not find container \"79879f7d23f947cf3f1cecd5f161ce9440e00a31303207fb4091bd4773903cbb\": container with ID starting with 79879f7d23f947cf3f1cecd5f161ce9440e00a31303207fb4091bd4773903cbb not found: ID does not exist" Feb 28 13:38:55 crc kubenswrapper[4897]: I0228 13:38:55.551108 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5b191fe6-5a13-4d4b-a98e-77e8cc5f5229-ovsdbserver-sb\") pod \"5b191fe6-5a13-4d4b-a98e-77e8cc5f5229\" (UID: \"5b191fe6-5a13-4d4b-a98e-77e8cc5f5229\") " Feb 28 13:38:55 crc kubenswrapper[4897]: I0228 13:38:55.551174 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk6tw\" (UniqueName: \"kubernetes.io/projected/5b191fe6-5a13-4d4b-a98e-77e8cc5f5229-kube-api-access-dk6tw\") pod \"5b191fe6-5a13-4d4b-a98e-77e8cc5f5229\" (UID: \"5b191fe6-5a13-4d4b-a98e-77e8cc5f5229\") " Feb 28 13:38:55 crc kubenswrapper[4897]: I0228 13:38:55.551210 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b191fe6-5a13-4d4b-a98e-77e8cc5f5229-dns-svc\") pod \"5b191fe6-5a13-4d4b-a98e-77e8cc5f5229\" (UID: \"5b191fe6-5a13-4d4b-a98e-77e8cc5f5229\") " Feb 28 13:38:55 crc kubenswrapper[4897]: I0228 13:38:55.551232 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b191fe6-5a13-4d4b-a98e-77e8cc5f5229-ovsdbserver-nb\") pod \"5b191fe6-5a13-4d4b-a98e-77e8cc5f5229\" (UID: \"5b191fe6-5a13-4d4b-a98e-77e8cc5f5229\") " Feb 28 13:38:55 crc kubenswrapper[4897]: I0228 13:38:55.552092 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b191fe6-5a13-4d4b-a98e-77e8cc5f5229-config\") pod \"5b191fe6-5a13-4d4b-a98e-77e8cc5f5229\" (UID: \"5b191fe6-5a13-4d4b-a98e-77e8cc5f5229\") " Feb 28 13:38:55 crc kubenswrapper[4897]: I0228 13:38:55.552167 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5b191fe6-5a13-4d4b-a98e-77e8cc5f5229-dns-swift-storage-0\") pod \"5b191fe6-5a13-4d4b-a98e-77e8cc5f5229\" (UID: \"5b191fe6-5a13-4d4b-a98e-77e8cc5f5229\") " Feb 28 13:38:55 crc kubenswrapper[4897]: I0228 13:38:55.557359 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b191fe6-5a13-4d4b-a98e-77e8cc5f5229-kube-api-access-dk6tw" (OuterVolumeSpecName: "kube-api-access-dk6tw") pod "5b191fe6-5a13-4d4b-a98e-77e8cc5f5229" (UID: "5b191fe6-5a13-4d4b-a98e-77e8cc5f5229"). InnerVolumeSpecName "kube-api-access-dk6tw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:38:55 crc kubenswrapper[4897]: I0228 13:38:55.626047 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b191fe6-5a13-4d4b-a98e-77e8cc5f5229-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5b191fe6-5a13-4d4b-a98e-77e8cc5f5229" (UID: "5b191fe6-5a13-4d4b-a98e-77e8cc5f5229"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:38:55 crc kubenswrapper[4897]: I0228 13:38:55.626533 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b191fe6-5a13-4d4b-a98e-77e8cc5f5229-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5b191fe6-5a13-4d4b-a98e-77e8cc5f5229" (UID: "5b191fe6-5a13-4d4b-a98e-77e8cc5f5229"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:38:55 crc kubenswrapper[4897]: I0228 13:38:55.626507 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b191fe6-5a13-4d4b-a98e-77e8cc5f5229-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5b191fe6-5a13-4d4b-a98e-77e8cc5f5229" (UID: "5b191fe6-5a13-4d4b-a98e-77e8cc5f5229"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:38:55 crc kubenswrapper[4897]: I0228 13:38:55.629759 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b191fe6-5a13-4d4b-a98e-77e8cc5f5229-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5b191fe6-5a13-4d4b-a98e-77e8cc5f5229" (UID: "5b191fe6-5a13-4d4b-a98e-77e8cc5f5229"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:38:55 crc kubenswrapper[4897]: I0228 13:38:55.630108 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b191fe6-5a13-4d4b-a98e-77e8cc5f5229-config" (OuterVolumeSpecName: "config") pod "5b191fe6-5a13-4d4b-a98e-77e8cc5f5229" (UID: "5b191fe6-5a13-4d4b-a98e-77e8cc5f5229"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:38:55 crc kubenswrapper[4897]: I0228 13:38:55.653499 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b191fe6-5a13-4d4b-a98e-77e8cc5f5229-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:55 crc kubenswrapper[4897]: I0228 13:38:55.653533 4897 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5b191fe6-5a13-4d4b-a98e-77e8cc5f5229-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:55 crc kubenswrapper[4897]: I0228 13:38:55.653543 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5b191fe6-5a13-4d4b-a98e-77e8cc5f5229-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:55 crc kubenswrapper[4897]: I0228 13:38:55.653551 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dk6tw\" (UniqueName: \"kubernetes.io/projected/5b191fe6-5a13-4d4b-a98e-77e8cc5f5229-kube-api-access-dk6tw\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:55 crc kubenswrapper[4897]: I0228 13:38:55.653561 4897 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b191fe6-5a13-4d4b-a98e-77e8cc5f5229-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:55 crc kubenswrapper[4897]: I0228 13:38:55.653570 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b191fe6-5a13-4d4b-a98e-77e8cc5f5229-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:55 crc kubenswrapper[4897]: I0228 13:38:55.857955 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b6ff458d7-pwrsk"] Feb 28 13:38:55 crc kubenswrapper[4897]: I0228 13:38:55.867708 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b6ff458d7-pwrsk"] Feb 28 
13:38:56 crc kubenswrapper[4897]: I0228 13:38:56.478190 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b191fe6-5a13-4d4b-a98e-77e8cc5f5229" path="/var/lib/kubelet/pods/5b191fe6-5a13-4d4b-a98e-77e8cc5f5229/volumes" Feb 28 13:38:56 crc kubenswrapper[4897]: I0228 13:38:56.507576 4897 generic.go:334] "Generic (PLEG): container finished" podID="2b88f822-8f2a-473a-b388-b144a37ba4f0" containerID="02bf3de735905cac84484f7527ca357452ce3c0a3a6cd83376b0ef258ed0c530" exitCode=1 Feb 28 13:38:56 crc kubenswrapper[4897]: I0228 13:38:56.507741 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"2b88f822-8f2a-473a-b388-b144a37ba4f0","Type":"ContainerDied","Data":"02bf3de735905cac84484f7527ca357452ce3c0a3a6cd83376b0ef258ed0c530"} Feb 28 13:38:56 crc kubenswrapper[4897]: I0228 13:38:56.507781 4897 scope.go:117] "RemoveContainer" containerID="0dc18714380303e1cafd477176a5042930839874559895094e8ed71f336ecd95" Feb 28 13:38:56 crc kubenswrapper[4897]: I0228 13:38:56.509655 4897 scope.go:117] "RemoveContainer" containerID="02bf3de735905cac84484f7527ca357452ce3c0a3a6cd83376b0ef258ed0c530" Feb 28 13:38:56 crc kubenswrapper[4897]: E0228 13:38:56.510130 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(2b88f822-8f2a-473a-b388-b144a37ba4f0)\"" pod="openstack/watcher-decision-engine-0" podUID="2b88f822-8f2a-473a-b388-b144a37ba4f0" Feb 28 13:38:56 crc kubenswrapper[4897]: I0228 13:38:56.518076 4897 generic.go:334] "Generic (PLEG): container finished" podID="d7bd80a3-8929-44e4-b11e-3daebb9c7f54" containerID="8eb98b688eba08622814e0628eccdda63eb0cd65fe57b425086596816f09cba6" exitCode=0 Feb 28 13:38:56 crc kubenswrapper[4897]: I0228 13:38:56.518177 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-scheduler-0" event={"ID":"d7bd80a3-8929-44e4-b11e-3daebb9c7f54","Type":"ContainerDied","Data":"8eb98b688eba08622814e0628eccdda63eb0cd65fe57b425086596816f09cba6"} Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.427091 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.535679 4897 generic.go:334] "Generic (PLEG): container finished" podID="d7bd80a3-8929-44e4-b11e-3daebb9c7f54" containerID="955e4ca8ce72b283046730e5a7fd28286923110a12bdffd80e5b5e5fdaa2f13d" exitCode=0 Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.535718 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d7bd80a3-8929-44e4-b11e-3daebb9c7f54","Type":"ContainerDied","Data":"955e4ca8ce72b283046730e5a7fd28286923110a12bdffd80e5b5e5fdaa2f13d"} Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.535743 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d7bd80a3-8929-44e4-b11e-3daebb9c7f54","Type":"ContainerDied","Data":"e3028451b6910af1e00f4af7c9921d634304c1cf9ab9e150ad7c051bb6ddf046"} Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.535760 4897 scope.go:117] "RemoveContainer" containerID="8eb98b688eba08622814e0628eccdda63eb0cd65fe57b425086596816f09cba6" Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.535877 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.562176 4897 scope.go:117] "RemoveContainer" containerID="955e4ca8ce72b283046730e5a7fd28286923110a12bdffd80e5b5e5fdaa2f13d" Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.587166 4897 scope.go:117] "RemoveContainer" containerID="8eb98b688eba08622814e0628eccdda63eb0cd65fe57b425086596816f09cba6" Feb 28 13:38:57 crc kubenswrapper[4897]: E0228 13:38:57.587781 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8eb98b688eba08622814e0628eccdda63eb0cd65fe57b425086596816f09cba6\": container with ID starting with 8eb98b688eba08622814e0628eccdda63eb0cd65fe57b425086596816f09cba6 not found: ID does not exist" containerID="8eb98b688eba08622814e0628eccdda63eb0cd65fe57b425086596816f09cba6" Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.587825 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8eb98b688eba08622814e0628eccdda63eb0cd65fe57b425086596816f09cba6"} err="failed to get container status \"8eb98b688eba08622814e0628eccdda63eb0cd65fe57b425086596816f09cba6\": rpc error: code = NotFound desc = could not find container \"8eb98b688eba08622814e0628eccdda63eb0cd65fe57b425086596816f09cba6\": container with ID starting with 8eb98b688eba08622814e0628eccdda63eb0cd65fe57b425086596816f09cba6 not found: ID does not exist" Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.587868 4897 scope.go:117] "RemoveContainer" containerID="955e4ca8ce72b283046730e5a7fd28286923110a12bdffd80e5b5e5fdaa2f13d" Feb 28 13:38:57 crc kubenswrapper[4897]: E0228 13:38:57.588276 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"955e4ca8ce72b283046730e5a7fd28286923110a12bdffd80e5b5e5fdaa2f13d\": container with ID starting with 
955e4ca8ce72b283046730e5a7fd28286923110a12bdffd80e5b5e5fdaa2f13d not found: ID does not exist" containerID="955e4ca8ce72b283046730e5a7fd28286923110a12bdffd80e5b5e5fdaa2f13d" Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.588331 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"955e4ca8ce72b283046730e5a7fd28286923110a12bdffd80e5b5e5fdaa2f13d"} err="failed to get container status \"955e4ca8ce72b283046730e5a7fd28286923110a12bdffd80e5b5e5fdaa2f13d\": rpc error: code = NotFound desc = could not find container \"955e4ca8ce72b283046730e5a7fd28286923110a12bdffd80e5b5e5fdaa2f13d\": container with ID starting with 955e4ca8ce72b283046730e5a7fd28286923110a12bdffd80e5b5e5fdaa2f13d not found: ID does not exist" Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.592264 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r886r\" (UniqueName: \"kubernetes.io/projected/d7bd80a3-8929-44e4-b11e-3daebb9c7f54-kube-api-access-r886r\") pod \"d7bd80a3-8929-44e4-b11e-3daebb9c7f54\" (UID: \"d7bd80a3-8929-44e4-b11e-3daebb9c7f54\") " Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.592329 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d7bd80a3-8929-44e4-b11e-3daebb9c7f54-etc-machine-id\") pod \"d7bd80a3-8929-44e4-b11e-3daebb9c7f54\" (UID: \"d7bd80a3-8929-44e4-b11e-3daebb9c7f54\") " Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.592375 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7bd80a3-8929-44e4-b11e-3daebb9c7f54-combined-ca-bundle\") pod \"d7bd80a3-8929-44e4-b11e-3daebb9c7f54\" (UID: \"d7bd80a3-8929-44e4-b11e-3daebb9c7f54\") " Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.592641 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/d7bd80a3-8929-44e4-b11e-3daebb9c7f54-scripts\") pod \"d7bd80a3-8929-44e4-b11e-3daebb9c7f54\" (UID: \"d7bd80a3-8929-44e4-b11e-3daebb9c7f54\") " Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.592672 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d7bd80a3-8929-44e4-b11e-3daebb9c7f54-config-data-custom\") pod \"d7bd80a3-8929-44e4-b11e-3daebb9c7f54\" (UID: \"d7bd80a3-8929-44e4-b11e-3daebb9c7f54\") " Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.592710 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7bd80a3-8929-44e4-b11e-3daebb9c7f54-config-data\") pod \"d7bd80a3-8929-44e4-b11e-3daebb9c7f54\" (UID: \"d7bd80a3-8929-44e4-b11e-3daebb9c7f54\") " Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.594542 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7bd80a3-8929-44e4-b11e-3daebb9c7f54-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d7bd80a3-8929-44e4-b11e-3daebb9c7f54" (UID: "d7bd80a3-8929-44e4-b11e-3daebb9c7f54"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.604466 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7bd80a3-8929-44e4-b11e-3daebb9c7f54-kube-api-access-r886r" (OuterVolumeSpecName: "kube-api-access-r886r") pod "d7bd80a3-8929-44e4-b11e-3daebb9c7f54" (UID: "d7bd80a3-8929-44e4-b11e-3daebb9c7f54"). InnerVolumeSpecName "kube-api-access-r886r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.604531 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7bd80a3-8929-44e4-b11e-3daebb9c7f54-scripts" (OuterVolumeSpecName: "scripts") pod "d7bd80a3-8929-44e4-b11e-3daebb9c7f54" (UID: "d7bd80a3-8929-44e4-b11e-3daebb9c7f54"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.605715 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7bd80a3-8929-44e4-b11e-3daebb9c7f54-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d7bd80a3-8929-44e4-b11e-3daebb9c7f54" (UID: "d7bd80a3-8929-44e4-b11e-3daebb9c7f54"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.659504 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7bd80a3-8929-44e4-b11e-3daebb9c7f54-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7bd80a3-8929-44e4-b11e-3daebb9c7f54" (UID: "d7bd80a3-8929-44e4-b11e-3daebb9c7f54"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.695425 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7bd80a3-8929-44e4-b11e-3daebb9c7f54-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.695461 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7bd80a3-8929-44e4-b11e-3daebb9c7f54-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.695470 4897 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d7bd80a3-8929-44e4-b11e-3daebb9c7f54-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.695479 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r886r\" (UniqueName: \"kubernetes.io/projected/d7bd80a3-8929-44e4-b11e-3daebb9c7f54-kube-api-access-r886r\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.695489 4897 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d7bd80a3-8929-44e4-b11e-3daebb9c7f54-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.717896 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7bd80a3-8929-44e4-b11e-3daebb9c7f54-config-data" (OuterVolumeSpecName: "config-data") pod "d7bd80a3-8929-44e4-b11e-3daebb9c7f54" (UID: "d7bd80a3-8929-44e4-b11e-3daebb9c7f54"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.797603 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7bd80a3-8929-44e4-b11e-3daebb9c7f54-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.870387 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.885986 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.901425 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 28 13:38:57 crc kubenswrapper[4897]: E0228 13:38:57.901784 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cff2212b-2ce8-42ab-85b4-6d4d9789c14b" containerName="barbican-api" Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.901799 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="cff2212b-2ce8-42ab-85b4-6d4d9789c14b" containerName="barbican-api" Feb 28 13:38:57 crc kubenswrapper[4897]: E0228 13:38:57.901815 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b191fe6-5a13-4d4b-a98e-77e8cc5f5229" containerName="dnsmasq-dns" Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.901822 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b191fe6-5a13-4d4b-a98e-77e8cc5f5229" containerName="dnsmasq-dns" Feb 28 13:38:57 crc kubenswrapper[4897]: E0228 13:38:57.901835 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7bd80a3-8929-44e4-b11e-3daebb9c7f54" containerName="probe" Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.901841 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7bd80a3-8929-44e4-b11e-3daebb9c7f54" containerName="probe" Feb 28 13:38:57 crc kubenswrapper[4897]: E0228 13:38:57.901858 4897 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7bd80a3-8929-44e4-b11e-3daebb9c7f54" containerName="cinder-scheduler" Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.901864 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7bd80a3-8929-44e4-b11e-3daebb9c7f54" containerName="cinder-scheduler" Feb 28 13:38:57 crc kubenswrapper[4897]: E0228 13:38:57.901879 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5" containerName="neutron-httpd" Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.901886 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5" containerName="neutron-httpd" Feb 28 13:38:57 crc kubenswrapper[4897]: E0228 13:38:57.901897 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b191fe6-5a13-4d4b-a98e-77e8cc5f5229" containerName="init" Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.901902 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b191fe6-5a13-4d4b-a98e-77e8cc5f5229" containerName="init" Feb 28 13:38:57 crc kubenswrapper[4897]: E0228 13:38:57.901910 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5" containerName="neutron-api" Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.901924 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5" containerName="neutron-api" Feb 28 13:38:57 crc kubenswrapper[4897]: E0228 13:38:57.901933 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cff2212b-2ce8-42ab-85b4-6d4d9789c14b" containerName="barbican-api-log" Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.901938 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="cff2212b-2ce8-42ab-85b4-6d4d9789c14b" containerName="barbican-api-log" Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.902113 4897 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5" containerName="neutron-api" Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.902134 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="cff2212b-2ce8-42ab-85b4-6d4d9789c14b" containerName="barbican-api-log" Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.902149 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7bd80a3-8929-44e4-b11e-3daebb9c7f54" containerName="cinder-scheduler" Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.902156 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cd4f6a6-5d8b-4f37-9740-bf96116a7bc5" containerName="neutron-httpd" Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.902168 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="cff2212b-2ce8-42ab-85b4-6d4d9789c14b" containerName="barbican-api" Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.902177 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7bd80a3-8929-44e4-b11e-3daebb9c7f54" containerName="probe" Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.902189 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b191fe6-5a13-4d4b-a98e-77e8cc5f5229" containerName="dnsmasq-dns" Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.903211 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.913796 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 28 13:38:57 crc kubenswrapper[4897]: I0228 13:38:57.922061 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 28 13:38:58 crc kubenswrapper[4897]: I0228 13:38:58.001497 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0bef6c5-aed5-464c-8518-9be02ba3cb86-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b0bef6c5-aed5-464c-8518-9be02ba3cb86\") " pod="openstack/cinder-scheduler-0" Feb 28 13:38:58 crc kubenswrapper[4897]: I0228 13:38:58.001560 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0bef6c5-aed5-464c-8518-9be02ba3cb86-config-data\") pod \"cinder-scheduler-0\" (UID: \"b0bef6c5-aed5-464c-8518-9be02ba3cb86\") " pod="openstack/cinder-scheduler-0" Feb 28 13:38:58 crc kubenswrapper[4897]: I0228 13:38:58.001594 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0bef6c5-aed5-464c-8518-9be02ba3cb86-scripts\") pod \"cinder-scheduler-0\" (UID: \"b0bef6c5-aed5-464c-8518-9be02ba3cb86\") " pod="openstack/cinder-scheduler-0" Feb 28 13:38:58 crc kubenswrapper[4897]: I0228 13:38:58.001667 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0bef6c5-aed5-464c-8518-9be02ba3cb86-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b0bef6c5-aed5-464c-8518-9be02ba3cb86\") " pod="openstack/cinder-scheduler-0" Feb 28 13:38:58 crc kubenswrapper[4897]: I0228 13:38:58.001782 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b0bef6c5-aed5-464c-8518-9be02ba3cb86-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b0bef6c5-aed5-464c-8518-9be02ba3cb86\") " pod="openstack/cinder-scheduler-0" Feb 28 13:38:58 crc kubenswrapper[4897]: I0228 13:38:58.001841 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xjvb\" (UniqueName: \"kubernetes.io/projected/b0bef6c5-aed5-464c-8518-9be02ba3cb86-kube-api-access-9xjvb\") pod \"cinder-scheduler-0\" (UID: \"b0bef6c5-aed5-464c-8518-9be02ba3cb86\") " pod="openstack/cinder-scheduler-0" Feb 28 13:38:58 crc kubenswrapper[4897]: I0228 13:38:58.103977 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b0bef6c5-aed5-464c-8518-9be02ba3cb86-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b0bef6c5-aed5-464c-8518-9be02ba3cb86\") " pod="openstack/cinder-scheduler-0" Feb 28 13:38:58 crc kubenswrapper[4897]: I0228 13:38:58.104088 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xjvb\" (UniqueName: \"kubernetes.io/projected/b0bef6c5-aed5-464c-8518-9be02ba3cb86-kube-api-access-9xjvb\") pod \"cinder-scheduler-0\" (UID: \"b0bef6c5-aed5-464c-8518-9be02ba3cb86\") " pod="openstack/cinder-scheduler-0" Feb 28 13:38:58 crc kubenswrapper[4897]: I0228 13:38:58.104154 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0bef6c5-aed5-464c-8518-9be02ba3cb86-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b0bef6c5-aed5-464c-8518-9be02ba3cb86\") " pod="openstack/cinder-scheduler-0" Feb 28 13:38:58 crc kubenswrapper[4897]: I0228 13:38:58.104162 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b0bef6c5-aed5-464c-8518-9be02ba3cb86-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b0bef6c5-aed5-464c-8518-9be02ba3cb86\") " pod="openstack/cinder-scheduler-0" Feb 28 13:38:58 crc kubenswrapper[4897]: I0228 13:38:58.104194 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0bef6c5-aed5-464c-8518-9be02ba3cb86-config-data\") pod \"cinder-scheduler-0\" (UID: \"b0bef6c5-aed5-464c-8518-9be02ba3cb86\") " pod="openstack/cinder-scheduler-0" Feb 28 13:38:58 crc kubenswrapper[4897]: I0228 13:38:58.104261 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0bef6c5-aed5-464c-8518-9be02ba3cb86-scripts\") pod \"cinder-scheduler-0\" (UID: \"b0bef6c5-aed5-464c-8518-9be02ba3cb86\") " pod="openstack/cinder-scheduler-0" Feb 28 13:38:58 crc kubenswrapper[4897]: I0228 13:38:58.104379 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0bef6c5-aed5-464c-8518-9be02ba3cb86-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b0bef6c5-aed5-464c-8518-9be02ba3cb86\") " pod="openstack/cinder-scheduler-0" Feb 28 13:38:58 crc kubenswrapper[4897]: I0228 13:38:58.110621 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0bef6c5-aed5-464c-8518-9be02ba3cb86-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b0bef6c5-aed5-464c-8518-9be02ba3cb86\") " pod="openstack/cinder-scheduler-0" Feb 28 13:38:58 crc kubenswrapper[4897]: I0228 13:38:58.111360 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0bef6c5-aed5-464c-8518-9be02ba3cb86-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: 
\"b0bef6c5-aed5-464c-8518-9be02ba3cb86\") " pod="openstack/cinder-scheduler-0" Feb 28 13:38:58 crc kubenswrapper[4897]: I0228 13:38:58.112131 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0bef6c5-aed5-464c-8518-9be02ba3cb86-scripts\") pod \"cinder-scheduler-0\" (UID: \"b0bef6c5-aed5-464c-8518-9be02ba3cb86\") " pod="openstack/cinder-scheduler-0" Feb 28 13:38:58 crc kubenswrapper[4897]: I0228 13:38:58.114052 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0bef6c5-aed5-464c-8518-9be02ba3cb86-config-data\") pod \"cinder-scheduler-0\" (UID: \"b0bef6c5-aed5-464c-8518-9be02ba3cb86\") " pod="openstack/cinder-scheduler-0" Feb 28 13:38:58 crc kubenswrapper[4897]: I0228 13:38:58.126910 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xjvb\" (UniqueName: \"kubernetes.io/projected/b0bef6c5-aed5-464c-8518-9be02ba3cb86-kube-api-access-9xjvb\") pod \"cinder-scheduler-0\" (UID: \"b0bef6c5-aed5-464c-8518-9be02ba3cb86\") " pod="openstack/cinder-scheduler-0" Feb 28 13:38:58 crc kubenswrapper[4897]: I0228 13:38:58.266793 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 28 13:38:58 crc kubenswrapper[4897]: I0228 13:38:58.469681 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7bd80a3-8929-44e4-b11e-3daebb9c7f54" path="/var/lib/kubelet/pods/d7bd80a3-8929-44e4-b11e-3daebb9c7f54/volumes" Feb 28 13:38:58 crc kubenswrapper[4897]: E0228 13:38:58.593739 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/thanos-rhel9@sha256=b77218de6528d52542abddf9fb3faececdf1a3e47987ac740ab00468d461a60b/signature-4: status 500 (Internal Server Error)" image="registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34" Feb 28 13:38:58 crc kubenswrapper[4897]: E0228 13:38:58.593955 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:thanos-sidecar,Image:registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34,Command:[],Args:[sidecar --prometheus.url=http://localhost:9090/ --grpc-address=:10901 --http-address=:10902 --log.level=info 
--prometheus.http-client-file=/etc/thanos/config/prometheus.http-client-file.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:10902,Protocol:TCP,HostIP:,},ContainerPort{Name:grpc,HostPort:0,ContainerPort:10901,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:thanos-prometheus-http-client-file,ReadOnly:false,MountPath:/etc/thanos/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fr856,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/thanos-rhel9@sha256=b77218de6528d52542abddf9fb3faececdf1a3e47987ac740ab00468d461a60b/signature-4: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 13:38:58 crc kubenswrapper[4897]: E0228 13:38:58.595210 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"prometheus\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741\\\"\", failed to \"StartContainer\" for \"thanos-sidecar\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/thanos-rhel9@sha256=b77218de6528d52542abddf9fb3faececdf1a3e47987ac740ab00468d461a60b/signature-4: status 500 (Internal Server Error)\"]" pod="openstack/prometheus-metric-storage-0" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" Feb 28 13:38:58 crc kubenswrapper[4897]: I0228 13:38:58.752422 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 28 13:38:58 crc kubenswrapper[4897]: W0228 13:38:58.756682 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0bef6c5_aed5_464c_8518_9be02ba3cb86.slice/crio-70e3a986fce39cef7b14643c560ee2c8d8170895048121e5093c71ddf63a73c9 WatchSource:0}: Error finding container 70e3a986fce39cef7b14643c560ee2c8d8170895048121e5093c71ddf63a73c9: Status 404 returned error can't find the container with id 70e3a986fce39cef7b14643c560ee2c8d8170895048121e5093c71ddf63a73c9 Feb 28 13:38:59 crc kubenswrapper[4897]: I0228 13:38:59.131913 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-d5c8f94c5-9sc2w" Feb 28 13:38:59 crc kubenswrapper[4897]: E0228 13:38:59.458783 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:38:59 crc kubenswrapper[4897]: I0228 
13:38:59.507888 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 28 13:38:59 crc kubenswrapper[4897]: I0228 13:38:59.508241 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 28 13:38:59 crc kubenswrapper[4897]: I0228 13:38:59.508264 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 28 13:38:59 crc kubenswrapper[4897]: I0228 13:38:59.508283 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 28 13:38:59 crc kubenswrapper[4897]: I0228 13:38:59.509332 4897 scope.go:117] "RemoveContainer" containerID="02bf3de735905cac84484f7527ca357452ce3c0a3a6cd83376b0ef258ed0c530" Feb 28 13:38:59 crc kubenswrapper[4897]: E0228 13:38:59.509723 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(2b88f822-8f2a-473a-b388-b144a37ba4f0)\"" pod="openstack/watcher-decision-engine-0" podUID="2b88f822-8f2a-473a-b388-b144a37ba4f0" Feb 28 13:38:59 crc kubenswrapper[4897]: I0228 13:38:59.564780 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b0bef6c5-aed5-464c-8518-9be02ba3cb86","Type":"ContainerStarted","Data":"b892275278f7a88ed8055a11113210e2f911be8e756cc5863450460ef3e8a43e"} Feb 28 13:38:59 crc kubenswrapper[4897]: I0228 13:38:59.564821 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b0bef6c5-aed5-464c-8518-9be02ba3cb86","Type":"ContainerStarted","Data":"70e3a986fce39cef7b14643c560ee2c8d8170895048121e5093c71ddf63a73c9"} Feb 28 13:39:00 crc kubenswrapper[4897]: I0228 13:39:00.578200 4897 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b0bef6c5-aed5-464c-8518-9be02ba3cb86","Type":"ContainerStarted","Data":"f049d0e74e286477148f61bad6f749a1f6c578f7d1641ee3918770c6319f1a35"} Feb 28 13:39:00 crc kubenswrapper[4897]: I0228 13:39:00.605379 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.605361337 podStartE2EDuration="3.605361337s" podCreationTimestamp="2026-02-28 13:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:39:00.599104656 +0000 UTC m=+1354.841425333" watchObservedRunningTime="2026-02-28 13:39:00.605361337 +0000 UTC m=+1354.847681984" Feb 28 13:39:02 crc kubenswrapper[4897]: I0228 13:39:02.113094 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 28 13:39:02 crc kubenswrapper[4897]: I0228 13:39:02.114857 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 28 13:39:02 crc kubenswrapper[4897]: I0228 13:39:02.116491 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 28 13:39:02 crc kubenswrapper[4897]: I0228 13:39:02.116910 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6fb67c45d-s75qr" podUID="6102738c-6c77-48c6-87e1-67853cf8ce43" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.163:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.163:8443: connect: connection refused" Feb 28 13:39:02 crc kubenswrapper[4897]: I0228 13:39:02.117734 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-z66zj" Feb 28 13:39:02 crc kubenswrapper[4897]: I0228 13:39:02.119191 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 28 13:39:02 crc kubenswrapper[4897]: I0228 13:39:02.122595 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 28 13:39:02 crc kubenswrapper[4897]: I0228 13:39:02.179321 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zd84\" (UniqueName: \"kubernetes.io/projected/2910518a-9b98-499b-a132-954899d270c0-kube-api-access-7zd84\") pod \"openstackclient\" (UID: \"2910518a-9b98-499b-a132-954899d270c0\") " pod="openstack/openstackclient" Feb 28 13:39:02 crc kubenswrapper[4897]: I0228 13:39:02.179385 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2910518a-9b98-499b-a132-954899d270c0-openstack-config-secret\") pod \"openstackclient\" (UID: \"2910518a-9b98-499b-a132-954899d270c0\") " pod="openstack/openstackclient" Feb 28 13:39:02 crc kubenswrapper[4897]: I0228 13:39:02.179461 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2910518a-9b98-499b-a132-954899d270c0-openstack-config\") pod \"openstackclient\" (UID: \"2910518a-9b98-499b-a132-954899d270c0\") " pod="openstack/openstackclient" Feb 28 13:39:02 crc kubenswrapper[4897]: I0228 13:39:02.179482 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2910518a-9b98-499b-a132-954899d270c0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2910518a-9b98-499b-a132-954899d270c0\") " pod="openstack/openstackclient" Feb 28 13:39:02 crc kubenswrapper[4897]: I0228 13:39:02.281689 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zd84\" (UniqueName: \"kubernetes.io/projected/2910518a-9b98-499b-a132-954899d270c0-kube-api-access-7zd84\") pod \"openstackclient\" (UID: \"2910518a-9b98-499b-a132-954899d270c0\") " pod="openstack/openstackclient" Feb 28 13:39:02 crc kubenswrapper[4897]: I0228 13:39:02.281764 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2910518a-9b98-499b-a132-954899d270c0-openstack-config-secret\") pod \"openstackclient\" (UID: \"2910518a-9b98-499b-a132-954899d270c0\") " pod="openstack/openstackclient" Feb 28 13:39:02 crc kubenswrapper[4897]: I0228 13:39:02.281823 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2910518a-9b98-499b-a132-954899d270c0-openstack-config\") pod \"openstackclient\" (UID: \"2910518a-9b98-499b-a132-954899d270c0\") " pod="openstack/openstackclient" Feb 28 13:39:02 crc kubenswrapper[4897]: I0228 13:39:02.281862 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/2910518a-9b98-499b-a132-954899d270c0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2910518a-9b98-499b-a132-954899d270c0\") " pod="openstack/openstackclient" Feb 28 13:39:02 crc kubenswrapper[4897]: I0228 13:39:02.283442 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2910518a-9b98-499b-a132-954899d270c0-openstack-config\") pod \"openstackclient\" (UID: \"2910518a-9b98-499b-a132-954899d270c0\") " pod="openstack/openstackclient" Feb 28 13:39:02 crc kubenswrapper[4897]: I0228 13:39:02.293424 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2910518a-9b98-499b-a132-954899d270c0-openstack-config-secret\") pod \"openstackclient\" (UID: \"2910518a-9b98-499b-a132-954899d270c0\") " pod="openstack/openstackclient" Feb 28 13:39:02 crc kubenswrapper[4897]: I0228 13:39:02.300528 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2910518a-9b98-499b-a132-954899d270c0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2910518a-9b98-499b-a132-954899d270c0\") " pod="openstack/openstackclient" Feb 28 13:39:02 crc kubenswrapper[4897]: I0228 13:39:02.306811 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zd84\" (UniqueName: \"kubernetes.io/projected/2910518a-9b98-499b-a132-954899d270c0-kube-api-access-7zd84\") pod \"openstackclient\" (UID: \"2910518a-9b98-499b-a132-954899d270c0\") " pod="openstack/openstackclient" Feb 28 13:39:02 crc kubenswrapper[4897]: I0228 13:39:02.420589 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-778b749bdb-bmqwf" Feb 28 13:39:02 crc kubenswrapper[4897]: I0228 13:39:02.422339 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/placement-778b749bdb-bmqwf" Feb 28 13:39:02 crc kubenswrapper[4897]: I0228 13:39:02.431618 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 28 13:39:03 crc kubenswrapper[4897]: I0228 13:39:03.061648 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 28 13:39:03 crc kubenswrapper[4897]: W0228 13:39:03.065598 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2910518a_9b98_499b_a132_954899d270c0.slice/crio-95610761ed527b0bb8dd8d13c51fd8ebc68fcf3c119ba29c7334e54bff16db20 WatchSource:0}: Error finding container 95610761ed527b0bb8dd8d13c51fd8ebc68fcf3c119ba29c7334e54bff16db20: Status 404 returned error can't find the container with id 95610761ed527b0bb8dd8d13c51fd8ebc68fcf3c119ba29c7334e54bff16db20 Feb 28 13:39:03 crc kubenswrapper[4897]: I0228 13:39:03.066916 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 28 13:39:03 crc kubenswrapper[4897]: I0228 13:39:03.267479 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 28 13:39:03 crc kubenswrapper[4897]: I0228 13:39:03.370704 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 13:39:03 crc kubenswrapper[4897]: I0228 13:39:03.370765 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 
13:39:03 crc kubenswrapper[4897]: I0228 13:39:03.613679 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"2910518a-9b98-499b-a132-954899d270c0","Type":"ContainerStarted","Data":"95610761ed527b0bb8dd8d13c51fd8ebc68fcf3c119ba29c7334e54bff16db20"} Feb 28 13:39:06 crc kubenswrapper[4897]: I0228 13:39:06.760188 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 13:39:06 crc kubenswrapper[4897]: I0228 13:39:06.803177 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-7765f74f9-bjr4m"] Feb 28 13:39:06 crc kubenswrapper[4897]: I0228 13:39:06.804993 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7765f74f9-bjr4m" Feb 28 13:39:06 crc kubenswrapper[4897]: I0228 13:39:06.808234 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 28 13:39:06 crc kubenswrapper[4897]: I0228 13:39:06.808424 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 28 13:39:06 crc kubenswrapper[4897]: I0228 13:39:06.809033 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 28 13:39:06 crc kubenswrapper[4897]: I0228 13:39:06.816345 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7765f74f9-bjr4m"] Feb 28 13:39:06 crc kubenswrapper[4897]: I0228 13:39:06.898815 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgpm5\" (UniqueName: \"kubernetes.io/projected/2ea92bb0-3068-4ffe-b85c-ce041cc1911e-kube-api-access-qgpm5\") pod \"swift-proxy-7765f74f9-bjr4m\" (UID: \"2ea92bb0-3068-4ffe-b85c-ce041cc1911e\") " pod="openstack/swift-proxy-7765f74f9-bjr4m" Feb 28 13:39:06 crc kubenswrapper[4897]: I0228 13:39:06.898900 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ea92bb0-3068-4ffe-b85c-ce041cc1911e-log-httpd\") pod \"swift-proxy-7765f74f9-bjr4m\" (UID: \"2ea92bb0-3068-4ffe-b85c-ce041cc1911e\") " pod="openstack/swift-proxy-7765f74f9-bjr4m" Feb 28 13:39:06 crc kubenswrapper[4897]: I0228 13:39:06.898946 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ea92bb0-3068-4ffe-b85c-ce041cc1911e-run-httpd\") pod \"swift-proxy-7765f74f9-bjr4m\" (UID: \"2ea92bb0-3068-4ffe-b85c-ce041cc1911e\") " pod="openstack/swift-proxy-7765f74f9-bjr4m" Feb 28 13:39:06 crc kubenswrapper[4897]: I0228 13:39:06.898981 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ea92bb0-3068-4ffe-b85c-ce041cc1911e-public-tls-certs\") pod \"swift-proxy-7765f74f9-bjr4m\" (UID: \"2ea92bb0-3068-4ffe-b85c-ce041cc1911e\") " pod="openstack/swift-proxy-7765f74f9-bjr4m" Feb 28 13:39:06 crc kubenswrapper[4897]: I0228 13:39:06.899072 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea92bb0-3068-4ffe-b85c-ce041cc1911e-combined-ca-bundle\") pod \"swift-proxy-7765f74f9-bjr4m\" (UID: \"2ea92bb0-3068-4ffe-b85c-ce041cc1911e\") " pod="openstack/swift-proxy-7765f74f9-bjr4m" Feb 28 13:39:06 crc kubenswrapper[4897]: I0228 13:39:06.899108 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ea92bb0-3068-4ffe-b85c-ce041cc1911e-internal-tls-certs\") pod \"swift-proxy-7765f74f9-bjr4m\" (UID: \"2ea92bb0-3068-4ffe-b85c-ce041cc1911e\") " pod="openstack/swift-proxy-7765f74f9-bjr4m" Feb 28 13:39:06 crc kubenswrapper[4897]: I0228 13:39:06.899132 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2ea92bb0-3068-4ffe-b85c-ce041cc1911e-etc-swift\") pod \"swift-proxy-7765f74f9-bjr4m\" (UID: \"2ea92bb0-3068-4ffe-b85c-ce041cc1911e\") " pod="openstack/swift-proxy-7765f74f9-bjr4m" Feb 28 13:39:06 crc kubenswrapper[4897]: I0228 13:39:06.899168 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea92bb0-3068-4ffe-b85c-ce041cc1911e-config-data\") pod \"swift-proxy-7765f74f9-bjr4m\" (UID: \"2ea92bb0-3068-4ffe-b85c-ce041cc1911e\") " pod="openstack/swift-proxy-7765f74f9-bjr4m" Feb 28 13:39:07 crc kubenswrapper[4897]: I0228 13:39:07.001452 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgpm5\" (UniqueName: \"kubernetes.io/projected/2ea92bb0-3068-4ffe-b85c-ce041cc1911e-kube-api-access-qgpm5\") pod \"swift-proxy-7765f74f9-bjr4m\" (UID: \"2ea92bb0-3068-4ffe-b85c-ce041cc1911e\") " pod="openstack/swift-proxy-7765f74f9-bjr4m" Feb 28 13:39:07 crc kubenswrapper[4897]: I0228 13:39:07.001541 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ea92bb0-3068-4ffe-b85c-ce041cc1911e-log-httpd\") pod \"swift-proxy-7765f74f9-bjr4m\" (UID: \"2ea92bb0-3068-4ffe-b85c-ce041cc1911e\") " pod="openstack/swift-proxy-7765f74f9-bjr4m" Feb 28 13:39:07 crc kubenswrapper[4897]: I0228 13:39:07.001566 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ea92bb0-3068-4ffe-b85c-ce041cc1911e-run-httpd\") pod \"swift-proxy-7765f74f9-bjr4m\" (UID: \"2ea92bb0-3068-4ffe-b85c-ce041cc1911e\") " pod="openstack/swift-proxy-7765f74f9-bjr4m" Feb 28 13:39:07 crc kubenswrapper[4897]: I0228 13:39:07.001600 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ea92bb0-3068-4ffe-b85c-ce041cc1911e-public-tls-certs\") pod \"swift-proxy-7765f74f9-bjr4m\" (UID: \"2ea92bb0-3068-4ffe-b85c-ce041cc1911e\") " pod="openstack/swift-proxy-7765f74f9-bjr4m" Feb 28 13:39:07 crc kubenswrapper[4897]: I0228 13:39:07.001679 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea92bb0-3068-4ffe-b85c-ce041cc1911e-combined-ca-bundle\") pod \"swift-proxy-7765f74f9-bjr4m\" (UID: \"2ea92bb0-3068-4ffe-b85c-ce041cc1911e\") " pod="openstack/swift-proxy-7765f74f9-bjr4m" Feb 28 13:39:07 crc kubenswrapper[4897]: I0228 13:39:07.002058 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ea92bb0-3068-4ffe-b85c-ce041cc1911e-log-httpd\") pod \"swift-proxy-7765f74f9-bjr4m\" (UID: \"2ea92bb0-3068-4ffe-b85c-ce041cc1911e\") " pod="openstack/swift-proxy-7765f74f9-bjr4m" Feb 28 13:39:07 crc kubenswrapper[4897]: I0228 13:39:07.002204 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ea92bb0-3068-4ffe-b85c-ce041cc1911e-run-httpd\") pod \"swift-proxy-7765f74f9-bjr4m\" (UID: \"2ea92bb0-3068-4ffe-b85c-ce041cc1911e\") " pod="openstack/swift-proxy-7765f74f9-bjr4m" Feb 28 13:39:07 crc kubenswrapper[4897]: I0228 13:39:07.002622 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ea92bb0-3068-4ffe-b85c-ce041cc1911e-internal-tls-certs\") pod \"swift-proxy-7765f74f9-bjr4m\" (UID: \"2ea92bb0-3068-4ffe-b85c-ce041cc1911e\") " pod="openstack/swift-proxy-7765f74f9-bjr4m" Feb 28 13:39:07 crc kubenswrapper[4897]: I0228 13:39:07.002675 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/2ea92bb0-3068-4ffe-b85c-ce041cc1911e-etc-swift\") pod \"swift-proxy-7765f74f9-bjr4m\" (UID: \"2ea92bb0-3068-4ffe-b85c-ce041cc1911e\") " pod="openstack/swift-proxy-7765f74f9-bjr4m" Feb 28 13:39:07 crc kubenswrapper[4897]: I0228 13:39:07.002723 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea92bb0-3068-4ffe-b85c-ce041cc1911e-config-data\") pod \"swift-proxy-7765f74f9-bjr4m\" (UID: \"2ea92bb0-3068-4ffe-b85c-ce041cc1911e\") " pod="openstack/swift-proxy-7765f74f9-bjr4m" Feb 28 13:39:07 crc kubenswrapper[4897]: I0228 13:39:07.009169 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ea92bb0-3068-4ffe-b85c-ce041cc1911e-internal-tls-certs\") pod \"swift-proxy-7765f74f9-bjr4m\" (UID: \"2ea92bb0-3068-4ffe-b85c-ce041cc1911e\") " pod="openstack/swift-proxy-7765f74f9-bjr4m" Feb 28 13:39:07 crc kubenswrapper[4897]: I0228 13:39:07.009510 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2ea92bb0-3068-4ffe-b85c-ce041cc1911e-etc-swift\") pod \"swift-proxy-7765f74f9-bjr4m\" (UID: \"2ea92bb0-3068-4ffe-b85c-ce041cc1911e\") " pod="openstack/swift-proxy-7765f74f9-bjr4m" Feb 28 13:39:07 crc kubenswrapper[4897]: I0228 13:39:07.010222 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ea92bb0-3068-4ffe-b85c-ce041cc1911e-public-tls-certs\") pod \"swift-proxy-7765f74f9-bjr4m\" (UID: \"2ea92bb0-3068-4ffe-b85c-ce041cc1911e\") " pod="openstack/swift-proxy-7765f74f9-bjr4m" Feb 28 13:39:07 crc kubenswrapper[4897]: I0228 13:39:07.011349 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea92bb0-3068-4ffe-b85c-ce041cc1911e-combined-ca-bundle\") pod 
\"swift-proxy-7765f74f9-bjr4m\" (UID: \"2ea92bb0-3068-4ffe-b85c-ce041cc1911e\") " pod="openstack/swift-proxy-7765f74f9-bjr4m" Feb 28 13:39:07 crc kubenswrapper[4897]: I0228 13:39:07.012143 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea92bb0-3068-4ffe-b85c-ce041cc1911e-config-data\") pod \"swift-proxy-7765f74f9-bjr4m\" (UID: \"2ea92bb0-3068-4ffe-b85c-ce041cc1911e\") " pod="openstack/swift-proxy-7765f74f9-bjr4m" Feb 28 13:39:07 crc kubenswrapper[4897]: I0228 13:39:07.028375 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgpm5\" (UniqueName: \"kubernetes.io/projected/2ea92bb0-3068-4ffe-b85c-ce041cc1911e-kube-api-access-qgpm5\") pod \"swift-proxy-7765f74f9-bjr4m\" (UID: \"2ea92bb0-3068-4ffe-b85c-ce041cc1911e\") " pod="openstack/swift-proxy-7765f74f9-bjr4m" Feb 28 13:39:07 crc kubenswrapper[4897]: I0228 13:39:07.135886 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7765f74f9-bjr4m" Feb 28 13:39:08 crc kubenswrapper[4897]: I0228 13:39:08.466378 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.331172 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-f8445"] Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.332631 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-f8445" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.351406 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-f8445"] Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.437649 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-4v8r2"] Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.439353 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-4v8r2" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.474786 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c4a08db-9a69-41de-b77c-5ebeb255cd29-operator-scripts\") pod \"nova-api-db-create-f8445\" (UID: \"8c4a08db-9a69-41de-b77c-5ebeb255cd29\") " pod="openstack/nova-api-db-create-f8445" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.474890 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2cnx\" (UniqueName: \"kubernetes.io/projected/8c4a08db-9a69-41de-b77c-5ebeb255cd29-kube-api-access-x2cnx\") pod \"nova-api-db-create-f8445\" (UID: \"8c4a08db-9a69-41de-b77c-5ebeb255cd29\") " pod="openstack/nova-api-db-create-f8445" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.482620 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-4v8r2"] Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.482652 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-297f-account-create-update-2g97b"] Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.483886 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-297f-account-create-update-2g97b" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.506947 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.516798 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-297f-account-create-update-2g97b"] Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.576868 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqlp5\" (UniqueName: \"kubernetes.io/projected/50f458e2-efe0-49ba-8fa3-135d3673b9a7-kube-api-access-sqlp5\") pod \"nova-api-297f-account-create-update-2g97b\" (UID: \"50f458e2-efe0-49ba-8fa3-135d3673b9a7\") " pod="openstack/nova-api-297f-account-create-update-2g97b" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.576907 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50f458e2-efe0-49ba-8fa3-135d3673b9a7-operator-scripts\") pod \"nova-api-297f-account-create-update-2g97b\" (UID: \"50f458e2-efe0-49ba-8fa3-135d3673b9a7\") " pod="openstack/nova-api-297f-account-create-update-2g97b" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.576999 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlks6\" (UniqueName: \"kubernetes.io/projected/550d213b-7a35-4053-8364-e78d03f794ca-kube-api-access-dlks6\") pod \"nova-cell0-db-create-4v8r2\" (UID: \"550d213b-7a35-4053-8364-e78d03f794ca\") " pod="openstack/nova-cell0-db-create-4v8r2" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.577035 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c4a08db-9a69-41de-b77c-5ebeb255cd29-operator-scripts\") pod 
\"nova-api-db-create-f8445\" (UID: \"8c4a08db-9a69-41de-b77c-5ebeb255cd29\") " pod="openstack/nova-api-db-create-f8445" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.577188 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2cnx\" (UniqueName: \"kubernetes.io/projected/8c4a08db-9a69-41de-b77c-5ebeb255cd29-kube-api-access-x2cnx\") pod \"nova-api-db-create-f8445\" (UID: \"8c4a08db-9a69-41de-b77c-5ebeb255cd29\") " pod="openstack/nova-api-db-create-f8445" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.577244 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/550d213b-7a35-4053-8364-e78d03f794ca-operator-scripts\") pod \"nova-cell0-db-create-4v8r2\" (UID: \"550d213b-7a35-4053-8364-e78d03f794ca\") " pod="openstack/nova-cell0-db-create-4v8r2" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.578442 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c4a08db-9a69-41de-b77c-5ebeb255cd29-operator-scripts\") pod \"nova-api-db-create-f8445\" (UID: \"8c4a08db-9a69-41de-b77c-5ebeb255cd29\") " pod="openstack/nova-api-db-create-f8445" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.603898 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2cnx\" (UniqueName: \"kubernetes.io/projected/8c4a08db-9a69-41de-b77c-5ebeb255cd29-kube-api-access-x2cnx\") pod \"nova-api-db-create-f8445\" (UID: \"8c4a08db-9a69-41de-b77c-5ebeb255cd29\") " pod="openstack/nova-api-db-create-f8445" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.660004 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-lfprr"] Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.661569 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-lfprr" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.661599 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-f8445" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.675642 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-c071-account-create-update-4cxfk"] Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.677177 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-c071-account-create-update-4cxfk" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.678488 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqlp5\" (UniqueName: \"kubernetes.io/projected/50f458e2-efe0-49ba-8fa3-135d3673b9a7-kube-api-access-sqlp5\") pod \"nova-api-297f-account-create-update-2g97b\" (UID: \"50f458e2-efe0-49ba-8fa3-135d3673b9a7\") " pod="openstack/nova-api-297f-account-create-update-2g97b" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.678530 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50f458e2-efe0-49ba-8fa3-135d3673b9a7-operator-scripts\") pod \"nova-api-297f-account-create-update-2g97b\" (UID: \"50f458e2-efe0-49ba-8fa3-135d3673b9a7\") " pod="openstack/nova-api-297f-account-create-update-2g97b" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.678599 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlks6\" (UniqueName: \"kubernetes.io/projected/550d213b-7a35-4053-8364-e78d03f794ca-kube-api-access-dlks6\") pod \"nova-cell0-db-create-4v8r2\" (UID: \"550d213b-7a35-4053-8364-e78d03f794ca\") " pod="openstack/nova-cell0-db-create-4v8r2" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.678734 4897 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/550d213b-7a35-4053-8364-e78d03f794ca-operator-scripts\") pod \"nova-cell0-db-create-4v8r2\" (UID: \"550d213b-7a35-4053-8364-e78d03f794ca\") " pod="openstack/nova-cell0-db-create-4v8r2" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.679405 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/550d213b-7a35-4053-8364-e78d03f794ca-operator-scripts\") pod \"nova-cell0-db-create-4v8r2\" (UID: \"550d213b-7a35-4053-8364-e78d03f794ca\") " pod="openstack/nova-cell0-db-create-4v8r2" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.679441 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50f458e2-efe0-49ba-8fa3-135d3673b9a7-operator-scripts\") pod \"nova-api-297f-account-create-update-2g97b\" (UID: \"50f458e2-efe0-49ba-8fa3-135d3673b9a7\") " pod="openstack/nova-api-297f-account-create-update-2g97b" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.683602 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.697594 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqlp5\" (UniqueName: \"kubernetes.io/projected/50f458e2-efe0-49ba-8fa3-135d3673b9a7-kube-api-access-sqlp5\") pod \"nova-api-297f-account-create-update-2g97b\" (UID: \"50f458e2-efe0-49ba-8fa3-135d3673b9a7\") " pod="openstack/nova-api-297f-account-create-update-2g97b" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.700903 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlks6\" (UniqueName: \"kubernetes.io/projected/550d213b-7a35-4053-8364-e78d03f794ca-kube-api-access-dlks6\") pod \"nova-cell0-db-create-4v8r2\" (UID: \"550d213b-7a35-4053-8364-e78d03f794ca\") " 
pod="openstack/nova-cell0-db-create-4v8r2" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.701052 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-c071-account-create-update-4cxfk"] Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.718946 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-lfprr"] Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.776177 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-4v8r2" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.779943 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjb8j\" (UniqueName: \"kubernetes.io/projected/3e3334c1-0b6c-4ea0-8c8c-1c5652d64af9-kube-api-access-jjb8j\") pod \"nova-cell1-db-create-lfprr\" (UID: \"3e3334c1-0b6c-4ea0-8c8c-1c5652d64af9\") " pod="openstack/nova-cell1-db-create-lfprr" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.780049 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f36fdddb-f718-4f7a-bc78-1a5a543fdefe-operator-scripts\") pod \"nova-cell0-c071-account-create-update-4cxfk\" (UID: \"f36fdddb-f718-4f7a-bc78-1a5a543fdefe\") " pod="openstack/nova-cell0-c071-account-create-update-4cxfk" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.780171 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl5dh\" (UniqueName: \"kubernetes.io/projected/f36fdddb-f718-4f7a-bc78-1a5a543fdefe-kube-api-access-fl5dh\") pod \"nova-cell0-c071-account-create-update-4cxfk\" (UID: \"f36fdddb-f718-4f7a-bc78-1a5a543fdefe\") " pod="openstack/nova-cell0-c071-account-create-update-4cxfk" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.780216 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e3334c1-0b6c-4ea0-8c8c-1c5652d64af9-operator-scripts\") pod \"nova-cell1-db-create-lfprr\" (UID: \"3e3334c1-0b6c-4ea0-8c8c-1c5652d64af9\") " pod="openstack/nova-cell1-db-create-lfprr" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.805858 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-297f-account-create-update-2g97b" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.839180 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-d87b-account-create-update-znc9q"] Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.841279 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d87b-account-create-update-znc9q" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.842976 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.864744 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-d87b-account-create-update-znc9q"] Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.881389 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl5dh\" (UniqueName: \"kubernetes.io/projected/f36fdddb-f718-4f7a-bc78-1a5a543fdefe-kube-api-access-fl5dh\") pod \"nova-cell0-c071-account-create-update-4cxfk\" (UID: \"f36fdddb-f718-4f7a-bc78-1a5a543fdefe\") " pod="openstack/nova-cell0-c071-account-create-update-4cxfk" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.881437 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e3334c1-0b6c-4ea0-8c8c-1c5652d64af9-operator-scripts\") pod \"nova-cell1-db-create-lfprr\" (UID: 
\"3e3334c1-0b6c-4ea0-8c8c-1c5652d64af9\") " pod="openstack/nova-cell1-db-create-lfprr" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.881493 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjb8j\" (UniqueName: \"kubernetes.io/projected/3e3334c1-0b6c-4ea0-8c8c-1c5652d64af9-kube-api-access-jjb8j\") pod \"nova-cell1-db-create-lfprr\" (UID: \"3e3334c1-0b6c-4ea0-8c8c-1c5652d64af9\") " pod="openstack/nova-cell1-db-create-lfprr" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.881531 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f36fdddb-f718-4f7a-bc78-1a5a543fdefe-operator-scripts\") pod \"nova-cell0-c071-account-create-update-4cxfk\" (UID: \"f36fdddb-f718-4f7a-bc78-1a5a543fdefe\") " pod="openstack/nova-cell0-c071-account-create-update-4cxfk" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.882303 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f36fdddb-f718-4f7a-bc78-1a5a543fdefe-operator-scripts\") pod \"nova-cell0-c071-account-create-update-4cxfk\" (UID: \"f36fdddb-f718-4f7a-bc78-1a5a543fdefe\") " pod="openstack/nova-cell0-c071-account-create-update-4cxfk" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.882485 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e3334c1-0b6c-4ea0-8c8c-1c5652d64af9-operator-scripts\") pod \"nova-cell1-db-create-lfprr\" (UID: \"3e3334c1-0b6c-4ea0-8c8c-1c5652d64af9\") " pod="openstack/nova-cell1-db-create-lfprr" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.898092 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjb8j\" (UniqueName: \"kubernetes.io/projected/3e3334c1-0b6c-4ea0-8c8c-1c5652d64af9-kube-api-access-jjb8j\") pod 
\"nova-cell1-db-create-lfprr\" (UID: \"3e3334c1-0b6c-4ea0-8c8c-1c5652d64af9\") " pod="openstack/nova-cell1-db-create-lfprr" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.916201 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl5dh\" (UniqueName: \"kubernetes.io/projected/f36fdddb-f718-4f7a-bc78-1a5a543fdefe-kube-api-access-fl5dh\") pod \"nova-cell0-c071-account-create-update-4cxfk\" (UID: \"f36fdddb-f718-4f7a-bc78-1a5a543fdefe\") " pod="openstack/nova-cell0-c071-account-create-update-4cxfk" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.983442 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff4a0d0d-f8d7-42b2-983a-44af7086a43d-operator-scripts\") pod \"nova-cell1-d87b-account-create-update-znc9q\" (UID: \"ff4a0d0d-f8d7-42b2-983a-44af7086a43d\") " pod="openstack/nova-cell1-d87b-account-create-update-znc9q" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.983553 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtw2h\" (UniqueName: \"kubernetes.io/projected/ff4a0d0d-f8d7-42b2-983a-44af7086a43d-kube-api-access-vtw2h\") pod \"nova-cell1-d87b-account-create-update-znc9q\" (UID: \"ff4a0d0d-f8d7-42b2-983a-44af7086a43d\") " pod="openstack/nova-cell1-d87b-account-create-update-znc9q" Feb 28 13:39:10 crc kubenswrapper[4897]: I0228 13:39:10.994552 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-lfprr" Feb 28 13:39:11 crc kubenswrapper[4897]: I0228 13:39:11.086152 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff4a0d0d-f8d7-42b2-983a-44af7086a43d-operator-scripts\") pod \"nova-cell1-d87b-account-create-update-znc9q\" (UID: \"ff4a0d0d-f8d7-42b2-983a-44af7086a43d\") " pod="openstack/nova-cell1-d87b-account-create-update-znc9q" Feb 28 13:39:11 crc kubenswrapper[4897]: I0228 13:39:11.086323 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtw2h\" (UniqueName: \"kubernetes.io/projected/ff4a0d0d-f8d7-42b2-983a-44af7086a43d-kube-api-access-vtw2h\") pod \"nova-cell1-d87b-account-create-update-znc9q\" (UID: \"ff4a0d0d-f8d7-42b2-983a-44af7086a43d\") " pod="openstack/nova-cell1-d87b-account-create-update-znc9q" Feb 28 13:39:11 crc kubenswrapper[4897]: I0228 13:39:11.086945 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff4a0d0d-f8d7-42b2-983a-44af7086a43d-operator-scripts\") pod \"nova-cell1-d87b-account-create-update-znc9q\" (UID: \"ff4a0d0d-f8d7-42b2-983a-44af7086a43d\") " pod="openstack/nova-cell1-d87b-account-create-update-znc9q" Feb 28 13:39:11 crc kubenswrapper[4897]: I0228 13:39:11.106980 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtw2h\" (UniqueName: \"kubernetes.io/projected/ff4a0d0d-f8d7-42b2-983a-44af7086a43d-kube-api-access-vtw2h\") pod \"nova-cell1-d87b-account-create-update-znc9q\" (UID: \"ff4a0d0d-f8d7-42b2-983a-44af7086a43d\") " pod="openstack/nova-cell1-d87b-account-create-update-znc9q" Feb 28 13:39:11 crc kubenswrapper[4897]: I0228 13:39:11.111674 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-c071-account-create-update-4cxfk" Feb 28 13:39:11 crc kubenswrapper[4897]: I0228 13:39:11.184873 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d87b-account-create-update-znc9q" Feb 28 13:39:11 crc kubenswrapper[4897]: I0228 13:39:11.456680 4897 scope.go:117] "RemoveContainer" containerID="02bf3de735905cac84484f7527ca357452ce3c0a3a6cd83376b0ef258ed0c530" Feb 28 13:39:11 crc kubenswrapper[4897]: E0228 13:39:11.457231 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(2b88f822-8f2a-473a-b388-b144a37ba4f0)\"" pod="openstack/watcher-decision-engine-0" podUID="2b88f822-8f2a-473a-b388-b144a37ba4f0" Feb 28 13:39:12 crc kubenswrapper[4897]: I0228 13:39:12.117697 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6fb67c45d-s75qr" podUID="6102738c-6c77-48c6-87e1-67853cf8ce43" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.163:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.163:8443: connect: connection refused" Feb 28 13:39:13 crc kubenswrapper[4897]: E0228 13:39:13.029867 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:39:13 crc kubenswrapper[4897]: I0228 13:39:13.125168 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 13:39:13 crc kubenswrapper[4897]: I0228 13:39:13.126037 4897 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/glance-default-external-api-0" podUID="9910b644-86b9-44e7-856e-4fbaf1d1a740" containerName="glance-log" containerID="cri-o://d9ea3b944a1498d6de5cbdbd7af3ded190197b2fc9767cd936f4808d440900ed" gracePeriod=30 Feb 28 13:39:13 crc kubenswrapper[4897]: I0228 13:39:13.126501 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9910b644-86b9-44e7-856e-4fbaf1d1a740" containerName="glance-httpd" containerID="cri-o://d73ff1abc0cf68fa36395578360725cbca1bd6ab9153cbbc88ebf47152ff83bc" gracePeriod=30 Feb 28 13:39:13 crc kubenswrapper[4897]: I0228 13:39:13.670773 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7765f74f9-bjr4m"] Feb 28 13:39:13 crc kubenswrapper[4897]: I0228 13:39:13.769023 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7765f74f9-bjr4m" event={"ID":"2ea92bb0-3068-4ffe-b85c-ce041cc1911e","Type":"ContainerStarted","Data":"60206412d19cdc43ca1cd770a7a98f80e6c41b1c17a5b7bfacc1909f0c5137bf"} Feb 28 13:39:13 crc kubenswrapper[4897]: I0228 13:39:13.777465 4897 generic.go:334] "Generic (PLEG): container finished" podID="9910b644-86b9-44e7-856e-4fbaf1d1a740" containerID="d9ea3b944a1498d6de5cbdbd7af3ded190197b2fc9767cd936f4808d440900ed" exitCode=143 Feb 28 13:39:13 crc kubenswrapper[4897]: I0228 13:39:13.777510 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9910b644-86b9-44e7-856e-4fbaf1d1a740","Type":"ContainerDied","Data":"d9ea3b944a1498d6de5cbdbd7af3ded190197b2fc9767cd936f4808d440900ed"} Feb 28 13:39:14 crc kubenswrapper[4897]: I0228 13:39:14.139198 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-c071-account-create-update-4cxfk"] Feb 28 13:39:14 crc kubenswrapper[4897]: W0228 13:39:14.164343 4897 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf36fdddb_f718_4f7a_bc78_1a5a543fdefe.slice/crio-ab95809703dfd50dbb27111900edb93801af294a0019968fe27aaa20816118fd WatchSource:0}: Error finding container ab95809703dfd50dbb27111900edb93801af294a0019968fe27aaa20816118fd: Status 404 returned error can't find the container with id ab95809703dfd50dbb27111900edb93801af294a0019968fe27aaa20816118fd Feb 28 13:39:14 crc kubenswrapper[4897]: I0228 13:39:14.244369 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-4v8r2"] Feb 28 13:39:14 crc kubenswrapper[4897]: I0228 13:39:14.311619 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 28 13:39:14 crc kubenswrapper[4897]: I0228 13:39:14.311843 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="86b347d9-5a82-4e31-9ba3-1e5c82decb50" containerName="glance-log" containerID="cri-o://1555e57459a95ac8ccff278547b90a551e071926a04b87210994bce003d28338" gracePeriod=30 Feb 28 13:39:14 crc kubenswrapper[4897]: I0228 13:39:14.312222 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="86b347d9-5a82-4e31-9ba3-1e5c82decb50" containerName="glance-httpd" containerID="cri-o://899e7d75e84c6c9f8155f00fe9b148671cdfafe1a1d82293214800ecd06e3f52" gracePeriod=30 Feb 28 13:39:14 crc kubenswrapper[4897]: I0228 13:39:14.373658 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-297f-account-create-update-2g97b"] Feb 28 13:39:14 crc kubenswrapper[4897]: W0228 13:39:14.429569 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50f458e2_efe0_49ba_8fa3_135d3673b9a7.slice/crio-a2a9ef9cbe7a80a819f302b39a8aefe9057bd8667ac3fc6bfd27b0bb0cf35815 WatchSource:0}: Error finding container 
a2a9ef9cbe7a80a819f302b39a8aefe9057bd8667ac3fc6bfd27b0bb0cf35815: Status 404 returned error can't find the container with id a2a9ef9cbe7a80a819f302b39a8aefe9057bd8667ac3fc6bfd27b0bb0cf35815 Feb 28 13:39:14 crc kubenswrapper[4897]: I0228 13:39:14.514375 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-f8445"] Feb 28 13:39:14 crc kubenswrapper[4897]: I0228 13:39:14.514657 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-lfprr"] Feb 28 13:39:14 crc kubenswrapper[4897]: I0228 13:39:14.536784 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-d87b-account-create-update-znc9q"] Feb 28 13:39:14 crc kubenswrapper[4897]: I0228 13:39:14.798181 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-f8445" event={"ID":"8c4a08db-9a69-41de-b77c-5ebeb255cd29","Type":"ContainerStarted","Data":"5b356112ada1a981ffb82d6d328980f673add9f72bb47ebbbdacb4f855293980"} Feb 28 13:39:14 crc kubenswrapper[4897]: I0228 13:39:14.809058 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-c071-account-create-update-4cxfk" event={"ID":"f36fdddb-f718-4f7a-bc78-1a5a543fdefe","Type":"ContainerStarted","Data":"5bb15e400aa68a527c5bf52ef1421e2ffa7853c527c7de11d4cec78fd9e45ec3"} Feb 28 13:39:14 crc kubenswrapper[4897]: I0228 13:39:14.809123 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-c071-account-create-update-4cxfk" event={"ID":"f36fdddb-f718-4f7a-bc78-1a5a543fdefe","Type":"ContainerStarted","Data":"ab95809703dfd50dbb27111900edb93801af294a0019968fe27aaa20816118fd"} Feb 28 13:39:14 crc kubenswrapper[4897]: I0228 13:39:14.823625 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7765f74f9-bjr4m" event={"ID":"2ea92bb0-3068-4ffe-b85c-ce041cc1911e","Type":"ContainerStarted","Data":"6fb6ff8861da9a61a02494ab3b393cf5faed8b1c23ada5abb5c0b907881102b0"} Feb 28 13:39:14 crc 
kubenswrapper[4897]: I0228 13:39:14.823843 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7765f74f9-bjr4m" event={"ID":"2ea92bb0-3068-4ffe-b85c-ce041cc1911e","Type":"ContainerStarted","Data":"2c8d12073b90162ddf2154d5db5ebf7b524986832fc6847f0f43c0f283c6ba6d"} Feb 28 13:39:14 crc kubenswrapper[4897]: I0228 13:39:14.823937 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7765f74f9-bjr4m" Feb 28 13:39:14 crc kubenswrapper[4897]: I0228 13:39:14.824282 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7765f74f9-bjr4m" Feb 28 13:39:14 crc kubenswrapper[4897]: I0228 13:39:14.827253 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-c071-account-create-update-4cxfk" podStartSLOduration=4.827236846 podStartE2EDuration="4.827236846s" podCreationTimestamp="2026-02-28 13:39:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:39:14.825804359 +0000 UTC m=+1369.068125016" watchObservedRunningTime="2026-02-28 13:39:14.827236846 +0000 UTC m=+1369.069557493" Feb 28 13:39:14 crc kubenswrapper[4897]: I0228 13:39:14.835559 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d87b-account-create-update-znc9q" event={"ID":"ff4a0d0d-f8d7-42b2-983a-44af7086a43d","Type":"ContainerStarted","Data":"07b6d7bde6d4545fc01c236995d34782e69cc069d589eb9006014335d8c56bcb"} Feb 28 13:39:14 crc kubenswrapper[4897]: I0228 13:39:14.841844 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"2910518a-9b98-499b-a132-954899d270c0","Type":"ContainerStarted","Data":"3e4a765b116862f4628ed18d1048b86186dfdf89b6120f1d6a290f01bc622a38"} Feb 28 13:39:14 crc kubenswrapper[4897]: I0228 13:39:14.844179 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/swift-proxy-7765f74f9-bjr4m" podStartSLOduration=8.844167591 podStartE2EDuration="8.844167591s" podCreationTimestamp="2026-02-28 13:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:39:14.840826495 +0000 UTC m=+1369.083147162" watchObservedRunningTime="2026-02-28 13:39:14.844167591 +0000 UTC m=+1369.086488248" Feb 28 13:39:14 crc kubenswrapper[4897]: I0228 13:39:14.862690 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.19706966 podStartE2EDuration="12.862674396s" podCreationTimestamp="2026-02-28 13:39:02 +0000 UTC" firstStartedPulling="2026-02-28 13:39:03.067719659 +0000 UTC m=+1357.310040316" lastFinishedPulling="2026-02-28 13:39:13.733324395 +0000 UTC m=+1367.975645052" observedRunningTime="2026-02-28 13:39:14.862520472 +0000 UTC m=+1369.104841129" watchObservedRunningTime="2026-02-28 13:39:14.862674396 +0000 UTC m=+1369.104995043" Feb 28 13:39:14 crc kubenswrapper[4897]: I0228 13:39:14.868937 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-4v8r2" event={"ID":"550d213b-7a35-4053-8364-e78d03f794ca","Type":"ContainerStarted","Data":"eb63a77e85e634d0f90885421e8ef12e03a0c23d35c1ba10768f6c38592cb5c9"} Feb 28 13:39:14 crc kubenswrapper[4897]: I0228 13:39:14.869169 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-4v8r2" event={"ID":"550d213b-7a35-4053-8364-e78d03f794ca","Type":"ContainerStarted","Data":"5b6ef255dd08351452a04c10629f3b9699adaab916d8fb70fbadbc82841e1080"} Feb 28 13:39:14 crc kubenswrapper[4897]: I0228 13:39:14.883232 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-lfprr" event={"ID":"3e3334c1-0b6c-4ea0-8c8c-1c5652d64af9","Type":"ContainerStarted","Data":"f11f2a34787ba070f1bbc7a68cbc846b0b21fcafdb03a30a9a418d559afbc523"} Feb 28 
13:39:14 crc kubenswrapper[4897]: I0228 13:39:14.903117 4897 generic.go:334] "Generic (PLEG): container finished" podID="86b347d9-5a82-4e31-9ba3-1e5c82decb50" containerID="1555e57459a95ac8ccff278547b90a551e071926a04b87210994bce003d28338" exitCode=143 Feb 28 13:39:14 crc kubenswrapper[4897]: I0228 13:39:14.903318 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"86b347d9-5a82-4e31-9ba3-1e5c82decb50","Type":"ContainerDied","Data":"1555e57459a95ac8ccff278547b90a551e071926a04b87210994bce003d28338"} Feb 28 13:39:14 crc kubenswrapper[4897]: I0228 13:39:14.918111 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-297f-account-create-update-2g97b" event={"ID":"50f458e2-efe0-49ba-8fa3-135d3673b9a7","Type":"ContainerStarted","Data":"a2a9ef9cbe7a80a819f302b39a8aefe9057bd8667ac3fc6bfd27b0bb0cf35815"} Feb 28 13:39:14 crc kubenswrapper[4897]: I0228 13:39:14.949212 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-297f-account-create-update-2g97b" podStartSLOduration=4.949194798 podStartE2EDuration="4.949194798s" podCreationTimestamp="2026-02-28 13:39:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:39:14.940794133 +0000 UTC m=+1369.183114790" watchObservedRunningTime="2026-02-28 13:39:14.949194798 +0000 UTC m=+1369.191515455" Feb 28 13:39:14 crc kubenswrapper[4897]: I0228 13:39:14.954924 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-4v8r2" podStartSLOduration=4.954916015 podStartE2EDuration="4.954916015s" podCreationTimestamp="2026-02-28 13:39:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:39:14.891558718 +0000 UTC m=+1369.133879375" watchObservedRunningTime="2026-02-28 
13:39:14.954916015 +0000 UTC m=+1369.197236662" Feb 28 13:39:15 crc kubenswrapper[4897]: I0228 13:39:15.814024 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 28 13:39:15 crc kubenswrapper[4897]: I0228 13:39:15.908829 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9910b644-86b9-44e7-856e-4fbaf1d1a740-scripts\") pod \"9910b644-86b9-44e7-856e-4fbaf1d1a740\" (UID: \"9910b644-86b9-44e7-856e-4fbaf1d1a740\") " Feb 28 13:39:15 crc kubenswrapper[4897]: I0228 13:39:15.909197 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsjr2\" (UniqueName: \"kubernetes.io/projected/9910b644-86b9-44e7-856e-4fbaf1d1a740-kube-api-access-xsjr2\") pod \"9910b644-86b9-44e7-856e-4fbaf1d1a740\" (UID: \"9910b644-86b9-44e7-856e-4fbaf1d1a740\") " Feb 28 13:39:15 crc kubenswrapper[4897]: I0228 13:39:15.909227 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9910b644-86b9-44e7-856e-4fbaf1d1a740-config-data\") pod \"9910b644-86b9-44e7-856e-4fbaf1d1a740\" (UID: \"9910b644-86b9-44e7-856e-4fbaf1d1a740\") " Feb 28 13:39:15 crc kubenswrapper[4897]: I0228 13:39:15.909259 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"9910b644-86b9-44e7-856e-4fbaf1d1a740\" (UID: \"9910b644-86b9-44e7-856e-4fbaf1d1a740\") " Feb 28 13:39:15 crc kubenswrapper[4897]: I0228 13:39:15.909294 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9910b644-86b9-44e7-856e-4fbaf1d1a740-public-tls-certs\") pod \"9910b644-86b9-44e7-856e-4fbaf1d1a740\" (UID: \"9910b644-86b9-44e7-856e-4fbaf1d1a740\") " Feb 28 13:39:15 crc 
kubenswrapper[4897]: I0228 13:39:15.909328 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9910b644-86b9-44e7-856e-4fbaf1d1a740-combined-ca-bundle\") pod \"9910b644-86b9-44e7-856e-4fbaf1d1a740\" (UID: \"9910b644-86b9-44e7-856e-4fbaf1d1a740\") " Feb 28 13:39:15 crc kubenswrapper[4897]: I0228 13:39:15.909440 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9910b644-86b9-44e7-856e-4fbaf1d1a740-httpd-run\") pod \"9910b644-86b9-44e7-856e-4fbaf1d1a740\" (UID: \"9910b644-86b9-44e7-856e-4fbaf1d1a740\") " Feb 28 13:39:15 crc kubenswrapper[4897]: I0228 13:39:15.909536 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9910b644-86b9-44e7-856e-4fbaf1d1a740-logs\") pod \"9910b644-86b9-44e7-856e-4fbaf1d1a740\" (UID: \"9910b644-86b9-44e7-856e-4fbaf1d1a740\") " Feb 28 13:39:15 crc kubenswrapper[4897]: I0228 13:39:15.910462 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9910b644-86b9-44e7-856e-4fbaf1d1a740-logs" (OuterVolumeSpecName: "logs") pod "9910b644-86b9-44e7-856e-4fbaf1d1a740" (UID: "9910b644-86b9-44e7-856e-4fbaf1d1a740"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:39:15 crc kubenswrapper[4897]: I0228 13:39:15.910814 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9910b644-86b9-44e7-856e-4fbaf1d1a740-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9910b644-86b9-44e7-856e-4fbaf1d1a740" (UID: "9910b644-86b9-44e7-856e-4fbaf1d1a740"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:39:15 crc kubenswrapper[4897]: I0228 13:39:15.917968 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9910b644-86b9-44e7-856e-4fbaf1d1a740-scripts" (OuterVolumeSpecName: "scripts") pod "9910b644-86b9-44e7-856e-4fbaf1d1a740" (UID: "9910b644-86b9-44e7-856e-4fbaf1d1a740"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:39:15 crc kubenswrapper[4897]: I0228 13:39:15.919347 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "9910b644-86b9-44e7-856e-4fbaf1d1a740" (UID: "9910b644-86b9-44e7-856e-4fbaf1d1a740"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 28 13:39:15 crc kubenswrapper[4897]: I0228 13:39:15.929923 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9910b644-86b9-44e7-856e-4fbaf1d1a740-kube-api-access-xsjr2" (OuterVolumeSpecName: "kube-api-access-xsjr2") pod "9910b644-86b9-44e7-856e-4fbaf1d1a740" (UID: "9910b644-86b9-44e7-856e-4fbaf1d1a740"). InnerVolumeSpecName "kube-api-access-xsjr2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:39:15 crc kubenswrapper[4897]: I0228 13:39:15.949712 4897 generic.go:334] "Generic (PLEG): container finished" podID="8c4a08db-9a69-41de-b77c-5ebeb255cd29" containerID="1e48acf4c11a6fa472f80eba7c2a0adcb921649925da1164b44cf6f65b3216a4" exitCode=0 Feb 28 13:39:15 crc kubenswrapper[4897]: I0228 13:39:15.949804 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-f8445" event={"ID":"8c4a08db-9a69-41de-b77c-5ebeb255cd29","Type":"ContainerDied","Data":"1e48acf4c11a6fa472f80eba7c2a0adcb921649925da1164b44cf6f65b3216a4"} Feb 28 13:39:15 crc kubenswrapper[4897]: I0228 13:39:15.960064 4897 generic.go:334] "Generic (PLEG): container finished" podID="3e3334c1-0b6c-4ea0-8c8c-1c5652d64af9" containerID="69662651062e32e83408d45b84ef1fa8d88485b2f7df9019a18bc7b50c700294" exitCode=0 Feb 28 13:39:15 crc kubenswrapper[4897]: I0228 13:39:15.960135 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-lfprr" event={"ID":"3e3334c1-0b6c-4ea0-8c8c-1c5652d64af9","Type":"ContainerDied","Data":"69662651062e32e83408d45b84ef1fa8d88485b2f7df9019a18bc7b50c700294"} Feb 28 13:39:15 crc kubenswrapper[4897]: I0228 13:39:15.975768 4897 generic.go:334] "Generic (PLEG): container finished" podID="f36fdddb-f718-4f7a-bc78-1a5a543fdefe" containerID="5bb15e400aa68a527c5bf52ef1421e2ffa7853c527c7de11d4cec78fd9e45ec3" exitCode=0 Feb 28 13:39:15 crc kubenswrapper[4897]: I0228 13:39:15.975837 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-c071-account-create-update-4cxfk" event={"ID":"f36fdddb-f718-4f7a-bc78-1a5a543fdefe","Type":"ContainerDied","Data":"5bb15e400aa68a527c5bf52ef1421e2ffa7853c527c7de11d4cec78fd9e45ec3"} Feb 28 13:39:15 crc kubenswrapper[4897]: I0228 13:39:15.988540 4897 generic.go:334] "Generic (PLEG): container finished" podID="9910b644-86b9-44e7-856e-4fbaf1d1a740" 
containerID="d73ff1abc0cf68fa36395578360725cbca1bd6ab9153cbbc88ebf47152ff83bc" exitCode=0 Feb 28 13:39:15 crc kubenswrapper[4897]: I0228 13:39:15.988601 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9910b644-86b9-44e7-856e-4fbaf1d1a740","Type":"ContainerDied","Data":"d73ff1abc0cf68fa36395578360725cbca1bd6ab9153cbbc88ebf47152ff83bc"} Feb 28 13:39:15 crc kubenswrapper[4897]: I0228 13:39:15.988624 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9910b644-86b9-44e7-856e-4fbaf1d1a740","Type":"ContainerDied","Data":"76f4b02a367da1f9a2019e7ae27423da042b04a1733e069007549ccc2c3b6001"} Feb 28 13:39:15 crc kubenswrapper[4897]: I0228 13:39:15.988639 4897 scope.go:117] "RemoveContainer" containerID="d73ff1abc0cf68fa36395578360725cbca1bd6ab9153cbbc88ebf47152ff83bc" Feb 28 13:39:15 crc kubenswrapper[4897]: I0228 13:39:15.988730 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 28 13:39:15 crc kubenswrapper[4897]: I0228 13:39:15.992783 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9910b644-86b9-44e7-856e-4fbaf1d1a740-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9910b644-86b9-44e7-856e-4fbaf1d1a740" (UID: "9910b644-86b9-44e7-856e-4fbaf1d1a740"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.004609 4897 generic.go:334] "Generic (PLEG): container finished" podID="50f458e2-efe0-49ba-8fa3-135d3673b9a7" containerID="1a548167e1018e7b52260eaf95f474b68e5f3775b49c5ca818a40bc79eefc798" exitCode=0 Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.004715 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-297f-account-create-update-2g97b" event={"ID":"50f458e2-efe0-49ba-8fa3-135d3673b9a7","Type":"ContainerDied","Data":"1a548167e1018e7b52260eaf95f474b68e5f3775b49c5ca818a40bc79eefc798"} Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.012005 4897 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.012025 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9910b644-86b9-44e7-856e-4fbaf1d1a740-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.012034 4897 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9910b644-86b9-44e7-856e-4fbaf1d1a740-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.012045 4897 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9910b644-86b9-44e7-856e-4fbaf1d1a740-logs\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.012053 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9910b644-86b9-44e7-856e-4fbaf1d1a740-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.012061 4897 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsjr2\" (UniqueName: \"kubernetes.io/projected/9910b644-86b9-44e7-856e-4fbaf1d1a740-kube-api-access-xsjr2\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.012172 4897 generic.go:334] "Generic (PLEG): container finished" podID="ff4a0d0d-f8d7-42b2-983a-44af7086a43d" containerID="75da0ea2dbd7506580b4f9c5e9a6b64ad7c1ce1a1d240ce592f6071082da0c6b" exitCode=0 Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.012301 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d87b-account-create-update-znc9q" event={"ID":"ff4a0d0d-f8d7-42b2-983a-44af7086a43d","Type":"ContainerDied","Data":"75da0ea2dbd7506580b4f9c5e9a6b64ad7c1ce1a1d240ce592f6071082da0c6b"} Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.028947 4897 generic.go:334] "Generic (PLEG): container finished" podID="550d213b-7a35-4053-8364-e78d03f794ca" containerID="eb63a77e85e634d0f90885421e8ef12e03a0c23d35c1ba10768f6c38592cb5c9" exitCode=0 Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.029543 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-4v8r2" event={"ID":"550d213b-7a35-4053-8364-e78d03f794ca","Type":"ContainerDied","Data":"eb63a77e85e634d0f90885421e8ef12e03a0c23d35c1ba10768f6c38592cb5c9"} Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.030867 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9910b644-86b9-44e7-856e-4fbaf1d1a740-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9910b644-86b9-44e7-856e-4fbaf1d1a740" (UID: "9910b644-86b9-44e7-856e-4fbaf1d1a740"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.031855 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9910b644-86b9-44e7-856e-4fbaf1d1a740-config-data" (OuterVolumeSpecName: "config-data") pod "9910b644-86b9-44e7-856e-4fbaf1d1a740" (UID: "9910b644-86b9-44e7-856e-4fbaf1d1a740"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.051098 4897 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.114206 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9910b644-86b9-44e7-856e-4fbaf1d1a740-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.114244 4897 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.114253 4897 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9910b644-86b9-44e7-856e-4fbaf1d1a740-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.120135 4897 scope.go:117] "RemoveContainer" containerID="d9ea3b944a1498d6de5cbdbd7af3ded190197b2fc9767cd936f4808d440900ed" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.142321 4897 scope.go:117] "RemoveContainer" containerID="d73ff1abc0cf68fa36395578360725cbca1bd6ab9153cbbc88ebf47152ff83bc" Feb 28 13:39:16 crc kubenswrapper[4897]: E0228 13:39:16.142641 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = could not find container \"d73ff1abc0cf68fa36395578360725cbca1bd6ab9153cbbc88ebf47152ff83bc\": container with ID starting with d73ff1abc0cf68fa36395578360725cbca1bd6ab9153cbbc88ebf47152ff83bc not found: ID does not exist" containerID="d73ff1abc0cf68fa36395578360725cbca1bd6ab9153cbbc88ebf47152ff83bc" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.142672 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d73ff1abc0cf68fa36395578360725cbca1bd6ab9153cbbc88ebf47152ff83bc"} err="failed to get container status \"d73ff1abc0cf68fa36395578360725cbca1bd6ab9153cbbc88ebf47152ff83bc\": rpc error: code = NotFound desc = could not find container \"d73ff1abc0cf68fa36395578360725cbca1bd6ab9153cbbc88ebf47152ff83bc\": container with ID starting with d73ff1abc0cf68fa36395578360725cbca1bd6ab9153cbbc88ebf47152ff83bc not found: ID does not exist" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.142705 4897 scope.go:117] "RemoveContainer" containerID="d9ea3b944a1498d6de5cbdbd7af3ded190197b2fc9767cd936f4808d440900ed" Feb 28 13:39:16 crc kubenswrapper[4897]: E0228 13:39:16.142970 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9ea3b944a1498d6de5cbdbd7af3ded190197b2fc9767cd936f4808d440900ed\": container with ID starting with d9ea3b944a1498d6de5cbdbd7af3ded190197b2fc9767cd936f4808d440900ed not found: ID does not exist" containerID="d9ea3b944a1498d6de5cbdbd7af3ded190197b2fc9767cd936f4808d440900ed" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.142991 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9ea3b944a1498d6de5cbdbd7af3ded190197b2fc9767cd936f4808d440900ed"} err="failed to get container status \"d9ea3b944a1498d6de5cbdbd7af3ded190197b2fc9767cd936f4808d440900ed\": rpc error: code = NotFound desc = could not find container 
\"d9ea3b944a1498d6de5cbdbd7af3ded190197b2fc9767cd936f4808d440900ed\": container with ID starting with d9ea3b944a1498d6de5cbdbd7af3ded190197b2fc9767cd936f4808d440900ed not found: ID does not exist" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.333357 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.373969 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.382373 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 13:39:16 crc kubenswrapper[4897]: E0228 13:39:16.382867 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9910b644-86b9-44e7-856e-4fbaf1d1a740" containerName="glance-httpd" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.382880 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="9910b644-86b9-44e7-856e-4fbaf1d1a740" containerName="glance-httpd" Feb 28 13:39:16 crc kubenswrapper[4897]: E0228 13:39:16.382899 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9910b644-86b9-44e7-856e-4fbaf1d1a740" containerName="glance-log" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.382904 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="9910b644-86b9-44e7-856e-4fbaf1d1a740" containerName="glance-log" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.383085 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="9910b644-86b9-44e7-856e-4fbaf1d1a740" containerName="glance-log" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.383112 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="9910b644-86b9-44e7-856e-4fbaf1d1a740" containerName="glance-httpd" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.384135 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.388229 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.388351 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.388470 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.428960 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/891bad69-3c9e-4c8a-b5fb-526b4ce79ec5-config-data\") pod \"glance-default-external-api-0\" (UID: \"891bad69-3c9e-4c8a-b5fb-526b4ce79ec5\") " pod="openstack/glance-default-external-api-0" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.429003 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/891bad69-3c9e-4c8a-b5fb-526b4ce79ec5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"891bad69-3c9e-4c8a-b5fb-526b4ce79ec5\") " pod="openstack/glance-default-external-api-0" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.429130 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/891bad69-3c9e-4c8a-b5fb-526b4ce79ec5-scripts\") pod \"glance-default-external-api-0\" (UID: \"891bad69-3c9e-4c8a-b5fb-526b4ce79ec5\") " pod="openstack/glance-default-external-api-0" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.429183 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/891bad69-3c9e-4c8a-b5fb-526b4ce79ec5-logs\") pod \"glance-default-external-api-0\" (UID: \"891bad69-3c9e-4c8a-b5fb-526b4ce79ec5\") " pod="openstack/glance-default-external-api-0" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.429201 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/891bad69-3c9e-4c8a-b5fb-526b4ce79ec5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"891bad69-3c9e-4c8a-b5fb-526b4ce79ec5\") " pod="openstack/glance-default-external-api-0" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.429442 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/891bad69-3c9e-4c8a-b5fb-526b4ce79ec5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"891bad69-3c9e-4c8a-b5fb-526b4ce79ec5\") " pod="openstack/glance-default-external-api-0" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.429590 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"891bad69-3c9e-4c8a-b5fb-526b4ce79ec5\") " pod="openstack/glance-default-external-api-0" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.429627 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbxzj\" (UniqueName: \"kubernetes.io/projected/891bad69-3c9e-4c8a-b5fb-526b4ce79ec5-kube-api-access-kbxzj\") pod \"glance-default-external-api-0\" (UID: \"891bad69-3c9e-4c8a-b5fb-526b4ce79ec5\") " pod="openstack/glance-default-external-api-0" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.488217 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="9910b644-86b9-44e7-856e-4fbaf1d1a740" path="/var/lib/kubelet/pods/9910b644-86b9-44e7-856e-4fbaf1d1a740/volumes" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.531538 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"891bad69-3c9e-4c8a-b5fb-526b4ce79ec5\") " pod="openstack/glance-default-external-api-0" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.531580 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbxzj\" (UniqueName: \"kubernetes.io/projected/891bad69-3c9e-4c8a-b5fb-526b4ce79ec5-kube-api-access-kbxzj\") pod \"glance-default-external-api-0\" (UID: \"891bad69-3c9e-4c8a-b5fb-526b4ce79ec5\") " pod="openstack/glance-default-external-api-0" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.531683 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/891bad69-3c9e-4c8a-b5fb-526b4ce79ec5-config-data\") pod \"glance-default-external-api-0\" (UID: \"891bad69-3c9e-4c8a-b5fb-526b4ce79ec5\") " pod="openstack/glance-default-external-api-0" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.531704 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/891bad69-3c9e-4c8a-b5fb-526b4ce79ec5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"891bad69-3c9e-4c8a-b5fb-526b4ce79ec5\") " pod="openstack/glance-default-external-api-0" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.531734 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/891bad69-3c9e-4c8a-b5fb-526b4ce79ec5-scripts\") pod \"glance-default-external-api-0\" (UID: \"891bad69-3c9e-4c8a-b5fb-526b4ce79ec5\") " 
pod="openstack/glance-default-external-api-0" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.531751 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/891bad69-3c9e-4c8a-b5fb-526b4ce79ec5-logs\") pod \"glance-default-external-api-0\" (UID: \"891bad69-3c9e-4c8a-b5fb-526b4ce79ec5\") " pod="openstack/glance-default-external-api-0" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.531765 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/891bad69-3c9e-4c8a-b5fb-526b4ce79ec5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"891bad69-3c9e-4c8a-b5fb-526b4ce79ec5\") " pod="openstack/glance-default-external-api-0" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.531844 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/891bad69-3c9e-4c8a-b5fb-526b4ce79ec5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"891bad69-3c9e-4c8a-b5fb-526b4ce79ec5\") " pod="openstack/glance-default-external-api-0" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.532952 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/891bad69-3c9e-4c8a-b5fb-526b4ce79ec5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"891bad69-3c9e-4c8a-b5fb-526b4ce79ec5\") " pod="openstack/glance-default-external-api-0" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.533207 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/891bad69-3c9e-4c8a-b5fb-526b4ce79ec5-logs\") pod \"glance-default-external-api-0\" (UID: \"891bad69-3c9e-4c8a-b5fb-526b4ce79ec5\") " pod="openstack/glance-default-external-api-0" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.535469 4897 
operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"891bad69-3c9e-4c8a-b5fb-526b4ce79ec5\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.542480 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/891bad69-3c9e-4c8a-b5fb-526b4ce79ec5-scripts\") pod \"glance-default-external-api-0\" (UID: \"891bad69-3c9e-4c8a-b5fb-526b4ce79ec5\") " pod="openstack/glance-default-external-api-0" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.552180 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/891bad69-3c9e-4c8a-b5fb-526b4ce79ec5-config-data\") pod \"glance-default-external-api-0\" (UID: \"891bad69-3c9e-4c8a-b5fb-526b4ce79ec5\") " pod="openstack/glance-default-external-api-0" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.567545 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/891bad69-3c9e-4c8a-b5fb-526b4ce79ec5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"891bad69-3c9e-4c8a-b5fb-526b4ce79ec5\") " pod="openstack/glance-default-external-api-0" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.569369 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbxzj\" (UniqueName: \"kubernetes.io/projected/891bad69-3c9e-4c8a-b5fb-526b4ce79ec5-kube-api-access-kbxzj\") pod \"glance-default-external-api-0\" (UID: \"891bad69-3c9e-4c8a-b5fb-526b4ce79ec5\") " pod="openstack/glance-default-external-api-0" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.577287 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/891bad69-3c9e-4c8a-b5fb-526b4ce79ec5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"891bad69-3c9e-4c8a-b5fb-526b4ce79ec5\") " pod="openstack/glance-default-external-api-0" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.593244 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"891bad69-3c9e-4c8a-b5fb-526b4ce79ec5\") " pod="openstack/glance-default-external-api-0" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.689688 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.720157 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.838714 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86b347d9-5a82-4e31-9ba3-1e5c82decb50-scripts\") pod \"86b347d9-5a82-4e31-9ba3-1e5c82decb50\" (UID: \"86b347d9-5a82-4e31-9ba3-1e5c82decb50\") " Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.839183 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86b347d9-5a82-4e31-9ba3-1e5c82decb50-config-data\") pod \"86b347d9-5a82-4e31-9ba3-1e5c82decb50\" (UID: \"86b347d9-5a82-4e31-9ba3-1e5c82decb50\") " Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.839222 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"86b347d9-5a82-4e31-9ba3-1e5c82decb50\" (UID: \"86b347d9-5a82-4e31-9ba3-1e5c82decb50\") " Feb 28 
13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.839245 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjmm8\" (UniqueName: \"kubernetes.io/projected/86b347d9-5a82-4e31-9ba3-1e5c82decb50-kube-api-access-kjmm8\") pod \"86b347d9-5a82-4e31-9ba3-1e5c82decb50\" (UID: \"86b347d9-5a82-4e31-9ba3-1e5c82decb50\") " Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.839340 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86b347d9-5a82-4e31-9ba3-1e5c82decb50-logs\") pod \"86b347d9-5a82-4e31-9ba3-1e5c82decb50\" (UID: \"86b347d9-5a82-4e31-9ba3-1e5c82decb50\") " Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.839422 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b347d9-5a82-4e31-9ba3-1e5c82decb50-combined-ca-bundle\") pod \"86b347d9-5a82-4e31-9ba3-1e5c82decb50\" (UID: \"86b347d9-5a82-4e31-9ba3-1e5c82decb50\") " Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.839502 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86b347d9-5a82-4e31-9ba3-1e5c82decb50-internal-tls-certs\") pod \"86b347d9-5a82-4e31-9ba3-1e5c82decb50\" (UID: \"86b347d9-5a82-4e31-9ba3-1e5c82decb50\") " Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.839619 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/86b347d9-5a82-4e31-9ba3-1e5c82decb50-httpd-run\") pod \"86b347d9-5a82-4e31-9ba3-1e5c82decb50\" (UID: \"86b347d9-5a82-4e31-9ba3-1e5c82decb50\") " Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.840838 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86b347d9-5a82-4e31-9ba3-1e5c82decb50-httpd-run" (OuterVolumeSpecName: 
"httpd-run") pod "86b347d9-5a82-4e31-9ba3-1e5c82decb50" (UID: "86b347d9-5a82-4e31-9ba3-1e5c82decb50"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.841232 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86b347d9-5a82-4e31-9ba3-1e5c82decb50-logs" (OuterVolumeSpecName: "logs") pod "86b347d9-5a82-4e31-9ba3-1e5c82decb50" (UID: "86b347d9-5a82-4e31-9ba3-1e5c82decb50"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.853021 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86b347d9-5a82-4e31-9ba3-1e5c82decb50-scripts" (OuterVolumeSpecName: "scripts") pod "86b347d9-5a82-4e31-9ba3-1e5c82decb50" (UID: "86b347d9-5a82-4e31-9ba3-1e5c82decb50"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.853409 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86b347d9-5a82-4e31-9ba3-1e5c82decb50-kube-api-access-kjmm8" (OuterVolumeSpecName: "kube-api-access-kjmm8") pod "86b347d9-5a82-4e31-9ba3-1e5c82decb50" (UID: "86b347d9-5a82-4e31-9ba3-1e5c82decb50"). InnerVolumeSpecName "kube-api-access-kjmm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.854080 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "86b347d9-5a82-4e31-9ba3-1e5c82decb50" (UID: "86b347d9-5a82-4e31-9ba3-1e5c82decb50"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.914613 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86b347d9-5a82-4e31-9ba3-1e5c82decb50-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86b347d9-5a82-4e31-9ba3-1e5c82decb50" (UID: "86b347d9-5a82-4e31-9ba3-1e5c82decb50"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.918335 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86b347d9-5a82-4e31-9ba3-1e5c82decb50-config-data" (OuterVolumeSpecName: "config-data") pod "86b347d9-5a82-4e31-9ba3-1e5c82decb50" (UID: "86b347d9-5a82-4e31-9ba3-1e5c82decb50"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.941669 4897 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/86b347d9-5a82-4e31-9ba3-1e5c82decb50-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.941704 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86b347d9-5a82-4e31-9ba3-1e5c82decb50-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.941713 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86b347d9-5a82-4e31-9ba3-1e5c82decb50-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.941749 4897 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Feb 28 13:39:16 crc kubenswrapper[4897]: 
I0228 13:39:16.941758 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjmm8\" (UniqueName: \"kubernetes.io/projected/86b347d9-5a82-4e31-9ba3-1e5c82decb50-kube-api-access-kjmm8\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.941769 4897 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86b347d9-5a82-4e31-9ba3-1e5c82decb50-logs\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.941776 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b347d9-5a82-4e31-9ba3-1e5c82decb50-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.946965 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86b347d9-5a82-4e31-9ba3-1e5c82decb50-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "86b347d9-5a82-4e31-9ba3-1e5c82decb50" (UID: "86b347d9-5a82-4e31-9ba3-1e5c82decb50"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:39:16 crc kubenswrapper[4897]: I0228 13:39:16.966487 4897 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.044062 4897 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.044093 4897 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86b347d9-5a82-4e31-9ba3-1e5c82decb50-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.048572 4897 generic.go:334] "Generic (PLEG): container finished" podID="86b347d9-5a82-4e31-9ba3-1e5c82decb50" containerID="899e7d75e84c6c9f8155f00fe9b148671cdfafe1a1d82293214800ecd06e3f52" exitCode=0 Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.048709 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.048725 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"86b347d9-5a82-4e31-9ba3-1e5c82decb50","Type":"ContainerDied","Data":"899e7d75e84c6c9f8155f00fe9b148671cdfafe1a1d82293214800ecd06e3f52"} Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.048859 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"86b347d9-5a82-4e31-9ba3-1e5c82decb50","Type":"ContainerDied","Data":"6e7de237873637cac15cbc6809afb0378b361a48df777955256a05be32a9ec9c"} Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.048940 4897 scope.go:117] "RemoveContainer" containerID="899e7d75e84c6c9f8155f00fe9b148671cdfafe1a1d82293214800ecd06e3f52" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.185714 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.215394 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.222850 4897 scope.go:117] "RemoveContainer" containerID="1555e57459a95ac8ccff278547b90a551e071926a04b87210994bce003d28338" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.224719 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 28 13:39:17 crc kubenswrapper[4897]: E0228 13:39:17.225105 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86b347d9-5a82-4e31-9ba3-1e5c82decb50" containerName="glance-log" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.225120 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="86b347d9-5a82-4e31-9ba3-1e5c82decb50" containerName="glance-log" Feb 28 13:39:17 crc kubenswrapper[4897]: E0228 13:39:17.225144 
4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86b347d9-5a82-4e31-9ba3-1e5c82decb50" containerName="glance-httpd" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.225150 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="86b347d9-5a82-4e31-9ba3-1e5c82decb50" containerName="glance-httpd" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.225346 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="86b347d9-5a82-4e31-9ba3-1e5c82decb50" containerName="glance-log" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.225372 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="86b347d9-5a82-4e31-9ba3-1e5c82decb50" containerName="glance-httpd" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.226372 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.233185 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.234374 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.235056 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.263563 4897 scope.go:117] "RemoveContainer" containerID="899e7d75e84c6c9f8155f00fe9b148671cdfafe1a1d82293214800ecd06e3f52" Feb 28 13:39:17 crc kubenswrapper[4897]: E0228 13:39:17.275035 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"899e7d75e84c6c9f8155f00fe9b148671cdfafe1a1d82293214800ecd06e3f52\": container with ID starting with 899e7d75e84c6c9f8155f00fe9b148671cdfafe1a1d82293214800ecd06e3f52 not found: ID does not exist" 
containerID="899e7d75e84c6c9f8155f00fe9b148671cdfafe1a1d82293214800ecd06e3f52" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.275090 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"899e7d75e84c6c9f8155f00fe9b148671cdfafe1a1d82293214800ecd06e3f52"} err="failed to get container status \"899e7d75e84c6c9f8155f00fe9b148671cdfafe1a1d82293214800ecd06e3f52\": rpc error: code = NotFound desc = could not find container \"899e7d75e84c6c9f8155f00fe9b148671cdfafe1a1d82293214800ecd06e3f52\": container with ID starting with 899e7d75e84c6c9f8155f00fe9b148671cdfafe1a1d82293214800ecd06e3f52 not found: ID does not exist" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.275114 4897 scope.go:117] "RemoveContainer" containerID="1555e57459a95ac8ccff278547b90a551e071926a04b87210994bce003d28338" Feb 28 13:39:17 crc kubenswrapper[4897]: E0228 13:39:17.280349 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1555e57459a95ac8ccff278547b90a551e071926a04b87210994bce003d28338\": container with ID starting with 1555e57459a95ac8ccff278547b90a551e071926a04b87210994bce003d28338 not found: ID does not exist" containerID="1555e57459a95ac8ccff278547b90a551e071926a04b87210994bce003d28338" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.280379 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1555e57459a95ac8ccff278547b90a551e071926a04b87210994bce003d28338"} err="failed to get container status \"1555e57459a95ac8ccff278547b90a551e071926a04b87210994bce003d28338\": rpc error: code = NotFound desc = could not find container \"1555e57459a95ac8ccff278547b90a551e071926a04b87210994bce003d28338\": container with ID starting with 1555e57459a95ac8ccff278547b90a551e071926a04b87210994bce003d28338 not found: ID does not exist" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.349455 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c9c2403-d54a-4278-b29c-e0533e360579-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5c9c2403-d54a-4278-b29c-e0533e360579\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.349522 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c9c2403-d54a-4278-b29c-e0533e360579-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5c9c2403-d54a-4278-b29c-e0533e360579\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.349567 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c9c2403-d54a-4278-b29c-e0533e360579-logs\") pod \"glance-default-internal-api-0\" (UID: \"5c9c2403-d54a-4278-b29c-e0533e360579\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.349638 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c9c2403-d54a-4278-b29c-e0533e360579-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5c9c2403-d54a-4278-b29c-e0533e360579\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.349679 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"5c9c2403-d54a-4278-b29c-e0533e360579\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.349708 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c9c2403-d54a-4278-b29c-e0533e360579-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5c9c2403-d54a-4278-b29c-e0533e360579\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.349727 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c9c2403-d54a-4278-b29c-e0533e360579-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5c9c2403-d54a-4278-b29c-e0533e360579\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.349757 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwq59\" (UniqueName: \"kubernetes.io/projected/5c9c2403-d54a-4278-b29c-e0533e360579-kube-api-access-zwq59\") pod \"glance-default-internal-api-0\" (UID: \"5c9c2403-d54a-4278-b29c-e0533e360579\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.373083 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.451992 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c9c2403-d54a-4278-b29c-e0533e360579-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5c9c2403-d54a-4278-b29c-e0533e360579\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.452059 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"5c9c2403-d54a-4278-b29c-e0533e360579\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.452093 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c9c2403-d54a-4278-b29c-e0533e360579-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5c9c2403-d54a-4278-b29c-e0533e360579\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.452112 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c9c2403-d54a-4278-b29c-e0533e360579-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5c9c2403-d54a-4278-b29c-e0533e360579\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.452140 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwq59\" (UniqueName: \"kubernetes.io/projected/5c9c2403-d54a-4278-b29c-e0533e360579-kube-api-access-zwq59\") pod \"glance-default-internal-api-0\" (UID: \"5c9c2403-d54a-4278-b29c-e0533e360579\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.452157 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c9c2403-d54a-4278-b29c-e0533e360579-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5c9c2403-d54a-4278-b29c-e0533e360579\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.452189 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c9c2403-d54a-4278-b29c-e0533e360579-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"5c9c2403-d54a-4278-b29c-e0533e360579\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.452225 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c9c2403-d54a-4278-b29c-e0533e360579-logs\") pod \"glance-default-internal-api-0\" (UID: \"5c9c2403-d54a-4278-b29c-e0533e360579\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.452734 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c9c2403-d54a-4278-b29c-e0533e360579-logs\") pod \"glance-default-internal-api-0\" (UID: \"5c9c2403-d54a-4278-b29c-e0533e360579\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.458396 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"5c9c2403-d54a-4278-b29c-e0533e360579\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.462200 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c9c2403-d54a-4278-b29c-e0533e360579-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5c9c2403-d54a-4278-b29c-e0533e360579\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.462811 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c9c2403-d54a-4278-b29c-e0533e360579-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5c9c2403-d54a-4278-b29c-e0533e360579\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:39:17 crc 
kubenswrapper[4897]: I0228 13:39:17.465677 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c9c2403-d54a-4278-b29c-e0533e360579-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5c9c2403-d54a-4278-b29c-e0533e360579\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.477957 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c9c2403-d54a-4278-b29c-e0533e360579-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5c9c2403-d54a-4278-b29c-e0533e360579\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.489993 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwq59\" (UniqueName: \"kubernetes.io/projected/5c9c2403-d54a-4278-b29c-e0533e360579-kube-api-access-zwq59\") pod \"glance-default-internal-api-0\" (UID: \"5c9c2403-d54a-4278-b29c-e0533e360579\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.496197 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c9c2403-d54a-4278-b29c-e0533e360579-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5c9c2403-d54a-4278-b29c-e0533e360579\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.513782 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"5c9c2403-d54a-4278-b29c-e0533e360579\") " pod="openstack/glance-default-internal-api-0" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.568005 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.606123 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-4v8r2" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.779889 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlks6\" (UniqueName: \"kubernetes.io/projected/550d213b-7a35-4053-8364-e78d03f794ca-kube-api-access-dlks6\") pod \"550d213b-7a35-4053-8364-e78d03f794ca\" (UID: \"550d213b-7a35-4053-8364-e78d03f794ca\") " Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.780515 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/550d213b-7a35-4053-8364-e78d03f794ca-operator-scripts\") pod \"550d213b-7a35-4053-8364-e78d03f794ca\" (UID: \"550d213b-7a35-4053-8364-e78d03f794ca\") " Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.781153 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/550d213b-7a35-4053-8364-e78d03f794ca-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "550d213b-7a35-4053-8364-e78d03f794ca" (UID: "550d213b-7a35-4053-8364-e78d03f794ca"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.782006 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/550d213b-7a35-4053-8364-e78d03f794ca-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.786441 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/550d213b-7a35-4053-8364-e78d03f794ca-kube-api-access-dlks6" (OuterVolumeSpecName: "kube-api-access-dlks6") pod "550d213b-7a35-4053-8364-e78d03f794ca" (UID: "550d213b-7a35-4053-8364-e78d03f794ca"). InnerVolumeSpecName "kube-api-access-dlks6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.802970 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-c071-account-create-update-4cxfk" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.883803 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlks6\" (UniqueName: \"kubernetes.io/projected/550d213b-7a35-4053-8364-e78d03f794ca-kube-api-access-dlks6\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.919524 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-lfprr" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.923229 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d87b-account-create-update-znc9q" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.927565 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-f8445" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.946475 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-297f-account-create-update-2g97b" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.985393 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fl5dh\" (UniqueName: \"kubernetes.io/projected/f36fdddb-f718-4f7a-bc78-1a5a543fdefe-kube-api-access-fl5dh\") pod \"f36fdddb-f718-4f7a-bc78-1a5a543fdefe\" (UID: \"f36fdddb-f718-4f7a-bc78-1a5a543fdefe\") " Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.985449 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f36fdddb-f718-4f7a-bc78-1a5a543fdefe-operator-scripts\") pod \"f36fdddb-f718-4f7a-bc78-1a5a543fdefe\" (UID: \"f36fdddb-f718-4f7a-bc78-1a5a543fdefe\") " Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.986182 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f36fdddb-f718-4f7a-bc78-1a5a543fdefe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f36fdddb-f718-4f7a-bc78-1a5a543fdefe" (UID: "f36fdddb-f718-4f7a-bc78-1a5a543fdefe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:39:17 crc kubenswrapper[4897]: I0228 13:39:17.992533 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f36fdddb-f718-4f7a-bc78-1a5a543fdefe-kube-api-access-fl5dh" (OuterVolumeSpecName: "kube-api-access-fl5dh") pod "f36fdddb-f718-4f7a-bc78-1a5a543fdefe" (UID: "f36fdddb-f718-4f7a-bc78-1a5a543fdefe"). InnerVolumeSpecName "kube-api-access-fl5dh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.087812 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff4a0d0d-f8d7-42b2-983a-44af7086a43d-operator-scripts\") pod \"ff4a0d0d-f8d7-42b2-983a-44af7086a43d\" (UID: \"ff4a0d0d-f8d7-42b2-983a-44af7086a43d\") " Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.087939 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50f458e2-efe0-49ba-8fa3-135d3673b9a7-operator-scripts\") pod \"50f458e2-efe0-49ba-8fa3-135d3673b9a7\" (UID: \"50f458e2-efe0-49ba-8fa3-135d3673b9a7\") " Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.088041 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqlp5\" (UniqueName: \"kubernetes.io/projected/50f458e2-efe0-49ba-8fa3-135d3673b9a7-kube-api-access-sqlp5\") pod \"50f458e2-efe0-49ba-8fa3-135d3673b9a7\" (UID: \"50f458e2-efe0-49ba-8fa3-135d3673b9a7\") " Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.088075 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjb8j\" (UniqueName: \"kubernetes.io/projected/3e3334c1-0b6c-4ea0-8c8c-1c5652d64af9-kube-api-access-jjb8j\") pod \"3e3334c1-0b6c-4ea0-8c8c-1c5652d64af9\" (UID: \"3e3334c1-0b6c-4ea0-8c8c-1c5652d64af9\") " Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.088094 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c4a08db-9a69-41de-b77c-5ebeb255cd29-operator-scripts\") pod \"8c4a08db-9a69-41de-b77c-5ebeb255cd29\" (UID: \"8c4a08db-9a69-41de-b77c-5ebeb255cd29\") " Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.088141 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e3334c1-0b6c-4ea0-8c8c-1c5652d64af9-operator-scripts\") pod \"3e3334c1-0b6c-4ea0-8c8c-1c5652d64af9\" (UID: \"3e3334c1-0b6c-4ea0-8c8c-1c5652d64af9\") " Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.088252 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtw2h\" (UniqueName: \"kubernetes.io/projected/ff4a0d0d-f8d7-42b2-983a-44af7086a43d-kube-api-access-vtw2h\") pod \"ff4a0d0d-f8d7-42b2-983a-44af7086a43d\" (UID: \"ff4a0d0d-f8d7-42b2-983a-44af7086a43d\") " Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.088438 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2cnx\" (UniqueName: \"kubernetes.io/projected/8c4a08db-9a69-41de-b77c-5ebeb255cd29-kube-api-access-x2cnx\") pod \"8c4a08db-9a69-41de-b77c-5ebeb255cd29\" (UID: \"8c4a08db-9a69-41de-b77c-5ebeb255cd29\") " Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.088961 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f36fdddb-f718-4f7a-bc78-1a5a543fdefe-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.088981 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fl5dh\" (UniqueName: \"kubernetes.io/projected/f36fdddb-f718-4f7a-bc78-1a5a543fdefe-kube-api-access-fl5dh\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.091150 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e3334c1-0b6c-4ea0-8c8c-1c5652d64af9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3e3334c1-0b6c-4ea0-8c8c-1c5652d64af9" (UID: "3e3334c1-0b6c-4ea0-8c8c-1c5652d64af9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.091647 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50f458e2-efe0-49ba-8fa3-135d3673b9a7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "50f458e2-efe0-49ba-8fa3-135d3673b9a7" (UID: "50f458e2-efe0-49ba-8fa3-135d3673b9a7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.091683 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff4a0d0d-f8d7-42b2-983a-44af7086a43d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ff4a0d0d-f8d7-42b2-983a-44af7086a43d" (UID: "ff4a0d0d-f8d7-42b2-983a-44af7086a43d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.095688 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c4a08db-9a69-41de-b77c-5ebeb255cd29-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8c4a08db-9a69-41de-b77c-5ebeb255cd29" (UID: "8c4a08db-9a69-41de-b77c-5ebeb255cd29"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.096145 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff4a0d0d-f8d7-42b2-983a-44af7086a43d-kube-api-access-vtw2h" (OuterVolumeSpecName: "kube-api-access-vtw2h") pod "ff4a0d0d-f8d7-42b2-983a-44af7086a43d" (UID: "ff4a0d0d-f8d7-42b2-983a-44af7086a43d"). InnerVolumeSpecName "kube-api-access-vtw2h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.101041 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c4a08db-9a69-41de-b77c-5ebeb255cd29-kube-api-access-x2cnx" (OuterVolumeSpecName: "kube-api-access-x2cnx") pod "8c4a08db-9a69-41de-b77c-5ebeb255cd29" (UID: "8c4a08db-9a69-41de-b77c-5ebeb255cd29"). InnerVolumeSpecName "kube-api-access-x2cnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.107517 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50f458e2-efe0-49ba-8fa3-135d3673b9a7-kube-api-access-sqlp5" (OuterVolumeSpecName: "kube-api-access-sqlp5") pod "50f458e2-efe0-49ba-8fa3-135d3673b9a7" (UID: "50f458e2-efe0-49ba-8fa3-135d3673b9a7"). InnerVolumeSpecName "kube-api-access-sqlp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.117526 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-4v8r2" event={"ID":"550d213b-7a35-4053-8364-e78d03f794ca","Type":"ContainerDied","Data":"5b6ef255dd08351452a04c10629f3b9699adaab916d8fb70fbadbc82841e1080"} Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.117596 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b6ef255dd08351452a04c10629f3b9699adaab916d8fb70fbadbc82841e1080" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.117712 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-4v8r2" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.116282 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e3334c1-0b6c-4ea0-8c8c-1c5652d64af9-kube-api-access-jjb8j" (OuterVolumeSpecName: "kube-api-access-jjb8j") pod "3e3334c1-0b6c-4ea0-8c8c-1c5652d64af9" (UID: "3e3334c1-0b6c-4ea0-8c8c-1c5652d64af9"). InnerVolumeSpecName "kube-api-access-jjb8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.126411 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-lfprr" event={"ID":"3e3334c1-0b6c-4ea0-8c8c-1c5652d64af9","Type":"ContainerDied","Data":"f11f2a34787ba070f1bbc7a68cbc846b0b21fcafdb03a30a9a418d559afbc523"} Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.126457 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f11f2a34787ba070f1bbc7a68cbc846b0b21fcafdb03a30a9a418d559afbc523" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.126536 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-lfprr" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.150260 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d87b-account-create-update-znc9q" event={"ID":"ff4a0d0d-f8d7-42b2-983a-44af7086a43d","Type":"ContainerDied","Data":"07b6d7bde6d4545fc01c236995d34782e69cc069d589eb9006014335d8c56bcb"} Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.150331 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07b6d7bde6d4545fc01c236995d34782e69cc069d589eb9006014335d8c56bcb" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.150453 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-d87b-account-create-update-znc9q" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.171819 4897 generic.go:334] "Generic (PLEG): container finished" podID="6102738c-6c77-48c6-87e1-67853cf8ce43" containerID="ac909c528d6a37a6661858d1af31fcfccc18f72e2110aae3aef663642f29b3a7" exitCode=137 Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.171873 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6fb67c45d-s75qr" event={"ID":"6102738c-6c77-48c6-87e1-67853cf8ce43","Type":"ContainerDied","Data":"ac909c528d6a37a6661858d1af31fcfccc18f72e2110aae3aef663642f29b3a7"} Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.175139 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-c071-account-create-update-4cxfk" event={"ID":"f36fdddb-f718-4f7a-bc78-1a5a543fdefe","Type":"ContainerDied","Data":"ab95809703dfd50dbb27111900edb93801af294a0019968fe27aaa20816118fd"} Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.175165 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab95809703dfd50dbb27111900edb93801af294a0019968fe27aaa20816118fd" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.175228 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-c071-account-create-update-4cxfk" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.180511 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"891bad69-3c9e-4c8a-b5fb-526b4ce79ec5","Type":"ContainerStarted","Data":"af3bc581eac58d64016a0d23f382c76f1b07eb8c63ab0443feb940fff7920203"} Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.183856 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-297f-account-create-update-2g97b" event={"ID":"50f458e2-efe0-49ba-8fa3-135d3673b9a7","Type":"ContainerDied","Data":"a2a9ef9cbe7a80a819f302b39a8aefe9057bd8667ac3fc6bfd27b0bb0cf35815"} Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.183885 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2a9ef9cbe7a80a819f302b39a8aefe9057bd8667ac3fc6bfd27b0bb0cf35815" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.183947 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-297f-account-create-update-2g97b" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.187633 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-f8445" event={"ID":"8c4a08db-9a69-41de-b77c-5ebeb255cd29","Type":"ContainerDied","Data":"5b356112ada1a981ffb82d6d328980f673add9f72bb47ebbbdacb4f855293980"} Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.187673 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b356112ada1a981ffb82d6d328980f673add9f72bb47ebbbdacb4f855293980" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.187761 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-f8445" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.191049 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2cnx\" (UniqueName: \"kubernetes.io/projected/8c4a08db-9a69-41de-b77c-5ebeb255cd29-kube-api-access-x2cnx\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.191078 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff4a0d0d-f8d7-42b2-983a-44af7086a43d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.191089 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50f458e2-efe0-49ba-8fa3-135d3673b9a7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.191098 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqlp5\" (UniqueName: \"kubernetes.io/projected/50f458e2-efe0-49ba-8fa3-135d3673b9a7-kube-api-access-sqlp5\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.191107 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjb8j\" (UniqueName: \"kubernetes.io/projected/3e3334c1-0b6c-4ea0-8c8c-1c5652d64af9-kube-api-access-jjb8j\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.191115 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c4a08db-9a69-41de-b77c-5ebeb255cd29-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.191124 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e3334c1-0b6c-4ea0-8c8c-1c5652d64af9-operator-scripts\") on node \"crc\" DevicePath \"\"" 
Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.191132 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtw2h\" (UniqueName: \"kubernetes.io/projected/ff4a0d0d-f8d7-42b2-983a-44af7086a43d-kube-api-access-vtw2h\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.340201 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.471528 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86b347d9-5a82-4e31-9ba3-1e5c82decb50" path="/var/lib/kubelet/pods/86b347d9-5a82-4e31-9ba3-1e5c82decb50/volumes" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.481890 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6fb67c45d-s75qr" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.603610 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kq2dn\" (UniqueName: \"kubernetes.io/projected/6102738c-6c77-48c6-87e1-67853cf8ce43-kube-api-access-kq2dn\") pod \"6102738c-6c77-48c6-87e1-67853cf8ce43\" (UID: \"6102738c-6c77-48c6-87e1-67853cf8ce43\") " Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.603695 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6102738c-6c77-48c6-87e1-67853cf8ce43-config-data\") pod \"6102738c-6c77-48c6-87e1-67853cf8ce43\" (UID: \"6102738c-6c77-48c6-87e1-67853cf8ce43\") " Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.603812 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6102738c-6c77-48c6-87e1-67853cf8ce43-combined-ca-bundle\") pod \"6102738c-6c77-48c6-87e1-67853cf8ce43\" (UID: \"6102738c-6c77-48c6-87e1-67853cf8ce43\") " Feb 28 13:39:18 crc kubenswrapper[4897]: 
I0228 13:39:18.604051 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6102738c-6c77-48c6-87e1-67853cf8ce43-horizon-secret-key\") pod \"6102738c-6c77-48c6-87e1-67853cf8ce43\" (UID: \"6102738c-6c77-48c6-87e1-67853cf8ce43\") " Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.604911 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6102738c-6c77-48c6-87e1-67853cf8ce43-logs\") pod \"6102738c-6c77-48c6-87e1-67853cf8ce43\" (UID: \"6102738c-6c77-48c6-87e1-67853cf8ce43\") " Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.604980 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6102738c-6c77-48c6-87e1-67853cf8ce43-horizon-tls-certs\") pod \"6102738c-6c77-48c6-87e1-67853cf8ce43\" (UID: \"6102738c-6c77-48c6-87e1-67853cf8ce43\") " Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.605355 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6102738c-6c77-48c6-87e1-67853cf8ce43-scripts\") pod \"6102738c-6c77-48c6-87e1-67853cf8ce43\" (UID: \"6102738c-6c77-48c6-87e1-67853cf8ce43\") " Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.605529 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6102738c-6c77-48c6-87e1-67853cf8ce43-logs" (OuterVolumeSpecName: "logs") pod "6102738c-6c77-48c6-87e1-67853cf8ce43" (UID: "6102738c-6c77-48c6-87e1-67853cf8ce43"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.606753 4897 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6102738c-6c77-48c6-87e1-67853cf8ce43-logs\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.608081 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6102738c-6c77-48c6-87e1-67853cf8ce43-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "6102738c-6c77-48c6-87e1-67853cf8ce43" (UID: "6102738c-6c77-48c6-87e1-67853cf8ce43"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.609788 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6102738c-6c77-48c6-87e1-67853cf8ce43-kube-api-access-kq2dn" (OuterVolumeSpecName: "kube-api-access-kq2dn") pod "6102738c-6c77-48c6-87e1-67853cf8ce43" (UID: "6102738c-6c77-48c6-87e1-67853cf8ce43"). InnerVolumeSpecName "kube-api-access-kq2dn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.654824 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6102738c-6c77-48c6-87e1-67853cf8ce43-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6102738c-6c77-48c6-87e1-67853cf8ce43" (UID: "6102738c-6c77-48c6-87e1-67853cf8ce43"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.655159 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6102738c-6c77-48c6-87e1-67853cf8ce43-scripts" (OuterVolumeSpecName: "scripts") pod "6102738c-6c77-48c6-87e1-67853cf8ce43" (UID: "6102738c-6c77-48c6-87e1-67853cf8ce43"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.655954 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6102738c-6c77-48c6-87e1-67853cf8ce43-config-data" (OuterVolumeSpecName: "config-data") pod "6102738c-6c77-48c6-87e1-67853cf8ce43" (UID: "6102738c-6c77-48c6-87e1-67853cf8ce43"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.709274 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kq2dn\" (UniqueName: \"kubernetes.io/projected/6102738c-6c77-48c6-87e1-67853cf8ce43-kube-api-access-kq2dn\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.709341 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6102738c-6c77-48c6-87e1-67853cf8ce43-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.709356 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6102738c-6c77-48c6-87e1-67853cf8ce43-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.709367 4897 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6102738c-6c77-48c6-87e1-67853cf8ce43-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.709378 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6102738c-6c77-48c6-87e1-67853cf8ce43-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.714665 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/6102738c-6c77-48c6-87e1-67853cf8ce43-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "6102738c-6c77-48c6-87e1-67853cf8ce43" (UID: "6102738c-6c77-48c6-87e1-67853cf8ce43"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.810843 4897 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6102738c-6c77-48c6-87e1-67853cf8ce43-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:18 crc kubenswrapper[4897]: I0228 13:39:18.988746 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-59b7cd74f9-xphhh" Feb 28 13:39:19 crc kubenswrapper[4897]: I0228 13:39:19.092715 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-844df98d6-6ncv9"] Feb 28 13:39:19 crc kubenswrapper[4897]: I0228 13:39:19.093073 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-844df98d6-6ncv9" podUID="c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18" containerName="neutron-api" containerID="cri-o://c04e0114733fac68182304cf39bae4de83471321b325d05b9b29415deac0c99a" gracePeriod=30 Feb 28 13:39:19 crc kubenswrapper[4897]: I0228 13:39:19.093494 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-844df98d6-6ncv9" podUID="c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18" containerName="neutron-httpd" containerID="cri-o://0dc24af0caa37581cc0a0f08f82404e8d8c243b3501baf511950cf7edd705dba" gracePeriod=30 Feb 28 13:39:19 crc kubenswrapper[4897]: I0228 13:39:19.206841 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6fb67c45d-s75qr" event={"ID":"6102738c-6c77-48c6-87e1-67853cf8ce43","Type":"ContainerDied","Data":"e9366943af5777da68ff93613c13d1749e4fbc9d264075295e943ba0f739600f"} Feb 28 13:39:19 crc kubenswrapper[4897]: I0228 13:39:19.207085 
4897 scope.go:117] "RemoveContainer" containerID="2fc3fb7a660268704953fa4bd24b93db1256492df8f5818ec8132b76f2ceb191" Feb 28 13:39:19 crc kubenswrapper[4897]: I0228 13:39:19.207226 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6fb67c45d-s75qr" Feb 28 13:39:19 crc kubenswrapper[4897]: I0228 13:39:19.227179 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"891bad69-3c9e-4c8a-b5fb-526b4ce79ec5","Type":"ContainerStarted","Data":"023674f1c869f0b1394a6ab477e5acd8ac9a5425b55d00973c9f5cdb29e8a383"} Feb 28 13:39:19 crc kubenswrapper[4897]: I0228 13:39:19.239457 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5c9c2403-d54a-4278-b29c-e0533e360579","Type":"ContainerStarted","Data":"84083edc043db57b8ca4910368b538b1e286cb91d68ee1ab757bf3f8ec16197e"} Feb 28 13:39:19 crc kubenswrapper[4897]: I0228 13:39:19.329471 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6fb67c45d-s75qr"] Feb 28 13:39:19 crc kubenswrapper[4897]: I0228 13:39:19.353133 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6fb67c45d-s75qr"] Feb 28 13:39:19 crc kubenswrapper[4897]: I0228 13:39:19.469454 4897 scope.go:117] "RemoveContainer" containerID="ac909c528d6a37a6661858d1af31fcfccc18f72e2110aae3aef663642f29b3a7" Feb 28 13:39:20 crc kubenswrapper[4897]: I0228 13:39:20.251501 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5c9c2403-d54a-4278-b29c-e0533e360579","Type":"ContainerStarted","Data":"6bc859e293da98de9622694d5a19a22b36eab6419747d064ad60753c117c4e04"} Feb 28 13:39:20 crc kubenswrapper[4897]: I0228 13:39:20.252293 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"5c9c2403-d54a-4278-b29c-e0533e360579","Type":"ContainerStarted","Data":"75f89f7db6a4135cbbfe7eeec5fd47a6c8ead74d631c5a7aedc060d0cd4142a2"} Feb 28 13:39:20 crc kubenswrapper[4897]: I0228 13:39:20.255645 4897 generic.go:334] "Generic (PLEG): container finished" podID="c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18" containerID="0dc24af0caa37581cc0a0f08f82404e8d8c243b3501baf511950cf7edd705dba" exitCode=0 Feb 28 13:39:20 crc kubenswrapper[4897]: I0228 13:39:20.255707 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-844df98d6-6ncv9" event={"ID":"c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18","Type":"ContainerDied","Data":"0dc24af0caa37581cc0a0f08f82404e8d8c243b3501baf511950cf7edd705dba"} Feb 28 13:39:20 crc kubenswrapper[4897]: I0228 13:39:20.262493 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"891bad69-3c9e-4c8a-b5fb-526b4ce79ec5","Type":"ContainerStarted","Data":"dc5deeb163cf42f60ff14888970ce6065d529c1a6874e3b6dca45beb85c78747"} Feb 28 13:39:20 crc kubenswrapper[4897]: I0228 13:39:20.273054 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.273035126 podStartE2EDuration="3.273035126s" podCreationTimestamp="2026-02-28 13:39:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:39:20.270195543 +0000 UTC m=+1374.512516200" watchObservedRunningTime="2026-02-28 13:39:20.273035126 +0000 UTC m=+1374.515355783" Feb 28 13:39:20 crc kubenswrapper[4897]: I0228 13:39:20.303345 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.303327704 podStartE2EDuration="4.303327704s" podCreationTimestamp="2026-02-28 13:39:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-02-28 13:39:20.29421718 +0000 UTC m=+1374.536537837" watchObservedRunningTime="2026-02-28 13:39:20.303327704 +0000 UTC m=+1374.545648361" Feb 28 13:39:20 crc kubenswrapper[4897]: I0228 13:39:20.467713 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6102738c-6c77-48c6-87e1-67853cf8ce43" path="/var/lib/kubelet/pods/6102738c-6c77-48c6-87e1-67853cf8ce43/volumes" Feb 28 13:39:21 crc kubenswrapper[4897]: I0228 13:39:21.102613 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mmkk5"] Feb 28 13:39:21 crc kubenswrapper[4897]: E0228 13:39:21.103604 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6102738c-6c77-48c6-87e1-67853cf8ce43" containerName="horizon" Feb 28 13:39:21 crc kubenswrapper[4897]: I0228 13:39:21.103625 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="6102738c-6c77-48c6-87e1-67853cf8ce43" containerName="horizon" Feb 28 13:39:21 crc kubenswrapper[4897]: E0228 13:39:21.103639 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f36fdddb-f718-4f7a-bc78-1a5a543fdefe" containerName="mariadb-account-create-update" Feb 28 13:39:21 crc kubenswrapper[4897]: I0228 13:39:21.103650 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f36fdddb-f718-4f7a-bc78-1a5a543fdefe" containerName="mariadb-account-create-update" Feb 28 13:39:21 crc kubenswrapper[4897]: E0228 13:39:21.103675 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c4a08db-9a69-41de-b77c-5ebeb255cd29" containerName="mariadb-database-create" Feb 28 13:39:21 crc kubenswrapper[4897]: I0228 13:39:21.103682 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c4a08db-9a69-41de-b77c-5ebeb255cd29" containerName="mariadb-database-create" Feb 28 13:39:21 crc kubenswrapper[4897]: E0228 13:39:21.103693 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff4a0d0d-f8d7-42b2-983a-44af7086a43d" 
containerName="mariadb-account-create-update" Feb 28 13:39:21 crc kubenswrapper[4897]: I0228 13:39:21.103699 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff4a0d0d-f8d7-42b2-983a-44af7086a43d" containerName="mariadb-account-create-update" Feb 28 13:39:21 crc kubenswrapper[4897]: E0228 13:39:21.103714 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="550d213b-7a35-4053-8364-e78d03f794ca" containerName="mariadb-database-create" Feb 28 13:39:21 crc kubenswrapper[4897]: I0228 13:39:21.103721 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="550d213b-7a35-4053-8364-e78d03f794ca" containerName="mariadb-database-create" Feb 28 13:39:21 crc kubenswrapper[4897]: E0228 13:39:21.103731 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50f458e2-efe0-49ba-8fa3-135d3673b9a7" containerName="mariadb-account-create-update" Feb 28 13:39:21 crc kubenswrapper[4897]: I0228 13:39:21.103738 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="50f458e2-efe0-49ba-8fa3-135d3673b9a7" containerName="mariadb-account-create-update" Feb 28 13:39:21 crc kubenswrapper[4897]: E0228 13:39:21.103763 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e3334c1-0b6c-4ea0-8c8c-1c5652d64af9" containerName="mariadb-database-create" Feb 28 13:39:21 crc kubenswrapper[4897]: I0228 13:39:21.103770 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e3334c1-0b6c-4ea0-8c8c-1c5652d64af9" containerName="mariadb-database-create" Feb 28 13:39:21 crc kubenswrapper[4897]: E0228 13:39:21.103781 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6102738c-6c77-48c6-87e1-67853cf8ce43" containerName="horizon-log" Feb 28 13:39:21 crc kubenswrapper[4897]: I0228 13:39:21.103788 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="6102738c-6c77-48c6-87e1-67853cf8ce43" containerName="horizon-log" Feb 28 13:39:21 crc kubenswrapper[4897]: I0228 13:39:21.104024 4897 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="3e3334c1-0b6c-4ea0-8c8c-1c5652d64af9" containerName="mariadb-database-create" Feb 28 13:39:21 crc kubenswrapper[4897]: I0228 13:39:21.104043 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="550d213b-7a35-4053-8364-e78d03f794ca" containerName="mariadb-database-create" Feb 28 13:39:21 crc kubenswrapper[4897]: I0228 13:39:21.104065 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="6102738c-6c77-48c6-87e1-67853cf8ce43" containerName="horizon" Feb 28 13:39:21 crc kubenswrapper[4897]: I0228 13:39:21.104074 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f36fdddb-f718-4f7a-bc78-1a5a543fdefe" containerName="mariadb-account-create-update" Feb 28 13:39:21 crc kubenswrapper[4897]: I0228 13:39:21.104085 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c4a08db-9a69-41de-b77c-5ebeb255cd29" containerName="mariadb-database-create" Feb 28 13:39:21 crc kubenswrapper[4897]: I0228 13:39:21.104098 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff4a0d0d-f8d7-42b2-983a-44af7086a43d" containerName="mariadb-account-create-update" Feb 28 13:39:21 crc kubenswrapper[4897]: I0228 13:39:21.104113 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="50f458e2-efe0-49ba-8fa3-135d3673b9a7" containerName="mariadb-account-create-update" Feb 28 13:39:21 crc kubenswrapper[4897]: I0228 13:39:21.104130 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="6102738c-6c77-48c6-87e1-67853cf8ce43" containerName="horizon-log" Feb 28 13:39:21 crc kubenswrapper[4897]: I0228 13:39:21.104950 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-mmkk5" Feb 28 13:39:21 crc kubenswrapper[4897]: I0228 13:39:21.107403 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 28 13:39:21 crc kubenswrapper[4897]: I0228 13:39:21.108015 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 28 13:39:21 crc kubenswrapper[4897]: I0228 13:39:21.108168 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-jf7vb" Feb 28 13:39:21 crc kubenswrapper[4897]: I0228 13:39:21.125583 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mmkk5"] Feb 28 13:39:21 crc kubenswrapper[4897]: I0228 13:39:21.271179 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07b3f8ac-cae7-400a-bdf5-5768e5b74f79-scripts\") pod \"nova-cell0-conductor-db-sync-mmkk5\" (UID: \"07b3f8ac-cae7-400a-bdf5-5768e5b74f79\") " pod="openstack/nova-cell0-conductor-db-sync-mmkk5" Feb 28 13:39:21 crc kubenswrapper[4897]: I0228 13:39:21.271333 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b3f8ac-cae7-400a-bdf5-5768e5b74f79-config-data\") pod \"nova-cell0-conductor-db-sync-mmkk5\" (UID: \"07b3f8ac-cae7-400a-bdf5-5768e5b74f79\") " pod="openstack/nova-cell0-conductor-db-sync-mmkk5" Feb 28 13:39:21 crc kubenswrapper[4897]: I0228 13:39:21.271527 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gpsl\" (UniqueName: \"kubernetes.io/projected/07b3f8ac-cae7-400a-bdf5-5768e5b74f79-kube-api-access-4gpsl\") pod \"nova-cell0-conductor-db-sync-mmkk5\" (UID: \"07b3f8ac-cae7-400a-bdf5-5768e5b74f79\") " 
pod="openstack/nova-cell0-conductor-db-sync-mmkk5" Feb 28 13:39:21 crc kubenswrapper[4897]: I0228 13:39:21.271588 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b3f8ac-cae7-400a-bdf5-5768e5b74f79-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-mmkk5\" (UID: \"07b3f8ac-cae7-400a-bdf5-5768e5b74f79\") " pod="openstack/nova-cell0-conductor-db-sync-mmkk5" Feb 28 13:39:21 crc kubenswrapper[4897]: I0228 13:39:21.285224 4897 generic.go:334] "Generic (PLEG): container finished" podID="c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18" containerID="c04e0114733fac68182304cf39bae4de83471321b325d05b9b29415deac0c99a" exitCode=0 Feb 28 13:39:21 crc kubenswrapper[4897]: I0228 13:39:21.285334 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-844df98d6-6ncv9" event={"ID":"c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18","Type":"ContainerDied","Data":"c04e0114733fac68182304cf39bae4de83471321b325d05b9b29415deac0c99a"} Feb 28 13:39:21 crc kubenswrapper[4897]: I0228 13:39:21.373160 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07b3f8ac-cae7-400a-bdf5-5768e5b74f79-scripts\") pod \"nova-cell0-conductor-db-sync-mmkk5\" (UID: \"07b3f8ac-cae7-400a-bdf5-5768e5b74f79\") " pod="openstack/nova-cell0-conductor-db-sync-mmkk5" Feb 28 13:39:21 crc kubenswrapper[4897]: I0228 13:39:21.373205 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b3f8ac-cae7-400a-bdf5-5768e5b74f79-config-data\") pod \"nova-cell0-conductor-db-sync-mmkk5\" (UID: \"07b3f8ac-cae7-400a-bdf5-5768e5b74f79\") " pod="openstack/nova-cell0-conductor-db-sync-mmkk5" Feb 28 13:39:21 crc kubenswrapper[4897]: I0228 13:39:21.373245 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gpsl\" (UniqueName: 
\"kubernetes.io/projected/07b3f8ac-cae7-400a-bdf5-5768e5b74f79-kube-api-access-4gpsl\") pod \"nova-cell0-conductor-db-sync-mmkk5\" (UID: \"07b3f8ac-cae7-400a-bdf5-5768e5b74f79\") " pod="openstack/nova-cell0-conductor-db-sync-mmkk5" Feb 28 13:39:21 crc kubenswrapper[4897]: I0228 13:39:21.373266 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b3f8ac-cae7-400a-bdf5-5768e5b74f79-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-mmkk5\" (UID: \"07b3f8ac-cae7-400a-bdf5-5768e5b74f79\") " pod="openstack/nova-cell0-conductor-db-sync-mmkk5" Feb 28 13:39:21 crc kubenswrapper[4897]: I0228 13:39:21.385336 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07b3f8ac-cae7-400a-bdf5-5768e5b74f79-scripts\") pod \"nova-cell0-conductor-db-sync-mmkk5\" (UID: \"07b3f8ac-cae7-400a-bdf5-5768e5b74f79\") " pod="openstack/nova-cell0-conductor-db-sync-mmkk5" Feb 28 13:39:21 crc kubenswrapper[4897]: I0228 13:39:21.400926 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b3f8ac-cae7-400a-bdf5-5768e5b74f79-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-mmkk5\" (UID: \"07b3f8ac-cae7-400a-bdf5-5768e5b74f79\") " pod="openstack/nova-cell0-conductor-db-sync-mmkk5" Feb 28 13:39:21 crc kubenswrapper[4897]: I0228 13:39:21.402055 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b3f8ac-cae7-400a-bdf5-5768e5b74f79-config-data\") pod \"nova-cell0-conductor-db-sync-mmkk5\" (UID: \"07b3f8ac-cae7-400a-bdf5-5768e5b74f79\") " pod="openstack/nova-cell0-conductor-db-sync-mmkk5" Feb 28 13:39:21 crc kubenswrapper[4897]: I0228 13:39:21.402402 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gpsl\" (UniqueName: 
\"kubernetes.io/projected/07b3f8ac-cae7-400a-bdf5-5768e5b74f79-kube-api-access-4gpsl\") pod \"nova-cell0-conductor-db-sync-mmkk5\" (UID: \"07b3f8ac-cae7-400a-bdf5-5768e5b74f79\") " pod="openstack/nova-cell0-conductor-db-sync-mmkk5" Feb 28 13:39:21 crc kubenswrapper[4897]: I0228 13:39:21.428982 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-mmkk5" Feb 28 13:39:22 crc kubenswrapper[4897]: I0228 13:39:22.146921 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7765f74f9-bjr4m" Feb 28 13:39:22 crc kubenswrapper[4897]: I0228 13:39:22.152847 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7765f74f9-bjr4m" Feb 28 13:39:22 crc kubenswrapper[4897]: I0228 13:39:22.460027 4897 scope.go:117] "RemoveContainer" containerID="02bf3de735905cac84484f7527ca357452ce3c0a3a6cd83376b0ef258ed0c530" Feb 28 13:39:24 crc kubenswrapper[4897]: E0228 13:39:24.462993 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:39:24 crc kubenswrapper[4897]: I0228 13:39:24.529091 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-844df98d6-6ncv9" Feb 28 13:39:24 crc kubenswrapper[4897]: I0228 13:39:24.665803 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18-ovndb-tls-certs\") pod \"c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18\" (UID: \"c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18\") " Feb 28 13:39:24 crc kubenswrapper[4897]: I0228 13:39:24.666111 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18-httpd-config\") pod \"c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18\" (UID: \"c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18\") " Feb 28 13:39:24 crc kubenswrapper[4897]: I0228 13:39:24.666200 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tpsg\" (UniqueName: \"kubernetes.io/projected/c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18-kube-api-access-8tpsg\") pod \"c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18\" (UID: \"c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18\") " Feb 28 13:39:24 crc kubenswrapper[4897]: I0228 13:39:24.666261 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18-config\") pod \"c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18\" (UID: \"c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18\") " Feb 28 13:39:24 crc kubenswrapper[4897]: I0228 13:39:24.666439 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18-combined-ca-bundle\") pod \"c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18\" (UID: \"c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18\") " Feb 28 13:39:24 crc kubenswrapper[4897]: I0228 13:39:24.671637 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18" (UID: "c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:39:24 crc kubenswrapper[4897]: I0228 13:39:24.681515 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18-kube-api-access-8tpsg" (OuterVolumeSpecName: "kube-api-access-8tpsg") pod "c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18" (UID: "c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18"). InnerVolumeSpecName "kube-api-access-8tpsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:39:24 crc kubenswrapper[4897]: I0228 13:39:24.768964 4897 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:24 crc kubenswrapper[4897]: I0228 13:39:24.768995 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tpsg\" (UniqueName: \"kubernetes.io/projected/c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18-kube-api-access-8tpsg\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:24 crc kubenswrapper[4897]: I0228 13:39:24.795502 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18-config" (OuterVolumeSpecName: "config") pod "c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18" (UID: "c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:39:24 crc kubenswrapper[4897]: I0228 13:39:24.808705 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18" (UID: "c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:39:24 crc kubenswrapper[4897]: I0228 13:39:24.816479 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18" (UID: "c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:39:24 crc kubenswrapper[4897]: I0228 13:39:24.870756 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:24 crc kubenswrapper[4897]: I0228 13:39:24.870789 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:24 crc kubenswrapper[4897]: I0228 13:39:24.870800 4897 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:25 crc kubenswrapper[4897]: W0228 13:39:25.025300 4897 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07b3f8ac_cae7_400a_bdf5_5768e5b74f79.slice/crio-745ee297748db77c3a46a3194fd7e241d1422fff5f70d607c3987da4c734901c WatchSource:0}: Error finding container 745ee297748db77c3a46a3194fd7e241d1422fff5f70d607c3987da4c734901c: Status 404 returned error can't find the container with id 745ee297748db77c3a46a3194fd7e241d1422fff5f70d607c3987da4c734901c Feb 28 13:39:25 crc kubenswrapper[4897]: I0228 13:39:25.026769 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mmkk5"] Feb 28 13:39:25 crc kubenswrapper[4897]: I0228 13:39:25.326663 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6","Type":"ContainerStarted","Data":"a84dbcf65ebf45a8e0a4cbb472d0d5147e3deb6bb67a494f9bf8476492e208d2"} Feb 28 13:39:25 crc kubenswrapper[4897]: I0228 13:39:25.329295 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"2b88f822-8f2a-473a-b388-b144a37ba4f0","Type":"ContainerStarted","Data":"aa381f8a76b988b4c019f81f1a661a9ce66e1b95740f7ff34461cde5ce3a56ee"} Feb 28 13:39:25 crc kubenswrapper[4897]: I0228 13:39:25.332681 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-844df98d6-6ncv9" event={"ID":"c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18","Type":"ContainerDied","Data":"a3070bc88395caf61e55cc0b48b8ca2fc46355dac9c7552d210e244136fb0270"} Feb 28 13:39:25 crc kubenswrapper[4897]: I0228 13:39:25.332718 4897 scope.go:117] "RemoveContainer" containerID="0dc24af0caa37581cc0a0f08f82404e8d8c243b3501baf511950cf7edd705dba" Feb 28 13:39:25 crc kubenswrapper[4897]: I0228 13:39:25.332816 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-844df98d6-6ncv9" Feb 28 13:39:25 crc kubenswrapper[4897]: I0228 13:39:25.339702 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-mmkk5" event={"ID":"07b3f8ac-cae7-400a-bdf5-5768e5b74f79","Type":"ContainerStarted","Data":"745ee297748db77c3a46a3194fd7e241d1422fff5f70d607c3987da4c734901c"} Feb 28 13:39:25 crc kubenswrapper[4897]: I0228 13:39:25.373535 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-844df98d6-6ncv9"] Feb 28 13:39:25 crc kubenswrapper[4897]: I0228 13:39:25.381079 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-844df98d6-6ncv9"] Feb 28 13:39:25 crc kubenswrapper[4897]: I0228 13:39:25.404566 4897 scope.go:117] "RemoveContainer" containerID="c04e0114733fac68182304cf39bae4de83471321b325d05b9b29415deac0c99a" Feb 28 13:39:25 crc kubenswrapper[4897]: E0228 13:39:25.549705 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/thanos-rhel9@sha256=b77218de6528d52542abddf9fb3faececdf1a3e47987ac740ab00468d461a60b/signature-4: status 500 (Internal Server Error)" image="registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34" Feb 28 13:39:25 crc kubenswrapper[4897]: E0228 13:39:25.550328 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:thanos-sidecar,Image:registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34,Command:[],Args:[sidecar --prometheus.url=http://localhost:9090/ --grpc-address=:10901 --http-address=:10902 --log.level=info 
--prometheus.http-client-file=/etc/thanos/config/prometheus.http-client-file.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:10902,Protocol:TCP,HostIP:,},ContainerPort{Name:grpc,HostPort:0,ContainerPort:10901,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:thanos-prometheus-http-client-file,ReadOnly:false,MountPath:/etc/thanos/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fr856,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/thanos-rhel9@sha256=b77218de6528d52542abddf9fb3faececdf1a3e47987ac740ab00468d461a60b/signature-4: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 13:39:25 crc kubenswrapper[4897]: E0228 13:39:25.552788 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"thanos-sidecar\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/thanos-rhel9@sha256=b77218de6528d52542abddf9fb3faececdf1a3e47987ac740ab00468d461a60b/signature-4: status 500 (Internal Server Error)\"" pod="openstack/prometheus-metric-storage-0" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" Feb 28 13:39:26 crc kubenswrapper[4897]: E0228 13:39:26.357351 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" Feb 28 13:39:26 crc kubenswrapper[4897]: I0228 13:39:26.476189 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18" path="/var/lib/kubelet/pods/c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18/volumes" Feb 28 13:39:26 crc kubenswrapper[4897]: I0228 13:39:26.721596 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 28 13:39:26 crc kubenswrapper[4897]: I0228 13:39:26.721639 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 28 13:39:26 crc kubenswrapper[4897]: I0228 13:39:26.752033 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 28 13:39:26 crc kubenswrapper[4897]: I0228 13:39:26.785181 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 28 13:39:27 crc kubenswrapper[4897]: I0228 13:39:27.363660 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/glance-default-external-api-0" Feb 28 13:39:27 crc kubenswrapper[4897]: I0228 13:39:27.363699 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 28 13:39:27 crc kubenswrapper[4897]: I0228 13:39:27.569036 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 28 13:39:27 crc kubenswrapper[4897]: I0228 13:39:27.569299 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 28 13:39:27 crc kubenswrapper[4897]: I0228 13:39:27.599978 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 28 13:39:27 crc kubenswrapper[4897]: I0228 13:39:27.611264 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 28 13:39:28 crc kubenswrapper[4897]: I0228 13:39:28.373434 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 28 13:39:28 crc kubenswrapper[4897]: I0228 13:39:28.373488 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 28 13:39:29 crc kubenswrapper[4897]: I0228 13:39:29.227764 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 28 13:39:29 crc kubenswrapper[4897]: I0228 13:39:29.231766 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 28 13:39:29 crc kubenswrapper[4897]: E0228 13:39:29.232640 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" Feb 28 13:39:29 crc kubenswrapper[4897]: I0228 13:39:29.233640 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 28 13:39:29 crc kubenswrapper[4897]: I0228 13:39:29.508481 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 28 13:39:29 crc kubenswrapper[4897]: I0228 13:39:29.509435 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 28 13:39:29 crc kubenswrapper[4897]: I0228 13:39:29.540537 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 28 13:39:30 crc kubenswrapper[4897]: I0228 13:39:30.428517 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 28 13:39:30 crc kubenswrapper[4897]: I0228 13:39:30.830378 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 28 13:39:30 crc kubenswrapper[4897]: I0228 13:39:30.830479 4897 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 28 13:39:30 crc kubenswrapper[4897]: I0228 13:39:30.836002 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 28 13:39:30 crc kubenswrapper[4897]: E0228 13:39:30.947353 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)" image="registry.redhat.io/ubi9/httpd-24:latest" Feb 28 13:39:30 crc kubenswrapper[4897]: E0228 13:39:30.947531 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r5sms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f05b9cde-39ac-43bf-aff2-85f5b1d2acae): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 13:39:30 crc kubenswrapper[4897]: E0228 13:39:30.948693 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)\"" pod="openstack/ceilometer-0" 
podUID="f05b9cde-39ac-43bf-aff2-85f5b1d2acae" Feb 28 13:39:31 crc kubenswrapper[4897]: I0228 13:39:31.414723 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f05b9cde-39ac-43bf-aff2-85f5b1d2acae" containerName="ceilometer-central-agent" containerID="cri-o://ca52dbc4d3af48283acf71aae46b20c8f9521e1b6d9a67d2467df347da905fe5" gracePeriod=30 Feb 28 13:39:31 crc kubenswrapper[4897]: I0228 13:39:31.414763 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f05b9cde-39ac-43bf-aff2-85f5b1d2acae" containerName="sg-core" containerID="cri-o://144ce0b87ac18e9815347f1a56e6a1a6695674a8f4f1e1c1de43c5bab1636154" gracePeriod=30 Feb 28 13:39:31 crc kubenswrapper[4897]: I0228 13:39:31.414895 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f05b9cde-39ac-43bf-aff2-85f5b1d2acae" containerName="ceilometer-notification-agent" containerID="cri-o://250c6a28fc56f09d45c62ecf9b6c012dd971f01783deaa5f8717a511305060e0" gracePeriod=30 Feb 28 13:39:32 crc kubenswrapper[4897]: I0228 13:39:32.329844 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Feb 28 13:39:32 crc kubenswrapper[4897]: I0228 13:39:32.330351 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-applier-0" podUID="2ff09d8c-69de-4c11-8e94-90fce8f42387" containerName="watcher-applier" containerID="cri-o://7904a3f40492adfac8f92568a394efa044ff0272bcfa724e20ae0aa6404e1333" gracePeriod=30 Feb 28 13:39:32 crc kubenswrapper[4897]: I0228 13:39:32.342746 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 28 13:39:32 crc kubenswrapper[4897]: I0228 13:39:32.374758 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 28 13:39:32 crc kubenswrapper[4897]: I0228 13:39:32.374981 4897 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="4814ed83-bcac-465c-aaf6-b2acde9b0e13" containerName="watcher-api-log" containerID="cri-o://1c377f01c0e08e63f24dd7d5fda5daadcf629bd0a0d7ee79b18080e68d14d1c3" gracePeriod=30 Feb 28 13:39:32 crc kubenswrapper[4897]: I0228 13:39:32.375373 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="4814ed83-bcac-465c-aaf6-b2acde9b0e13" containerName="watcher-api" containerID="cri-o://f47628682bffb126e28a653d3e8a5a4058b2a066ffc641faaa527f77f50962a6" gracePeriod=30 Feb 28 13:39:32 crc kubenswrapper[4897]: I0228 13:39:32.437187 4897 generic.go:334] "Generic (PLEG): container finished" podID="f05b9cde-39ac-43bf-aff2-85f5b1d2acae" containerID="144ce0b87ac18e9815347f1a56e6a1a6695674a8f4f1e1c1de43c5bab1636154" exitCode=2 Feb 28 13:39:32 crc kubenswrapper[4897]: I0228 13:39:32.437218 4897 generic.go:334] "Generic (PLEG): container finished" podID="f05b9cde-39ac-43bf-aff2-85f5b1d2acae" containerID="ca52dbc4d3af48283acf71aae46b20c8f9521e1b6d9a67d2467df347da905fe5" exitCode=0 Feb 28 13:39:32 crc kubenswrapper[4897]: I0228 13:39:32.437803 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f05b9cde-39ac-43bf-aff2-85f5b1d2acae","Type":"ContainerDied","Data":"144ce0b87ac18e9815347f1a56e6a1a6695674a8f4f1e1c1de43c5bab1636154"} Feb 28 13:39:32 crc kubenswrapper[4897]: I0228 13:39:32.437852 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f05b9cde-39ac-43bf-aff2-85f5b1d2acae","Type":"ContainerDied","Data":"ca52dbc4d3af48283acf71aae46b20c8f9521e1b6d9a67d2467df347da905fe5"} Feb 28 13:39:33 crc kubenswrapper[4897]: I0228 13:39:33.371352 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 13:39:33 crc kubenswrapper[4897]: I0228 13:39:33.371635 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 13:39:33 crc kubenswrapper[4897]: I0228 13:39:33.371681 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brq22" Feb 28 13:39:33 crc kubenswrapper[4897]: I0228 13:39:33.372410 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2251fe7bbe6b22484b56b41016e482aae198972b32b2a8de419f213131379efa"} pod="openshift-machine-config-operator/machine-config-daemon-brq22" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 13:39:33 crc kubenswrapper[4897]: I0228 13:39:33.372463 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" containerID="cri-o://2251fe7bbe6b22484b56b41016e482aae198972b32b2a8de419f213131379efa" gracePeriod=600 Feb 28 13:39:33 crc kubenswrapper[4897]: I0228 13:39:33.452579 4897 generic.go:334] "Generic (PLEG): container finished" podID="f05b9cde-39ac-43bf-aff2-85f5b1d2acae" containerID="250c6a28fc56f09d45c62ecf9b6c012dd971f01783deaa5f8717a511305060e0" exitCode=0 Feb 28 13:39:33 crc kubenswrapper[4897]: I0228 13:39:33.452657 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"f05b9cde-39ac-43bf-aff2-85f5b1d2acae","Type":"ContainerDied","Data":"250c6a28fc56f09d45c62ecf9b6c012dd971f01783deaa5f8717a511305060e0"} Feb 28 13:39:33 crc kubenswrapper[4897]: I0228 13:39:33.457708 4897 generic.go:334] "Generic (PLEG): container finished" podID="4814ed83-bcac-465c-aaf6-b2acde9b0e13" containerID="f47628682bffb126e28a653d3e8a5a4058b2a066ffc641faaa527f77f50962a6" exitCode=0 Feb 28 13:39:33 crc kubenswrapper[4897]: I0228 13:39:33.457756 4897 generic.go:334] "Generic (PLEG): container finished" podID="4814ed83-bcac-465c-aaf6-b2acde9b0e13" containerID="1c377f01c0e08e63f24dd7d5fda5daadcf629bd0a0d7ee79b18080e68d14d1c3" exitCode=143 Feb 28 13:39:33 crc kubenswrapper[4897]: I0228 13:39:33.457779 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"4814ed83-bcac-465c-aaf6-b2acde9b0e13","Type":"ContainerDied","Data":"f47628682bffb126e28a653d3e8a5a4058b2a066ffc641faaa527f77f50962a6"} Feb 28 13:39:33 crc kubenswrapper[4897]: I0228 13:39:33.457915 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="2b88f822-8f2a-473a-b388-b144a37ba4f0" containerName="watcher-decision-engine" containerID="cri-o://aa381f8a76b988b4c019f81f1a661a9ce66e1b95740f7ff34461cde5ce3a56ee" gracePeriod=30 Feb 28 13:39:33 crc kubenswrapper[4897]: I0228 13:39:33.457872 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"4814ed83-bcac-465c-aaf6-b2acde9b0e13","Type":"ContainerDied","Data":"1c377f01c0e08e63f24dd7d5fda5daadcf629bd0a0d7ee79b18080e68d14d1c3"} Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.228869 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.231323 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 28 
13:39:34 crc kubenswrapper[4897]: E0228 13:39:34.232991 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.401098 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.479590 4897 generic.go:334] "Generic (PLEG): container finished" podID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerID="2251fe7bbe6b22484b56b41016e482aae198972b32b2a8de419f213131379efa" exitCode=0 Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.482390 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 28 13:39:34 crc kubenswrapper[4897]: E0228 13:39:34.483871 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.488519 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gx7xs\" (UniqueName: \"kubernetes.io/projected/4814ed83-bcac-465c-aaf6-b2acde9b0e13-kube-api-access-gx7xs\") pod \"4814ed83-bcac-465c-aaf6-b2acde9b0e13\" (UID: \"4814ed83-bcac-465c-aaf6-b2acde9b0e13\") " Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.488575 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4814ed83-bcac-465c-aaf6-b2acde9b0e13-logs\") pod \"4814ed83-bcac-465c-aaf6-b2acde9b0e13\" (UID: \"4814ed83-bcac-465c-aaf6-b2acde9b0e13\") " Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.488603 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4814ed83-bcac-465c-aaf6-b2acde9b0e13-public-tls-certs\") pod \"4814ed83-bcac-465c-aaf6-b2acde9b0e13\" (UID: \"4814ed83-bcac-465c-aaf6-b2acde9b0e13\") " Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.488682 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4814ed83-bcac-465c-aaf6-b2acde9b0e13-combined-ca-bundle\") pod \"4814ed83-bcac-465c-aaf6-b2acde9b0e13\" (UID: \"4814ed83-bcac-465c-aaf6-b2acde9b0e13\") " Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.488701 4897 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4814ed83-bcac-465c-aaf6-b2acde9b0e13-config-data\") pod \"4814ed83-bcac-465c-aaf6-b2acde9b0e13\" (UID: \"4814ed83-bcac-465c-aaf6-b2acde9b0e13\") " Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.488727 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4814ed83-bcac-465c-aaf6-b2acde9b0e13-internal-tls-certs\") pod \"4814ed83-bcac-465c-aaf6-b2acde9b0e13\" (UID: \"4814ed83-bcac-465c-aaf6-b2acde9b0e13\") " Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.494738 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerDied","Data":"2251fe7bbe6b22484b56b41016e482aae198972b32b2a8de419f213131379efa"} Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.494781 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"4814ed83-bcac-465c-aaf6-b2acde9b0e13","Type":"ContainerDied","Data":"e5ce18eecd7697eb9fd44fa34eec94c8044a9f3ee062707788ac0005908aa546"} Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.495352 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.495387 4897 scope.go:117] "RemoveContainer" containerID="9c1430618bfc0c64d7fc6435ca448e45cbed910b3af28fa0f1da0886835a239f" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.498266 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4814ed83-bcac-465c-aaf6-b2acde9b0e13-logs" (OuterVolumeSpecName: "logs") pod "4814ed83-bcac-465c-aaf6-b2acde9b0e13" (UID: "4814ed83-bcac-465c-aaf6-b2acde9b0e13"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.511637 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4814ed83-bcac-465c-aaf6-b2acde9b0e13-kube-api-access-gx7xs" (OuterVolumeSpecName: "kube-api-access-gx7xs") pod "4814ed83-bcac-465c-aaf6-b2acde9b0e13" (UID: "4814ed83-bcac-465c-aaf6-b2acde9b0e13"). InnerVolumeSpecName "kube-api-access-gx7xs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.557947 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4814ed83-bcac-465c-aaf6-b2acde9b0e13-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4814ed83-bcac-465c-aaf6-b2acde9b0e13" (UID: "4814ed83-bcac-465c-aaf6-b2acde9b0e13"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.560568 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4814ed83-bcac-465c-aaf6-b2acde9b0e13-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4814ed83-bcac-465c-aaf6-b2acde9b0e13" (UID: "4814ed83-bcac-465c-aaf6-b2acde9b0e13"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.567722 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4814ed83-bcac-465c-aaf6-b2acde9b0e13-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4814ed83-bcac-465c-aaf6-b2acde9b0e13" (UID: "4814ed83-bcac-465c-aaf6-b2acde9b0e13"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.590592 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gx7xs\" (UniqueName: \"kubernetes.io/projected/4814ed83-bcac-465c-aaf6-b2acde9b0e13-kube-api-access-gx7xs\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.590626 4897 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4814ed83-bcac-465c-aaf6-b2acde9b0e13-logs\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.590638 4897 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4814ed83-bcac-465c-aaf6-b2acde9b0e13-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.590650 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4814ed83-bcac-465c-aaf6-b2acde9b0e13-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.590662 4897 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4814ed83-bcac-465c-aaf6-b2acde9b0e13-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.591322 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4814ed83-bcac-465c-aaf6-b2acde9b0e13-config-data" (OuterVolumeSpecName: "config-data") pod "4814ed83-bcac-465c-aaf6-b2acde9b0e13" (UID: "4814ed83-bcac-465c-aaf6-b2acde9b0e13"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:39:34 crc kubenswrapper[4897]: E0228 13:39:34.635487 4897 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7904a3f40492adfac8f92568a394efa044ff0272bcfa724e20ae0aa6404e1333" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 28 13:39:34 crc kubenswrapper[4897]: E0228 13:39:34.636929 4897 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7904a3f40492adfac8f92568a394efa044ff0272bcfa724e20ae0aa6404e1333" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 28 13:39:34 crc kubenswrapper[4897]: E0228 13:39:34.644866 4897 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7904a3f40492adfac8f92568a394efa044ff0272bcfa724e20ae0aa6404e1333" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 28 13:39:34 crc kubenswrapper[4897]: E0228 13:39:34.644954 4897 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="2ff09d8c-69de-4c11-8e94-90fce8f42387" containerName="watcher-applier" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.689278 4897 scope.go:117] "RemoveContainer" containerID="f47628682bffb126e28a653d3e8a5a4058b2a066ffc641faaa527f77f50962a6" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.691992 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4814ed83-bcac-465c-aaf6-b2acde9b0e13-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.699670 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.725732 4897 scope.go:117] "RemoveContainer" containerID="1c377f01c0e08e63f24dd7d5fda5daadcf629bd0a0d7ee79b18080e68d14d1c3" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.793045 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f05b9cde-39ac-43bf-aff2-85f5b1d2acae-sg-core-conf-yaml\") pod \"f05b9cde-39ac-43bf-aff2-85f5b1d2acae\" (UID: \"f05b9cde-39ac-43bf-aff2-85f5b1d2acae\") " Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.793142 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f05b9cde-39ac-43bf-aff2-85f5b1d2acae-log-httpd\") pod \"f05b9cde-39ac-43bf-aff2-85f5b1d2acae\" (UID: \"f05b9cde-39ac-43bf-aff2-85f5b1d2acae\") " Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.793161 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f05b9cde-39ac-43bf-aff2-85f5b1d2acae-run-httpd\") pod \"f05b9cde-39ac-43bf-aff2-85f5b1d2acae\" (UID: \"f05b9cde-39ac-43bf-aff2-85f5b1d2acae\") " Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.793221 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f05b9cde-39ac-43bf-aff2-85f5b1d2acae-combined-ca-bundle\") pod \"f05b9cde-39ac-43bf-aff2-85f5b1d2acae\" (UID: \"f05b9cde-39ac-43bf-aff2-85f5b1d2acae\") " Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.793278 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f05b9cde-39ac-43bf-aff2-85f5b1d2acae-config-data\") pod \"f05b9cde-39ac-43bf-aff2-85f5b1d2acae\" (UID: \"f05b9cde-39ac-43bf-aff2-85f5b1d2acae\") " Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.793365 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f05b9cde-39ac-43bf-aff2-85f5b1d2acae-scripts\") pod \"f05b9cde-39ac-43bf-aff2-85f5b1d2acae\" (UID: \"f05b9cde-39ac-43bf-aff2-85f5b1d2acae\") " Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.793449 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5sms\" (UniqueName: \"kubernetes.io/projected/f05b9cde-39ac-43bf-aff2-85f5b1d2acae-kube-api-access-r5sms\") pod \"f05b9cde-39ac-43bf-aff2-85f5b1d2acae\" (UID: \"f05b9cde-39ac-43bf-aff2-85f5b1d2acae\") " Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.793747 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f05b9cde-39ac-43bf-aff2-85f5b1d2acae-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f05b9cde-39ac-43bf-aff2-85f5b1d2acae" (UID: "f05b9cde-39ac-43bf-aff2-85f5b1d2acae"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.793989 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f05b9cde-39ac-43bf-aff2-85f5b1d2acae-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f05b9cde-39ac-43bf-aff2-85f5b1d2acae" (UID: "f05b9cde-39ac-43bf-aff2-85f5b1d2acae"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.794462 4897 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f05b9cde-39ac-43bf-aff2-85f5b1d2acae-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.794479 4897 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f05b9cde-39ac-43bf-aff2-85f5b1d2acae-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.797398 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f05b9cde-39ac-43bf-aff2-85f5b1d2acae-scripts" (OuterVolumeSpecName: "scripts") pod "f05b9cde-39ac-43bf-aff2-85f5b1d2acae" (UID: "f05b9cde-39ac-43bf-aff2-85f5b1d2acae"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.828019 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f05b9cde-39ac-43bf-aff2-85f5b1d2acae-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f05b9cde-39ac-43bf-aff2-85f5b1d2acae" (UID: "f05b9cde-39ac-43bf-aff2-85f5b1d2acae"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.836575 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f05b9cde-39ac-43bf-aff2-85f5b1d2acae-kube-api-access-r5sms" (OuterVolumeSpecName: "kube-api-access-r5sms") pod "f05b9cde-39ac-43bf-aff2-85f5b1d2acae" (UID: "f05b9cde-39ac-43bf-aff2-85f5b1d2acae"). InnerVolumeSpecName "kube-api-access-r5sms". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.844478 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.855368 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f05b9cde-39ac-43bf-aff2-85f5b1d2acae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f05b9cde-39ac-43bf-aff2-85f5b1d2acae" (UID: "f05b9cde-39ac-43bf-aff2-85f5b1d2acae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.861594 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f05b9cde-39ac-43bf-aff2-85f5b1d2acae-config-data" (OuterVolumeSpecName: "config-data") pod "f05b9cde-39ac-43bf-aff2-85f5b1d2acae" (UID: "f05b9cde-39ac-43bf-aff2-85f5b1d2acae"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.867393 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.880395 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 28 13:39:34 crc kubenswrapper[4897]: E0228 13:39:34.880844 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4814ed83-bcac-465c-aaf6-b2acde9b0e13" containerName="watcher-api" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.880860 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="4814ed83-bcac-465c-aaf6-b2acde9b0e13" containerName="watcher-api" Feb 28 13:39:34 crc kubenswrapper[4897]: E0228 13:39:34.880872 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f05b9cde-39ac-43bf-aff2-85f5b1d2acae" containerName="ceilometer-central-agent" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.880879 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f05b9cde-39ac-43bf-aff2-85f5b1d2acae" containerName="ceilometer-central-agent" Feb 28 13:39:34 crc kubenswrapper[4897]: E0228 13:39:34.880893 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f05b9cde-39ac-43bf-aff2-85f5b1d2acae" containerName="sg-core" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.880899 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f05b9cde-39ac-43bf-aff2-85f5b1d2acae" containerName="sg-core" Feb 28 13:39:34 crc kubenswrapper[4897]: E0228 13:39:34.880911 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18" containerName="neutron-httpd" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.880916 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18" containerName="neutron-httpd" Feb 28 13:39:34 crc kubenswrapper[4897]: E0228 13:39:34.880927 4897 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f05b9cde-39ac-43bf-aff2-85f5b1d2acae" containerName="ceilometer-notification-agent" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.880933 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f05b9cde-39ac-43bf-aff2-85f5b1d2acae" containerName="ceilometer-notification-agent" Feb 28 13:39:34 crc kubenswrapper[4897]: E0228 13:39:34.880950 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4814ed83-bcac-465c-aaf6-b2acde9b0e13" containerName="watcher-api-log" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.880956 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="4814ed83-bcac-465c-aaf6-b2acde9b0e13" containerName="watcher-api-log" Feb 28 13:39:34 crc kubenswrapper[4897]: E0228 13:39:34.880968 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18" containerName="neutron-api" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.880973 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18" containerName="neutron-api" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.881139 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f05b9cde-39ac-43bf-aff2-85f5b1d2acae" containerName="sg-core" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.881154 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f05b9cde-39ac-43bf-aff2-85f5b1d2acae" containerName="ceilometer-notification-agent" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.881168 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18" containerName="neutron-httpd" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.881176 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f05b9cde-39ac-43bf-aff2-85f5b1d2acae" containerName="ceilometer-central-agent" Feb 28 13:39:34 crc 
kubenswrapper[4897]: I0228 13:39:34.881193 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5e1f0a6-d3e4-4985-8c2e-43f2bba58d18" containerName="neutron-api" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.881200 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="4814ed83-bcac-465c-aaf6-b2acde9b0e13" containerName="watcher-api-log" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.881213 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="4814ed83-bcac-465c-aaf6-b2acde9b0e13" containerName="watcher-api" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.882191 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.887282 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.887636 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.887291 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.918927 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.920459 4897 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f05b9cde-39ac-43bf-aff2-85f5b1d2acae-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.920496 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f05b9cde-39ac-43bf-aff2-85f5b1d2acae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 
13:39:34.920507 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f05b9cde-39ac-43bf-aff2-85f5b1d2acae-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.920516 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f05b9cde-39ac-43bf-aff2-85f5b1d2acae-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:34 crc kubenswrapper[4897]: I0228 13:39:34.920526 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5sms\" (UniqueName: \"kubernetes.io/projected/f05b9cde-39ac-43bf-aff2-85f5b1d2acae-kube-api-access-r5sms\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.021585 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/759c2685-a508-4824-9e22-1c18ca2e75ca-config-data\") pod \"watcher-api-0\" (UID: \"759c2685-a508-4824-9e22-1c18ca2e75ca\") " pod="openstack/watcher-api-0" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.021634 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/759c2685-a508-4824-9e22-1c18ca2e75ca-public-tls-certs\") pod \"watcher-api-0\" (UID: \"759c2685-a508-4824-9e22-1c18ca2e75ca\") " pod="openstack/watcher-api-0" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.021667 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/759c2685-a508-4824-9e22-1c18ca2e75ca-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"759c2685-a508-4824-9e22-1c18ca2e75ca\") " pod="openstack/watcher-api-0" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.021714 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/759c2685-a508-4824-9e22-1c18ca2e75ca-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"759c2685-a508-4824-9e22-1c18ca2e75ca\") " pod="openstack/watcher-api-0" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.021766 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/759c2685-a508-4824-9e22-1c18ca2e75ca-logs\") pod \"watcher-api-0\" (UID: \"759c2685-a508-4824-9e22-1c18ca2e75ca\") " pod="openstack/watcher-api-0" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.021792 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pchj6\" (UniqueName: \"kubernetes.io/projected/759c2685-a508-4824-9e22-1c18ca2e75ca-kube-api-access-pchj6\") pod \"watcher-api-0\" (UID: \"759c2685-a508-4824-9e22-1c18ca2e75ca\") " pod="openstack/watcher-api-0" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.123591 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/759c2685-a508-4824-9e22-1c18ca2e75ca-config-data\") pod \"watcher-api-0\" (UID: \"759c2685-a508-4824-9e22-1c18ca2e75ca\") " pod="openstack/watcher-api-0" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.123682 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/759c2685-a508-4824-9e22-1c18ca2e75ca-public-tls-certs\") pod \"watcher-api-0\" (UID: \"759c2685-a508-4824-9e22-1c18ca2e75ca\") " pod="openstack/watcher-api-0" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.123760 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/759c2685-a508-4824-9e22-1c18ca2e75ca-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"759c2685-a508-4824-9e22-1c18ca2e75ca\") " pod="openstack/watcher-api-0" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.123853 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/759c2685-a508-4824-9e22-1c18ca2e75ca-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"759c2685-a508-4824-9e22-1c18ca2e75ca\") " pod="openstack/watcher-api-0" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.123928 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/759c2685-a508-4824-9e22-1c18ca2e75ca-logs\") pod \"watcher-api-0\" (UID: \"759c2685-a508-4824-9e22-1c18ca2e75ca\") " pod="openstack/watcher-api-0" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.123981 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pchj6\" (UniqueName: \"kubernetes.io/projected/759c2685-a508-4824-9e22-1c18ca2e75ca-kube-api-access-pchj6\") pod \"watcher-api-0\" (UID: \"759c2685-a508-4824-9e22-1c18ca2e75ca\") " pod="openstack/watcher-api-0" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.124929 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/759c2685-a508-4824-9e22-1c18ca2e75ca-logs\") pod \"watcher-api-0\" (UID: \"759c2685-a508-4824-9e22-1c18ca2e75ca\") " pod="openstack/watcher-api-0" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.129818 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/759c2685-a508-4824-9e22-1c18ca2e75ca-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"759c2685-a508-4824-9e22-1c18ca2e75ca\") " pod="openstack/watcher-api-0" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 
13:39:35.129957 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/759c2685-a508-4824-9e22-1c18ca2e75ca-public-tls-certs\") pod \"watcher-api-0\" (UID: \"759c2685-a508-4824-9e22-1c18ca2e75ca\") " pod="openstack/watcher-api-0" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.130402 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/759c2685-a508-4824-9e22-1c18ca2e75ca-config-data\") pod \"watcher-api-0\" (UID: \"759c2685-a508-4824-9e22-1c18ca2e75ca\") " pod="openstack/watcher-api-0" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.130479 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/759c2685-a508-4824-9e22-1c18ca2e75ca-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"759c2685-a508-4824-9e22-1c18ca2e75ca\") " pod="openstack/watcher-api-0" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.143981 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pchj6\" (UniqueName: \"kubernetes.io/projected/759c2685-a508-4824-9e22-1c18ca2e75ca-kube-api-access-pchj6\") pod \"watcher-api-0\" (UID: \"759c2685-a508-4824-9e22-1c18ca2e75ca\") " pod="openstack/watcher-api-0" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.211197 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.496204 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-mmkk5" event={"ID":"07b3f8ac-cae7-400a-bdf5-5768e5b74f79","Type":"ContainerStarted","Data":"e4619fbec221700042098cfae05b995c1e6b171efee5d910f73d2a991a6b2e2f"} Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.498209 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerStarted","Data":"aac243b40f8cd522c66dee8379dbaaf158caf96690e017e4b821fa9af1817d1c"} Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.500913 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f05b9cde-39ac-43bf-aff2-85f5b1d2acae","Type":"ContainerDied","Data":"ddd82a8900d5363810db91bd5200432e98766760ab0f83e0f40b21b7798ac7d3"} Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.500967 4897 scope.go:117] "RemoveContainer" containerID="144ce0b87ac18e9815347f1a56e6a1a6695674a8f4f1e1c1de43c5bab1636154" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.500925 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 13:39:35 crc kubenswrapper[4897]: E0228 13:39:35.507542 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.525511 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-mmkk5" podStartSLOduration=5.170834918 podStartE2EDuration="14.525490928s" podCreationTimestamp="2026-02-28 13:39:21 +0000 UTC" firstStartedPulling="2026-02-28 13:39:25.027491977 +0000 UTC m=+1379.269812654" lastFinishedPulling="2026-02-28 13:39:34.382148007 +0000 UTC m=+1388.624468664" observedRunningTime="2026-02-28 13:39:35.51468639 +0000 UTC m=+1389.757007047" watchObservedRunningTime="2026-02-28 13:39:35.525490928 +0000 UTC m=+1389.767811585" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.587120 4897 scope.go:117] "RemoveContainer" containerID="250c6a28fc56f09d45c62ecf9b6c012dd971f01783deaa5f8717a511305060e0" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.614598 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.625873 4897 scope.go:117] "RemoveContainer" containerID="ca52dbc4d3af48283acf71aae46b20c8f9521e1b6d9a67d2467df347da905fe5" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.627728 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.639022 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 
13:39:35.643854 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.646918 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.647511 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.661634 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 13:39:35 crc kubenswrapper[4897]: W0228 13:39:35.677694 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod759c2685_a508_4824_9e22_1c18ca2e75ca.slice/crio-a03f4deec48cbae51eb34537b59f32e818350e850a8858319d3f80f238b54680 WatchSource:0}: Error finding container a03f4deec48cbae51eb34537b59f32e818350e850a8858319d3f80f238b54680: Status 404 returned error can't find the container with id a03f4deec48cbae51eb34537b59f32e818350e850a8858319d3f80f238b54680 Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.680575 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.737219 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f79a822-c955-4cfa-a75d-fd1784834f99-log-httpd\") pod \"ceilometer-0\" (UID: \"5f79a822-c955-4cfa-a75d-fd1784834f99\") " pod="openstack/ceilometer-0" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.737275 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f79a822-c955-4cfa-a75d-fd1784834f99-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"5f79a822-c955-4cfa-a75d-fd1784834f99\") " pod="openstack/ceilometer-0" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.737335 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f79a822-c955-4cfa-a75d-fd1784834f99-run-httpd\") pod \"ceilometer-0\" (UID: \"5f79a822-c955-4cfa-a75d-fd1784834f99\") " pod="openstack/ceilometer-0" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.737373 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f79a822-c955-4cfa-a75d-fd1784834f99-config-data\") pod \"ceilometer-0\" (UID: \"5f79a822-c955-4cfa-a75d-fd1784834f99\") " pod="openstack/ceilometer-0" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.737437 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f79a822-c955-4cfa-a75d-fd1784834f99-scripts\") pod \"ceilometer-0\" (UID: \"5f79a822-c955-4cfa-a75d-fd1784834f99\") " pod="openstack/ceilometer-0" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.737488 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2smc\" (UniqueName: \"kubernetes.io/projected/5f79a822-c955-4cfa-a75d-fd1784834f99-kube-api-access-w2smc\") pod \"ceilometer-0\" (UID: \"5f79a822-c955-4cfa-a75d-fd1784834f99\") " pod="openstack/ceilometer-0" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.737505 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5f79a822-c955-4cfa-a75d-fd1784834f99-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5f79a822-c955-4cfa-a75d-fd1784834f99\") " pod="openstack/ceilometer-0" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.839109 
4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2smc\" (UniqueName: \"kubernetes.io/projected/5f79a822-c955-4cfa-a75d-fd1784834f99-kube-api-access-w2smc\") pod \"ceilometer-0\" (UID: \"5f79a822-c955-4cfa-a75d-fd1784834f99\") " pod="openstack/ceilometer-0" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.839418 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5f79a822-c955-4cfa-a75d-fd1784834f99-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5f79a822-c955-4cfa-a75d-fd1784834f99\") " pod="openstack/ceilometer-0" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.839547 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f79a822-c955-4cfa-a75d-fd1784834f99-log-httpd\") pod \"ceilometer-0\" (UID: \"5f79a822-c955-4cfa-a75d-fd1784834f99\") " pod="openstack/ceilometer-0" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.839630 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f79a822-c955-4cfa-a75d-fd1784834f99-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5f79a822-c955-4cfa-a75d-fd1784834f99\") " pod="openstack/ceilometer-0" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.839721 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f79a822-c955-4cfa-a75d-fd1784834f99-run-httpd\") pod \"ceilometer-0\" (UID: \"5f79a822-c955-4cfa-a75d-fd1784834f99\") " pod="openstack/ceilometer-0" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.839811 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f79a822-c955-4cfa-a75d-fd1784834f99-config-data\") pod \"ceilometer-0\" (UID: 
\"5f79a822-c955-4cfa-a75d-fd1784834f99\") " pod="openstack/ceilometer-0" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.839943 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f79a822-c955-4cfa-a75d-fd1784834f99-scripts\") pod \"ceilometer-0\" (UID: \"5f79a822-c955-4cfa-a75d-fd1784834f99\") " pod="openstack/ceilometer-0" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.840117 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f79a822-c955-4cfa-a75d-fd1784834f99-log-httpd\") pod \"ceilometer-0\" (UID: \"5f79a822-c955-4cfa-a75d-fd1784834f99\") " pod="openstack/ceilometer-0" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.840213 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f79a822-c955-4cfa-a75d-fd1784834f99-run-httpd\") pod \"ceilometer-0\" (UID: \"5f79a822-c955-4cfa-a75d-fd1784834f99\") " pod="openstack/ceilometer-0" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.847783 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f79a822-c955-4cfa-a75d-fd1784834f99-scripts\") pod \"ceilometer-0\" (UID: \"5f79a822-c955-4cfa-a75d-fd1784834f99\") " pod="openstack/ceilometer-0" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.850891 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5f79a822-c955-4cfa-a75d-fd1784834f99-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5f79a822-c955-4cfa-a75d-fd1784834f99\") " pod="openstack/ceilometer-0" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.857501 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5f79a822-c955-4cfa-a75d-fd1784834f99-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5f79a822-c955-4cfa-a75d-fd1784834f99\") " pod="openstack/ceilometer-0" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.874162 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2smc\" (UniqueName: \"kubernetes.io/projected/5f79a822-c955-4cfa-a75d-fd1784834f99-kube-api-access-w2smc\") pod \"ceilometer-0\" (UID: \"5f79a822-c955-4cfa-a75d-fd1784834f99\") " pod="openstack/ceilometer-0" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.874906 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f79a822-c955-4cfa-a75d-fd1784834f99-config-data\") pod \"ceilometer-0\" (UID: \"5f79a822-c955-4cfa-a75d-fd1784834f99\") " pod="openstack/ceilometer-0" Feb 28 13:39:35 crc kubenswrapper[4897]: I0228 13:39:35.992677 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 13:39:36 crc kubenswrapper[4897]: I0228 13:39:36.478719 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4814ed83-bcac-465c-aaf6-b2acde9b0e13" path="/var/lib/kubelet/pods/4814ed83-bcac-465c-aaf6-b2acde9b0e13/volumes" Feb 28 13:39:36 crc kubenswrapper[4897]: I0228 13:39:36.480054 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f05b9cde-39ac-43bf-aff2-85f5b1d2acae" path="/var/lib/kubelet/pods/f05b9cde-39ac-43bf-aff2-85f5b1d2acae/volumes" Feb 28 13:39:36 crc kubenswrapper[4897]: I0228 13:39:36.483491 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 13:39:36 crc kubenswrapper[4897]: I0228 13:39:36.517794 4897 generic.go:334] "Generic (PLEG): container finished" podID="2ff09d8c-69de-4c11-8e94-90fce8f42387" containerID="7904a3f40492adfac8f92568a394efa044ff0272bcfa724e20ae0aa6404e1333" exitCode=0 Feb 28 13:39:36 crc kubenswrapper[4897]: 
I0228 13:39:36.517922 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"2ff09d8c-69de-4c11-8e94-90fce8f42387","Type":"ContainerDied","Data":"7904a3f40492adfac8f92568a394efa044ff0272bcfa724e20ae0aa6404e1333"} Feb 28 13:39:36 crc kubenswrapper[4897]: I0228 13:39:36.527596 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"759c2685-a508-4824-9e22-1c18ca2e75ca","Type":"ContainerStarted","Data":"5e432dd33d9abefb43c0529e3079d0ff37436c9e5fd9e5e6268a9afd1be6a066"} Feb 28 13:39:36 crc kubenswrapper[4897]: I0228 13:39:36.527635 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"759c2685-a508-4824-9e22-1c18ca2e75ca","Type":"ContainerStarted","Data":"49a370aa792694cbe21feace035e04f4cddd95861466ed9ec8543fed4bd7bd61"} Feb 28 13:39:36 crc kubenswrapper[4897]: I0228 13:39:36.527643 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"759c2685-a508-4824-9e22-1c18ca2e75ca","Type":"ContainerStarted","Data":"a03f4deec48cbae51eb34537b59f32e818350e850a8858319d3f80f238b54680"} Feb 28 13:39:36 crc kubenswrapper[4897]: I0228 13:39:36.528440 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 28 13:39:36 crc kubenswrapper[4897]: I0228 13:39:36.549716 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=2.549699788 podStartE2EDuration="2.549699788s" podCreationTimestamp="2026-02-28 13:39:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:39:36.548986309 +0000 UTC m=+1390.791306966" watchObservedRunningTime="2026-02-28 13:39:36.549699788 +0000 UTC m=+1390.792020445" Feb 28 13:39:36 crc kubenswrapper[4897]: I0228 13:39:36.590866 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"5f79a822-c955-4cfa-a75d-fd1784834f99","Type":"ContainerStarted","Data":"d7167666835c688c12abc124ea63027322b5eee04ec512954b7c6bcaed18856c"} Feb 28 13:39:36 crc kubenswrapper[4897]: I0228 13:39:36.826398 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 13:39:36 crc kubenswrapper[4897]: I0228 13:39:36.902791 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 28 13:39:36 crc kubenswrapper[4897]: I0228 13:39:36.967450 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trtv2\" (UniqueName: \"kubernetes.io/projected/2ff09d8c-69de-4c11-8e94-90fce8f42387-kube-api-access-trtv2\") pod \"2ff09d8c-69de-4c11-8e94-90fce8f42387\" (UID: \"2ff09d8c-69de-4c11-8e94-90fce8f42387\") " Feb 28 13:39:36 crc kubenswrapper[4897]: I0228 13:39:36.967562 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ff09d8c-69de-4c11-8e94-90fce8f42387-logs\") pod \"2ff09d8c-69de-4c11-8e94-90fce8f42387\" (UID: \"2ff09d8c-69de-4c11-8e94-90fce8f42387\") " Feb 28 13:39:36 crc kubenswrapper[4897]: I0228 13:39:36.967596 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ff09d8c-69de-4c11-8e94-90fce8f42387-config-data\") pod \"2ff09d8c-69de-4c11-8e94-90fce8f42387\" (UID: \"2ff09d8c-69de-4c11-8e94-90fce8f42387\") " Feb 28 13:39:36 crc kubenswrapper[4897]: I0228 13:39:36.967719 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ff09d8c-69de-4c11-8e94-90fce8f42387-combined-ca-bundle\") pod \"2ff09d8c-69de-4c11-8e94-90fce8f42387\" (UID: \"2ff09d8c-69de-4c11-8e94-90fce8f42387\") " Feb 28 13:39:36 crc kubenswrapper[4897]: I0228 13:39:36.969555 4897 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ff09d8c-69de-4c11-8e94-90fce8f42387-logs" (OuterVolumeSpecName: "logs") pod "2ff09d8c-69de-4c11-8e94-90fce8f42387" (UID: "2ff09d8c-69de-4c11-8e94-90fce8f42387"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:39:36 crc kubenswrapper[4897]: I0228 13:39:36.977492 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ff09d8c-69de-4c11-8e94-90fce8f42387-kube-api-access-trtv2" (OuterVolumeSpecName: "kube-api-access-trtv2") pod "2ff09d8c-69de-4c11-8e94-90fce8f42387" (UID: "2ff09d8c-69de-4c11-8e94-90fce8f42387"). InnerVolumeSpecName "kube-api-access-trtv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:39:36 crc kubenswrapper[4897]: I0228 13:39:36.997641 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ff09d8c-69de-4c11-8e94-90fce8f42387-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ff09d8c-69de-4c11-8e94-90fce8f42387" (UID: "2ff09d8c-69de-4c11-8e94-90fce8f42387"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.019562 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ff09d8c-69de-4c11-8e94-90fce8f42387-config-data" (OuterVolumeSpecName: "config-data") pod "2ff09d8c-69de-4c11-8e94-90fce8f42387" (UID: "2ff09d8c-69de-4c11-8e94-90fce8f42387"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.070335 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trtv2\" (UniqueName: \"kubernetes.io/projected/2ff09d8c-69de-4c11-8e94-90fce8f42387-kube-api-access-trtv2\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.070362 4897 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ff09d8c-69de-4c11-8e94-90fce8f42387-logs\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.070371 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ff09d8c-69de-4c11-8e94-90fce8f42387-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.070382 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ff09d8c-69de-4c11-8e94-90fce8f42387-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:37 crc kubenswrapper[4897]: E0228 13:39:37.457693 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.570533 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.617343 4897 generic.go:334] "Generic (PLEG): container finished" podID="2b88f822-8f2a-473a-b388-b144a37ba4f0" containerID="aa381f8a76b988b4c019f81f1a661a9ce66e1b95740f7ff34461cde5ce3a56ee" exitCode=0 Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.617399 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.617432 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"2b88f822-8f2a-473a-b388-b144a37ba4f0","Type":"ContainerDied","Data":"aa381f8a76b988b4c019f81f1a661a9ce66e1b95740f7ff34461cde5ce3a56ee"} Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.617484 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"2b88f822-8f2a-473a-b388-b144a37ba4f0","Type":"ContainerDied","Data":"3d6214c4385cf6bd8770f8f0a8389c3a3516af10f5656ab02edaaa99dba58c77"} Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.617503 4897 scope.go:117] "RemoveContainer" containerID="aa381f8a76b988b4c019f81f1a661a9ce66e1b95740f7ff34461cde5ce3a56ee" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.620505 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5f79a822-c955-4cfa-a75d-fd1784834f99","Type":"ContainerStarted","Data":"5d834faf0a2a964bd1a733d6a451c5f5cf501ca1580df462e49e368f85e84643"} Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.620536 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5f79a822-c955-4cfa-a75d-fd1784834f99","Type":"ContainerStarted","Data":"7402ed8cd9f105a778afcec0a107c1f88b1868bc72a3fd1276403b2e93f5e10a"} Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.622685 4897 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"2ff09d8c-69de-4c11-8e94-90fce8f42387","Type":"ContainerDied","Data":"81a1812b7f7a5859516c7569aa842a1e401880fece7cec8b5a01cacb47701c80"} Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.622730 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.643292 4897 scope.go:117] "RemoveContainer" containerID="02bf3de735905cac84484f7527ca357452ce3c0a3a6cd83376b0ef258ed0c530" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.670805 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.682087 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-applier-0"] Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.686795 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b88f822-8f2a-473a-b388-b144a37ba4f0-config-data\") pod \"2b88f822-8f2a-473a-b388-b144a37ba4f0\" (UID: \"2b88f822-8f2a-473a-b388-b144a37ba4f0\") " Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.686968 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkp4h\" (UniqueName: \"kubernetes.io/projected/2b88f822-8f2a-473a-b388-b144a37ba4f0-kube-api-access-jkp4h\") pod \"2b88f822-8f2a-473a-b388-b144a37ba4f0\" (UID: \"2b88f822-8f2a-473a-b388-b144a37ba4f0\") " Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.687006 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b88f822-8f2a-473a-b388-b144a37ba4f0-combined-ca-bundle\") pod \"2b88f822-8f2a-473a-b388-b144a37ba4f0\" (UID: \"2b88f822-8f2a-473a-b388-b144a37ba4f0\") " Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 
13:39:37.687057 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b88f822-8f2a-473a-b388-b144a37ba4f0-logs\") pod \"2b88f822-8f2a-473a-b388-b144a37ba4f0\" (UID: \"2b88f822-8f2a-473a-b388-b144a37ba4f0\") " Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.690116 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b88f822-8f2a-473a-b388-b144a37ba4f0-logs" (OuterVolumeSpecName: "logs") pod "2b88f822-8f2a-473a-b388-b144a37ba4f0" (UID: "2b88f822-8f2a-473a-b388-b144a37ba4f0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.691289 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b88f822-8f2a-473a-b388-b144a37ba4f0-kube-api-access-jkp4h" (OuterVolumeSpecName: "kube-api-access-jkp4h") pod "2b88f822-8f2a-473a-b388-b144a37ba4f0" (UID: "2b88f822-8f2a-473a-b388-b144a37ba4f0"). InnerVolumeSpecName "kube-api-access-jkp4h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.695200 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Feb 28 13:39:37 crc kubenswrapper[4897]: E0228 13:39:37.695691 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b88f822-8f2a-473a-b388-b144a37ba4f0" containerName="watcher-decision-engine" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.695704 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b88f822-8f2a-473a-b388-b144a37ba4f0" containerName="watcher-decision-engine" Feb 28 13:39:37 crc kubenswrapper[4897]: E0228 13:39:37.695735 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b88f822-8f2a-473a-b388-b144a37ba4f0" containerName="watcher-decision-engine" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.695741 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b88f822-8f2a-473a-b388-b144a37ba4f0" containerName="watcher-decision-engine" Feb 28 13:39:37 crc kubenswrapper[4897]: E0228 13:39:37.695752 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b88f822-8f2a-473a-b388-b144a37ba4f0" containerName="watcher-decision-engine" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.695758 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b88f822-8f2a-473a-b388-b144a37ba4f0" containerName="watcher-decision-engine" Feb 28 13:39:37 crc kubenswrapper[4897]: E0228 13:39:37.695788 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ff09d8c-69de-4c11-8e94-90fce8f42387" containerName="watcher-applier" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.695794 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ff09d8c-69de-4c11-8e94-90fce8f42387" containerName="watcher-applier" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.696062 4897 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2ff09d8c-69de-4c11-8e94-90fce8f42387" containerName="watcher-applier" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.696075 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b88f822-8f2a-473a-b388-b144a37ba4f0" containerName="watcher-decision-engine" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.696090 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b88f822-8f2a-473a-b388-b144a37ba4f0" containerName="watcher-decision-engine" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.696101 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b88f822-8f2a-473a-b388-b144a37ba4f0" containerName="watcher-decision-engine" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.696109 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b88f822-8f2a-473a-b388-b144a37ba4f0" containerName="watcher-decision-engine" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.696815 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.701665 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.707225 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.725960 4897 scope.go:117] "RemoveContainer" containerID="aa381f8a76b988b4c019f81f1a661a9ce66e1b95740f7ff34461cde5ce3a56ee" Feb 28 13:39:37 crc kubenswrapper[4897]: E0228 13:39:37.727326 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa381f8a76b988b4c019f81f1a661a9ce66e1b95740f7ff34461cde5ce3a56ee\": container with ID starting with aa381f8a76b988b4c019f81f1a661a9ce66e1b95740f7ff34461cde5ce3a56ee not found: ID does not exist" containerID="aa381f8a76b988b4c019f81f1a661a9ce66e1b95740f7ff34461cde5ce3a56ee" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.727368 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa381f8a76b988b4c019f81f1a661a9ce66e1b95740f7ff34461cde5ce3a56ee"} err="failed to get container status \"aa381f8a76b988b4c019f81f1a661a9ce66e1b95740f7ff34461cde5ce3a56ee\": rpc error: code = NotFound desc = could not find container \"aa381f8a76b988b4c019f81f1a661a9ce66e1b95740f7ff34461cde5ce3a56ee\": container with ID starting with aa381f8a76b988b4c019f81f1a661a9ce66e1b95740f7ff34461cde5ce3a56ee not found: ID does not exist" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.727393 4897 scope.go:117] "RemoveContainer" containerID="02bf3de735905cac84484f7527ca357452ce3c0a3a6cd83376b0ef258ed0c530" Feb 28 13:39:37 crc kubenswrapper[4897]: E0228 13:39:37.728118 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"02bf3de735905cac84484f7527ca357452ce3c0a3a6cd83376b0ef258ed0c530\": container with ID starting with 02bf3de735905cac84484f7527ca357452ce3c0a3a6cd83376b0ef258ed0c530 not found: ID does not exist" containerID="02bf3de735905cac84484f7527ca357452ce3c0a3a6cd83376b0ef258ed0c530" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.728137 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02bf3de735905cac84484f7527ca357452ce3c0a3a6cd83376b0ef258ed0c530"} err="failed to get container status \"02bf3de735905cac84484f7527ca357452ce3c0a3a6cd83376b0ef258ed0c530\": rpc error: code = NotFound desc = could not find container \"02bf3de735905cac84484f7527ca357452ce3c0a3a6cd83376b0ef258ed0c530\": container with ID starting with 02bf3de735905cac84484f7527ca357452ce3c0a3a6cd83376b0ef258ed0c530 not found: ID does not exist" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.728151 4897 scope.go:117] "RemoveContainer" containerID="7904a3f40492adfac8f92568a394efa044ff0272bcfa724e20ae0aa6404e1333" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.756053 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b88f822-8f2a-473a-b388-b144a37ba4f0-config-data" (OuterVolumeSpecName: "config-data") pod "2b88f822-8f2a-473a-b388-b144a37ba4f0" (UID: "2b88f822-8f2a-473a-b388-b144a37ba4f0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.756138 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b88f822-8f2a-473a-b388-b144a37ba4f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b88f822-8f2a-473a-b388-b144a37ba4f0" (UID: "2b88f822-8f2a-473a-b388-b144a37ba4f0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.789290 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f2ebd5f-fa7e-4ca3-9bd9-4b54c05f8060-logs\") pod \"watcher-applier-0\" (UID: \"9f2ebd5f-fa7e-4ca3-9bd9-4b54c05f8060\") " pod="openstack/watcher-applier-0" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.789363 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f2ebd5f-fa7e-4ca3-9bd9-4b54c05f8060-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"9f2ebd5f-fa7e-4ca3-9bd9-4b54c05f8060\") " pod="openstack/watcher-applier-0" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.789402 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f2ebd5f-fa7e-4ca3-9bd9-4b54c05f8060-config-data\") pod \"watcher-applier-0\" (UID: \"9f2ebd5f-fa7e-4ca3-9bd9-4b54c05f8060\") " pod="openstack/watcher-applier-0" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.789448 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc2tw\" (UniqueName: \"kubernetes.io/projected/9f2ebd5f-fa7e-4ca3-9bd9-4b54c05f8060-kube-api-access-lc2tw\") pod \"watcher-applier-0\" (UID: \"9f2ebd5f-fa7e-4ca3-9bd9-4b54c05f8060\") " pod="openstack/watcher-applier-0" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.789700 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b88f822-8f2a-473a-b388-b144a37ba4f0-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.789742 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkp4h\" 
(UniqueName: \"kubernetes.io/projected/2b88f822-8f2a-473a-b388-b144a37ba4f0-kube-api-access-jkp4h\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.789754 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b88f822-8f2a-473a-b388-b144a37ba4f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.789762 4897 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b88f822-8f2a-473a-b388-b144a37ba4f0-logs\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.891672 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc2tw\" (UniqueName: \"kubernetes.io/projected/9f2ebd5f-fa7e-4ca3-9bd9-4b54c05f8060-kube-api-access-lc2tw\") pod \"watcher-applier-0\" (UID: \"9f2ebd5f-fa7e-4ca3-9bd9-4b54c05f8060\") " pod="openstack/watcher-applier-0" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.891827 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f2ebd5f-fa7e-4ca3-9bd9-4b54c05f8060-logs\") pod \"watcher-applier-0\" (UID: \"9f2ebd5f-fa7e-4ca3-9bd9-4b54c05f8060\") " pod="openstack/watcher-applier-0" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.891879 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f2ebd5f-fa7e-4ca3-9bd9-4b54c05f8060-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"9f2ebd5f-fa7e-4ca3-9bd9-4b54c05f8060\") " pod="openstack/watcher-applier-0" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.891926 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9f2ebd5f-fa7e-4ca3-9bd9-4b54c05f8060-config-data\") pod \"watcher-applier-0\" (UID: \"9f2ebd5f-fa7e-4ca3-9bd9-4b54c05f8060\") " pod="openstack/watcher-applier-0" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.892721 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f2ebd5f-fa7e-4ca3-9bd9-4b54c05f8060-logs\") pod \"watcher-applier-0\" (UID: \"9f2ebd5f-fa7e-4ca3-9bd9-4b54c05f8060\") " pod="openstack/watcher-applier-0" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.896995 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f2ebd5f-fa7e-4ca3-9bd9-4b54c05f8060-config-data\") pod \"watcher-applier-0\" (UID: \"9f2ebd5f-fa7e-4ca3-9bd9-4b54c05f8060\") " pod="openstack/watcher-applier-0" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.898189 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f2ebd5f-fa7e-4ca3-9bd9-4b54c05f8060-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"9f2ebd5f-fa7e-4ca3-9bd9-4b54c05f8060\") " pod="openstack/watcher-applier-0" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.909842 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc2tw\" (UniqueName: \"kubernetes.io/projected/9f2ebd5f-fa7e-4ca3-9bd9-4b54c05f8060-kube-api-access-lc2tw\") pod \"watcher-applier-0\" (UID: \"9f2ebd5f-fa7e-4ca3-9bd9-4b54c05f8060\") " pod="openstack/watcher-applier-0" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.958574 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.973302 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.989562 
4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 28 13:39:37 crc kubenswrapper[4897]: E0228 13:39:37.989993 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b88f822-8f2a-473a-b388-b144a37ba4f0" containerName="watcher-decision-engine" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.990009 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b88f822-8f2a-473a-b388-b144a37ba4f0" containerName="watcher-decision-engine" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.990895 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.993716 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 28 13:39:37 crc kubenswrapper[4897]: I0228 13:39:37.997656 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 28 13:39:38 crc kubenswrapper[4897]: I0228 13:39:38.090278 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 28 13:39:38 crc kubenswrapper[4897]: I0228 13:39:38.094796 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87t6m\" (UniqueName: \"kubernetes.io/projected/f645316a-2073-4db9-8ff9-a0af2afc7104-kube-api-access-87t6m\") pod \"watcher-decision-engine-0\" (UID: \"f645316a-2073-4db9-8ff9-a0af2afc7104\") " pod="openstack/watcher-decision-engine-0" Feb 28 13:39:38 crc kubenswrapper[4897]: I0228 13:39:38.094860 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f645316a-2073-4db9-8ff9-a0af2afc7104-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"f645316a-2073-4db9-8ff9-a0af2afc7104\") " pod="openstack/watcher-decision-engine-0" Feb 28 13:39:38 crc kubenswrapper[4897]: I0228 13:39:38.094900 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f645316a-2073-4db9-8ff9-a0af2afc7104-config-data\") pod \"watcher-decision-engine-0\" (UID: \"f645316a-2073-4db9-8ff9-a0af2afc7104\") " pod="openstack/watcher-decision-engine-0" Feb 28 13:39:38 crc kubenswrapper[4897]: I0228 13:39:38.094934 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f645316a-2073-4db9-8ff9-a0af2afc7104-logs\") pod \"watcher-decision-engine-0\" (UID: \"f645316a-2073-4db9-8ff9-a0af2afc7104\") " pod="openstack/watcher-decision-engine-0" Feb 28 13:39:38 crc kubenswrapper[4897]: I0228 13:39:38.197797 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f645316a-2073-4db9-8ff9-a0af2afc7104-logs\") pod \"watcher-decision-engine-0\" (UID: \"f645316a-2073-4db9-8ff9-a0af2afc7104\") " 
pod="openstack/watcher-decision-engine-0" Feb 28 13:39:38 crc kubenswrapper[4897]: I0228 13:39:38.198143 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87t6m\" (UniqueName: \"kubernetes.io/projected/f645316a-2073-4db9-8ff9-a0af2afc7104-kube-api-access-87t6m\") pod \"watcher-decision-engine-0\" (UID: \"f645316a-2073-4db9-8ff9-a0af2afc7104\") " pod="openstack/watcher-decision-engine-0" Feb 28 13:39:38 crc kubenswrapper[4897]: I0228 13:39:38.198241 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f645316a-2073-4db9-8ff9-a0af2afc7104-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"f645316a-2073-4db9-8ff9-a0af2afc7104\") " pod="openstack/watcher-decision-engine-0" Feb 28 13:39:38 crc kubenswrapper[4897]: I0228 13:39:38.198338 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f645316a-2073-4db9-8ff9-a0af2afc7104-config-data\") pod \"watcher-decision-engine-0\" (UID: \"f645316a-2073-4db9-8ff9-a0af2afc7104\") " pod="openstack/watcher-decision-engine-0" Feb 28 13:39:38 crc kubenswrapper[4897]: I0228 13:39:38.198610 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f645316a-2073-4db9-8ff9-a0af2afc7104-logs\") pod \"watcher-decision-engine-0\" (UID: \"f645316a-2073-4db9-8ff9-a0af2afc7104\") " pod="openstack/watcher-decision-engine-0" Feb 28 13:39:38 crc kubenswrapper[4897]: I0228 13:39:38.203848 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f645316a-2073-4db9-8ff9-a0af2afc7104-config-data\") pod \"watcher-decision-engine-0\" (UID: \"f645316a-2073-4db9-8ff9-a0af2afc7104\") " pod="openstack/watcher-decision-engine-0" Feb 28 13:39:38 crc kubenswrapper[4897]: I0228 13:39:38.204146 4897 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f645316a-2073-4db9-8ff9-a0af2afc7104-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"f645316a-2073-4db9-8ff9-a0af2afc7104\") " pod="openstack/watcher-decision-engine-0" Feb 28 13:39:38 crc kubenswrapper[4897]: I0228 13:39:38.220949 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87t6m\" (UniqueName: \"kubernetes.io/projected/f645316a-2073-4db9-8ff9-a0af2afc7104-kube-api-access-87t6m\") pod \"watcher-decision-engine-0\" (UID: \"f645316a-2073-4db9-8ff9-a0af2afc7104\") " pod="openstack/watcher-decision-engine-0" Feb 28 13:39:38 crc kubenswrapper[4897]: I0228 13:39:38.347473 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 28 13:39:38 crc kubenswrapper[4897]: I0228 13:39:38.474616 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b88f822-8f2a-473a-b388-b144a37ba4f0" path="/var/lib/kubelet/pods/2b88f822-8f2a-473a-b388-b144a37ba4f0/volumes" Feb 28 13:39:38 crc kubenswrapper[4897]: I0228 13:39:38.475471 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ff09d8c-69de-4c11-8e94-90fce8f42387" path="/var/lib/kubelet/pods/2ff09d8c-69de-4c11-8e94-90fce8f42387/volumes" Feb 28 13:39:38 crc kubenswrapper[4897]: I0228 13:39:38.570648 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 28 13:39:38 crc kubenswrapper[4897]: W0228 13:39:38.574798 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f2ebd5f_fa7e_4ca3_9bd9_4b54c05f8060.slice/crio-f7ac3258da63df168997aa3ef83205fc5cb9ba183a66ae14661038b6f1347478 WatchSource:0}: Error finding container f7ac3258da63df168997aa3ef83205fc5cb9ba183a66ae14661038b6f1347478: Status 404 returned error can't find the 
container with id f7ac3258da63df168997aa3ef83205fc5cb9ba183a66ae14661038b6f1347478 Feb 28 13:39:38 crc kubenswrapper[4897]: I0228 13:39:38.659939 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5f79a822-c955-4cfa-a75d-fd1784834f99","Type":"ContainerStarted","Data":"02ca35ac78e6dbd22eeea8d41400267b89528481e692758ebd8312a4bfc76e9e"} Feb 28 13:39:38 crc kubenswrapper[4897]: I0228 13:39:38.667197 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"9f2ebd5f-fa7e-4ca3-9bd9-4b54c05f8060","Type":"ContainerStarted","Data":"f7ac3258da63df168997aa3ef83205fc5cb9ba183a66ae14661038b6f1347478"} Feb 28 13:39:38 crc kubenswrapper[4897]: I0228 13:39:38.674848 4897 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 28 13:39:38 crc kubenswrapper[4897]: I0228 13:39:38.849342 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 28 13:39:39 crc kubenswrapper[4897]: I0228 13:39:39.235786 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 28 13:39:39 crc kubenswrapper[4897]: I0228 13:39:39.719518 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"f645316a-2073-4db9-8ff9-a0af2afc7104","Type":"ContainerStarted","Data":"38869235cd57b69ff6ba0b041916342de0e6443d96b2341a79ac08dd2c453fcc"} Feb 28 13:39:39 crc kubenswrapper[4897]: I0228 13:39:39.719826 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"f645316a-2073-4db9-8ff9-a0af2afc7104","Type":"ContainerStarted","Data":"f2715d0948bf55e56abfae968054d9b0a3d5f30af0b9cea5e87d0a6011fd7863"} Feb 28 13:39:39 crc kubenswrapper[4897]: I0228 13:39:39.736209 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" 
event={"ID":"9f2ebd5f-fa7e-4ca3-9bd9-4b54c05f8060","Type":"ContainerStarted","Data":"e122f9cfb86496242467552e72b593bf0b76c41e3d3936cb080cf19de85e6421"} Feb 28 13:39:39 crc kubenswrapper[4897]: I0228 13:39:39.749102 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.749088763 podStartE2EDuration="2.749088763s" podCreationTimestamp="2026-02-28 13:39:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:39:39.744757811 +0000 UTC m=+1393.987078468" watchObservedRunningTime="2026-02-28 13:39:39.749088763 +0000 UTC m=+1393.991409420" Feb 28 13:39:39 crc kubenswrapper[4897]: I0228 13:39:39.763442 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=2.763425251 podStartE2EDuration="2.763425251s" podCreationTimestamp="2026-02-28 13:39:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:39:39.759693815 +0000 UTC m=+1394.002014472" watchObservedRunningTime="2026-02-28 13:39:39.763425251 +0000 UTC m=+1394.005745908" Feb 28 13:39:40 crc kubenswrapper[4897]: E0228 13:39:40.042855 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)" image="registry.redhat.io/ubi9/httpd-24:latest" Feb 28 13:39:40 crc kubenswrapper[4897]: E0228 13:39:40.043052 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w2smc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(5f79a822-c955-4cfa-a75d-fd1784834f99): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 13:39:40 crc kubenswrapper[4897]: E0228 13:39:40.044139 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)\"" pod="openstack/ceilometer-0" podUID="5f79a822-c955-4cfa-a75d-fd1784834f99" Feb 28 13:39:40 crc kubenswrapper[4897]: I0228 13:39:40.211903 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 28 13:39:40 crc kubenswrapper[4897]: I0228 13:39:40.744008 4897 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/ceilometer-0" podUID="5f79a822-c955-4cfa-a75d-fd1784834f99" containerName="ceilometer-central-agent" containerID="cri-o://7402ed8cd9f105a778afcec0a107c1f88b1868bc72a3fd1276403b2e93f5e10a" gracePeriod=30 Feb 28 13:39:40 crc kubenswrapper[4897]: I0228 13:39:40.746076 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5f79a822-c955-4cfa-a75d-fd1784834f99" containerName="sg-core" containerID="cri-o://02ca35ac78e6dbd22eeea8d41400267b89528481e692758ebd8312a4bfc76e9e" gracePeriod=30 Feb 28 13:39:40 crc kubenswrapper[4897]: I0228 13:39:40.746148 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5f79a822-c955-4cfa-a75d-fd1784834f99" containerName="ceilometer-notification-agent" containerID="cri-o://5d834faf0a2a964bd1a733d6a451c5f5cf501ca1580df462e49e368f85e84643" gracePeriod=30 Feb 28 13:39:41 crc kubenswrapper[4897]: I0228 13:39:41.755730 4897 generic.go:334] "Generic (PLEG): container finished" podID="5f79a822-c955-4cfa-a75d-fd1784834f99" containerID="02ca35ac78e6dbd22eeea8d41400267b89528481e692758ebd8312a4bfc76e9e" exitCode=2 Feb 28 13:39:41 crc kubenswrapper[4897]: I0228 13:39:41.756060 4897 generic.go:334] "Generic (PLEG): container finished" podID="5f79a822-c955-4cfa-a75d-fd1784834f99" containerID="5d834faf0a2a964bd1a733d6a451c5f5cf501ca1580df462e49e368f85e84643" exitCode=0 Feb 28 13:39:41 crc kubenswrapper[4897]: I0228 13:39:41.755822 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5f79a822-c955-4cfa-a75d-fd1784834f99","Type":"ContainerDied","Data":"02ca35ac78e6dbd22eeea8d41400267b89528481e692758ebd8312a4bfc76e9e"} Feb 28 13:39:41 crc kubenswrapper[4897]: I0228 13:39:41.756136 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"5f79a822-c955-4cfa-a75d-fd1784834f99","Type":"ContainerDied","Data":"5d834faf0a2a964bd1a733d6a451c5f5cf501ca1580df462e49e368f85e84643"} Feb 28 13:39:43 crc kubenswrapper[4897]: I0228 13:39:43.090569 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 28 13:39:45 crc kubenswrapper[4897]: I0228 13:39:45.212047 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Feb 28 13:39:45 crc kubenswrapper[4897]: I0228 13:39:45.243094 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Feb 28 13:39:45 crc kubenswrapper[4897]: I0228 13:39:45.805139 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 28 13:39:48 crc kubenswrapper[4897]: I0228 13:39:48.091770 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 28 13:39:48 crc kubenswrapper[4897]: I0228 13:39:48.127698 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Feb 28 13:39:48 crc kubenswrapper[4897]: I0228 13:39:48.348076 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 28 13:39:48 crc kubenswrapper[4897]: I0228 13:39:48.375883 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 28 13:39:48 crc kubenswrapper[4897]: I0228 13:39:48.822954 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 28 13:39:48 crc kubenswrapper[4897]: I0228 13:39:48.862007 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 28 13:39:48 crc kubenswrapper[4897]: I0228 13:39:48.880909 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/watcher-applier-0" Feb 28 13:39:49 crc kubenswrapper[4897]: E0228 13:39:49.458442 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" Feb 28 13:39:49 crc kubenswrapper[4897]: I0228 13:39:49.834233 4897 generic.go:334] "Generic (PLEG): container finished" podID="5f79a822-c955-4cfa-a75d-fd1784834f99" containerID="7402ed8cd9f105a778afcec0a107c1f88b1868bc72a3fd1276403b2e93f5e10a" exitCode=0 Feb 28 13:39:49 crc kubenswrapper[4897]: I0228 13:39:49.834289 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5f79a822-c955-4cfa-a75d-fd1784834f99","Type":"ContainerDied","Data":"7402ed8cd9f105a778afcec0a107c1f88b1868bc72a3fd1276403b2e93f5e10a"} Feb 28 13:39:49 crc kubenswrapper[4897]: I0228 13:39:49.834670 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5f79a822-c955-4cfa-a75d-fd1784834f99","Type":"ContainerDied","Data":"d7167666835c688c12abc124ea63027322b5eee04ec512954b7c6bcaed18856c"} Feb 28 13:39:49 crc kubenswrapper[4897]: I0228 13:39:49.834695 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7167666835c688c12abc124ea63027322b5eee04ec512954b7c6bcaed18856c" Feb 28 13:39:49 crc kubenswrapper[4897]: I0228 13:39:49.882749 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 13:39:49 crc kubenswrapper[4897]: I0228 13:39:49.950795 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f79a822-c955-4cfa-a75d-fd1784834f99-config-data\") pod \"5f79a822-c955-4cfa-a75d-fd1784834f99\" (UID: \"5f79a822-c955-4cfa-a75d-fd1784834f99\") " Feb 28 13:39:49 crc kubenswrapper[4897]: I0228 13:39:49.950852 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f79a822-c955-4cfa-a75d-fd1784834f99-log-httpd\") pod \"5f79a822-c955-4cfa-a75d-fd1784834f99\" (UID: \"5f79a822-c955-4cfa-a75d-fd1784834f99\") " Feb 28 13:39:49 crc kubenswrapper[4897]: I0228 13:39:49.950936 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f79a822-c955-4cfa-a75d-fd1784834f99-combined-ca-bundle\") pod \"5f79a822-c955-4cfa-a75d-fd1784834f99\" (UID: \"5f79a822-c955-4cfa-a75d-fd1784834f99\") " Feb 28 13:39:49 crc kubenswrapper[4897]: I0228 13:39:49.950982 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f79a822-c955-4cfa-a75d-fd1784834f99-run-httpd\") pod \"5f79a822-c955-4cfa-a75d-fd1784834f99\" (UID: \"5f79a822-c955-4cfa-a75d-fd1784834f99\") " Feb 28 13:39:49 crc kubenswrapper[4897]: I0228 13:39:49.951045 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2smc\" (UniqueName: \"kubernetes.io/projected/5f79a822-c955-4cfa-a75d-fd1784834f99-kube-api-access-w2smc\") pod \"5f79a822-c955-4cfa-a75d-fd1784834f99\" (UID: \"5f79a822-c955-4cfa-a75d-fd1784834f99\") " Feb 28 13:39:49 crc kubenswrapper[4897]: I0228 13:39:49.951068 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5f79a822-c955-4cfa-a75d-fd1784834f99-scripts\") pod \"5f79a822-c955-4cfa-a75d-fd1784834f99\" (UID: \"5f79a822-c955-4cfa-a75d-fd1784834f99\") " Feb 28 13:39:49 crc kubenswrapper[4897]: I0228 13:39:49.951497 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f79a822-c955-4cfa-a75d-fd1784834f99-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5f79a822-c955-4cfa-a75d-fd1784834f99" (UID: "5f79a822-c955-4cfa-a75d-fd1784834f99"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:39:49 crc kubenswrapper[4897]: I0228 13:39:49.951541 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f79a822-c955-4cfa-a75d-fd1784834f99-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5f79a822-c955-4cfa-a75d-fd1784834f99" (UID: "5f79a822-c955-4cfa-a75d-fd1784834f99"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:39:49 crc kubenswrapper[4897]: I0228 13:39:49.951139 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5f79a822-c955-4cfa-a75d-fd1784834f99-sg-core-conf-yaml\") pod \"5f79a822-c955-4cfa-a75d-fd1784834f99\" (UID: \"5f79a822-c955-4cfa-a75d-fd1784834f99\") " Feb 28 13:39:49 crc kubenswrapper[4897]: I0228 13:39:49.952425 4897 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f79a822-c955-4cfa-a75d-fd1784834f99-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:49 crc kubenswrapper[4897]: I0228 13:39:49.952442 4897 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f79a822-c955-4cfa-a75d-fd1784834f99-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:49 crc kubenswrapper[4897]: I0228 13:39:49.959576 4897 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f79a822-c955-4cfa-a75d-fd1784834f99-scripts" (OuterVolumeSpecName: "scripts") pod "5f79a822-c955-4cfa-a75d-fd1784834f99" (UID: "5f79a822-c955-4cfa-a75d-fd1784834f99"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:39:49 crc kubenswrapper[4897]: I0228 13:39:49.959731 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f79a822-c955-4cfa-a75d-fd1784834f99-kube-api-access-w2smc" (OuterVolumeSpecName: "kube-api-access-w2smc") pod "5f79a822-c955-4cfa-a75d-fd1784834f99" (UID: "5f79a822-c955-4cfa-a75d-fd1784834f99"). InnerVolumeSpecName "kube-api-access-w2smc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:39:49 crc kubenswrapper[4897]: I0228 13:39:49.988419 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f79a822-c955-4cfa-a75d-fd1784834f99-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5f79a822-c955-4cfa-a75d-fd1784834f99" (UID: "5f79a822-c955-4cfa-a75d-fd1784834f99"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:39:50 crc kubenswrapper[4897]: I0228 13:39:50.014180 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f79a822-c955-4cfa-a75d-fd1784834f99-config-data" (OuterVolumeSpecName: "config-data") pod "5f79a822-c955-4cfa-a75d-fd1784834f99" (UID: "5f79a822-c955-4cfa-a75d-fd1784834f99"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:39:50 crc kubenswrapper[4897]: I0228 13:39:50.015190 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f79a822-c955-4cfa-a75d-fd1784834f99-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f79a822-c955-4cfa-a75d-fd1784834f99" (UID: "5f79a822-c955-4cfa-a75d-fd1784834f99"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:39:50 crc kubenswrapper[4897]: I0228 13:39:50.055517 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f79a822-c955-4cfa-a75d-fd1784834f99-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:50 crc kubenswrapper[4897]: I0228 13:39:50.055719 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f79a822-c955-4cfa-a75d-fd1784834f99-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:50 crc kubenswrapper[4897]: I0228 13:39:50.055733 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2smc\" (UniqueName: \"kubernetes.io/projected/5f79a822-c955-4cfa-a75d-fd1784834f99-kube-api-access-w2smc\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:50 crc kubenswrapper[4897]: I0228 13:39:50.055748 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f79a822-c955-4cfa-a75d-fd1784834f99-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:50 crc kubenswrapper[4897]: I0228 13:39:50.055759 4897 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5f79a822-c955-4cfa-a75d-fd1784834f99-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:50 crc kubenswrapper[4897]: E0228 13:39:50.460771 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:39:50 crc kubenswrapper[4897]: I0228 13:39:50.846558 4897 generic.go:334] "Generic (PLEG): container finished" podID="07b3f8ac-cae7-400a-bdf5-5768e5b74f79" containerID="e4619fbec221700042098cfae05b995c1e6b171efee5d910f73d2a991a6b2e2f" exitCode=0 Feb 28 13:39:50 crc kubenswrapper[4897]: I0228 13:39:50.846687 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-mmkk5" event={"ID":"07b3f8ac-cae7-400a-bdf5-5768e5b74f79","Type":"ContainerDied","Data":"e4619fbec221700042098cfae05b995c1e6b171efee5d910f73d2a991a6b2e2f"} Feb 28 13:39:50 crc kubenswrapper[4897]: I0228 13:39:50.846724 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 13:39:50 crc kubenswrapper[4897]: I0228 13:39:50.918440 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 13:39:50 crc kubenswrapper[4897]: I0228 13:39:50.921823 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 28 13:39:50 crc kubenswrapper[4897]: I0228 13:39:50.969326 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 28 13:39:50 crc kubenswrapper[4897]: E0228 13:39:50.969699 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f79a822-c955-4cfa-a75d-fd1784834f99" containerName="sg-core" Feb 28 13:39:50 crc kubenswrapper[4897]: I0228 13:39:50.969716 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f79a822-c955-4cfa-a75d-fd1784834f99" containerName="sg-core" Feb 28 13:39:50 crc kubenswrapper[4897]: E0228 13:39:50.969729 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f79a822-c955-4cfa-a75d-fd1784834f99" 
containerName="ceilometer-central-agent" Feb 28 13:39:50 crc kubenswrapper[4897]: I0228 13:39:50.969735 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f79a822-c955-4cfa-a75d-fd1784834f99" containerName="ceilometer-central-agent" Feb 28 13:39:50 crc kubenswrapper[4897]: E0228 13:39:50.969755 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f79a822-c955-4cfa-a75d-fd1784834f99" containerName="ceilometer-notification-agent" Feb 28 13:39:50 crc kubenswrapper[4897]: I0228 13:39:50.969762 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f79a822-c955-4cfa-a75d-fd1784834f99" containerName="ceilometer-notification-agent" Feb 28 13:39:50 crc kubenswrapper[4897]: I0228 13:39:50.969949 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f79a822-c955-4cfa-a75d-fd1784834f99" containerName="sg-core" Feb 28 13:39:50 crc kubenswrapper[4897]: I0228 13:39:50.969958 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f79a822-c955-4cfa-a75d-fd1784834f99" containerName="ceilometer-central-agent" Feb 28 13:39:50 crc kubenswrapper[4897]: I0228 13:39:50.969966 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f79a822-c955-4cfa-a75d-fd1784834f99" containerName="ceilometer-notification-agent" Feb 28 13:39:50 crc kubenswrapper[4897]: I0228 13:39:50.971989 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 13:39:50 crc kubenswrapper[4897]: I0228 13:39:50.974303 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 28 13:39:50 crc kubenswrapper[4897]: I0228 13:39:50.974594 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 28 13:39:50 crc kubenswrapper[4897]: I0228 13:39:50.998906 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 13:39:51 crc kubenswrapper[4897]: I0228 13:39:51.074193 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc-config-data\") pod \"ceilometer-0\" (UID: \"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc\") " pod="openstack/ceilometer-0" Feb 28 13:39:51 crc kubenswrapper[4897]: I0228 13:39:51.074295 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc-run-httpd\") pod \"ceilometer-0\" (UID: \"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc\") " pod="openstack/ceilometer-0" Feb 28 13:39:51 crc kubenswrapper[4897]: I0228 13:39:51.074368 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc\") " pod="openstack/ceilometer-0" Feb 28 13:39:51 crc kubenswrapper[4897]: I0228 13:39:51.074460 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc-scripts\") pod \"ceilometer-0\" (UID: \"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc\") " 
pod="openstack/ceilometer-0" Feb 28 13:39:51 crc kubenswrapper[4897]: I0228 13:39:51.074491 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc\") " pod="openstack/ceilometer-0" Feb 28 13:39:51 crc kubenswrapper[4897]: I0228 13:39:51.074539 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qr7w\" (UniqueName: \"kubernetes.io/projected/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc-kube-api-access-9qr7w\") pod \"ceilometer-0\" (UID: \"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc\") " pod="openstack/ceilometer-0" Feb 28 13:39:51 crc kubenswrapper[4897]: I0228 13:39:51.074853 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc-log-httpd\") pod \"ceilometer-0\" (UID: \"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc\") " pod="openstack/ceilometer-0" Feb 28 13:39:51 crc kubenswrapper[4897]: I0228 13:39:51.176811 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc-log-httpd\") pod \"ceilometer-0\" (UID: \"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc\") " pod="openstack/ceilometer-0" Feb 28 13:39:51 crc kubenswrapper[4897]: I0228 13:39:51.176922 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc-config-data\") pod \"ceilometer-0\" (UID: \"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc\") " pod="openstack/ceilometer-0" Feb 28 13:39:51 crc kubenswrapper[4897]: I0228 13:39:51.176960 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc-run-httpd\") pod \"ceilometer-0\" (UID: \"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc\") " pod="openstack/ceilometer-0" Feb 28 13:39:51 crc kubenswrapper[4897]: I0228 13:39:51.176982 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc\") " pod="openstack/ceilometer-0" Feb 28 13:39:51 crc kubenswrapper[4897]: I0228 13:39:51.177037 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc-scripts\") pod \"ceilometer-0\" (UID: \"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc\") " pod="openstack/ceilometer-0" Feb 28 13:39:51 crc kubenswrapper[4897]: I0228 13:39:51.177055 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc\") " pod="openstack/ceilometer-0" Feb 28 13:39:51 crc kubenswrapper[4897]: I0228 13:39:51.177083 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qr7w\" (UniqueName: \"kubernetes.io/projected/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc-kube-api-access-9qr7w\") pod \"ceilometer-0\" (UID: \"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc\") " pod="openstack/ceilometer-0" Feb 28 13:39:51 crc kubenswrapper[4897]: I0228 13:39:51.177534 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc-run-httpd\") pod \"ceilometer-0\" (UID: \"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc\") " pod="openstack/ceilometer-0" Feb 28 13:39:51 
crc kubenswrapper[4897]: I0228 13:39:51.178399 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc-log-httpd\") pod \"ceilometer-0\" (UID: \"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc\") " pod="openstack/ceilometer-0" Feb 28 13:39:51 crc kubenswrapper[4897]: I0228 13:39:51.183373 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc-scripts\") pod \"ceilometer-0\" (UID: \"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc\") " pod="openstack/ceilometer-0" Feb 28 13:39:51 crc kubenswrapper[4897]: I0228 13:39:51.184221 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc-config-data\") pod \"ceilometer-0\" (UID: \"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc\") " pod="openstack/ceilometer-0" Feb 28 13:39:51 crc kubenswrapper[4897]: I0228 13:39:51.184237 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc\") " pod="openstack/ceilometer-0" Feb 28 13:39:51 crc kubenswrapper[4897]: I0228 13:39:51.203073 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc\") " pod="openstack/ceilometer-0" Feb 28 13:39:51 crc kubenswrapper[4897]: I0228 13:39:51.208057 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qr7w\" (UniqueName: \"kubernetes.io/projected/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc-kube-api-access-9qr7w\") pod \"ceilometer-0\" (UID: 
\"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc\") " pod="openstack/ceilometer-0" Feb 28 13:39:51 crc kubenswrapper[4897]: I0228 13:39:51.289800 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 13:39:51 crc kubenswrapper[4897]: W0228 13:39:51.794267 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod176f6ed0_e15f_4ee9_afd3_be29ff84d7dc.slice/crio-3908b7e0be9188ffd1cc2567eb66f477eccbbf4aa6e386052ff27f8da5b1faa5 WatchSource:0}: Error finding container 3908b7e0be9188ffd1cc2567eb66f477eccbbf4aa6e386052ff27f8da5b1faa5: Status 404 returned error can't find the container with id 3908b7e0be9188ffd1cc2567eb66f477eccbbf4aa6e386052ff27f8da5b1faa5 Feb 28 13:39:51 crc kubenswrapper[4897]: I0228 13:39:51.801545 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 13:39:51 crc kubenswrapper[4897]: I0228 13:39:51.858880 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc","Type":"ContainerStarted","Data":"3908b7e0be9188ffd1cc2567eb66f477eccbbf4aa6e386052ff27f8da5b1faa5"} Feb 28 13:39:52 crc kubenswrapper[4897]: I0228 13:39:52.200001 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-mmkk5" Feb 28 13:39:52 crc kubenswrapper[4897]: I0228 13:39:52.296403 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b3f8ac-cae7-400a-bdf5-5768e5b74f79-config-data\") pod \"07b3f8ac-cae7-400a-bdf5-5768e5b74f79\" (UID: \"07b3f8ac-cae7-400a-bdf5-5768e5b74f79\") " Feb 28 13:39:52 crc kubenswrapper[4897]: I0228 13:39:52.296947 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07b3f8ac-cae7-400a-bdf5-5768e5b74f79-scripts\") pod \"07b3f8ac-cae7-400a-bdf5-5768e5b74f79\" (UID: \"07b3f8ac-cae7-400a-bdf5-5768e5b74f79\") " Feb 28 13:39:52 crc kubenswrapper[4897]: I0228 13:39:52.297030 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gpsl\" (UniqueName: \"kubernetes.io/projected/07b3f8ac-cae7-400a-bdf5-5768e5b74f79-kube-api-access-4gpsl\") pod \"07b3f8ac-cae7-400a-bdf5-5768e5b74f79\" (UID: \"07b3f8ac-cae7-400a-bdf5-5768e5b74f79\") " Feb 28 13:39:52 crc kubenswrapper[4897]: I0228 13:39:52.297092 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b3f8ac-cae7-400a-bdf5-5768e5b74f79-combined-ca-bundle\") pod \"07b3f8ac-cae7-400a-bdf5-5768e5b74f79\" (UID: \"07b3f8ac-cae7-400a-bdf5-5768e5b74f79\") " Feb 28 13:39:52 crc kubenswrapper[4897]: I0228 13:39:52.300801 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b3f8ac-cae7-400a-bdf5-5768e5b74f79-scripts" (OuterVolumeSpecName: "scripts") pod "07b3f8ac-cae7-400a-bdf5-5768e5b74f79" (UID: "07b3f8ac-cae7-400a-bdf5-5768e5b74f79"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:39:52 crc kubenswrapper[4897]: I0228 13:39:52.301071 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07b3f8ac-cae7-400a-bdf5-5768e5b74f79-kube-api-access-4gpsl" (OuterVolumeSpecName: "kube-api-access-4gpsl") pod "07b3f8ac-cae7-400a-bdf5-5768e5b74f79" (UID: "07b3f8ac-cae7-400a-bdf5-5768e5b74f79"). InnerVolumeSpecName "kube-api-access-4gpsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:39:52 crc kubenswrapper[4897]: I0228 13:39:52.327642 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b3f8ac-cae7-400a-bdf5-5768e5b74f79-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07b3f8ac-cae7-400a-bdf5-5768e5b74f79" (UID: "07b3f8ac-cae7-400a-bdf5-5768e5b74f79"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:39:52 crc kubenswrapper[4897]: I0228 13:39:52.328713 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b3f8ac-cae7-400a-bdf5-5768e5b74f79-config-data" (OuterVolumeSpecName: "config-data") pod "07b3f8ac-cae7-400a-bdf5-5768e5b74f79" (UID: "07b3f8ac-cae7-400a-bdf5-5768e5b74f79"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:39:52 crc kubenswrapper[4897]: I0228 13:39:52.401041 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gpsl\" (UniqueName: \"kubernetes.io/projected/07b3f8ac-cae7-400a-bdf5-5768e5b74f79-kube-api-access-4gpsl\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:52 crc kubenswrapper[4897]: I0228 13:39:52.401087 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b3f8ac-cae7-400a-bdf5-5768e5b74f79-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:52 crc kubenswrapper[4897]: I0228 13:39:52.401100 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b3f8ac-cae7-400a-bdf5-5768e5b74f79-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:52 crc kubenswrapper[4897]: I0228 13:39:52.401112 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07b3f8ac-cae7-400a-bdf5-5768e5b74f79-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:39:52 crc kubenswrapper[4897]: I0228 13:39:52.478075 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f79a822-c955-4cfa-a75d-fd1784834f99" path="/var/lib/kubelet/pods/5f79a822-c955-4cfa-a75d-fd1784834f99/volumes" Feb 28 13:39:52 crc kubenswrapper[4897]: I0228 13:39:52.873982 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc","Type":"ContainerStarted","Data":"353b8cb45983b0f194a2ae89674e4e883a55391ff465fd277caa3ed9a6c1f511"} Feb 28 13:39:52 crc kubenswrapper[4897]: I0228 13:39:52.874365 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc","Type":"ContainerStarted","Data":"de25981277eb77894f93cb1ec9005d83919647fcf8a572bd0023559cacb16caf"} Feb 28 13:39:52 crc 
kubenswrapper[4897]: I0228 13:39:52.876096 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-mmkk5" event={"ID":"07b3f8ac-cae7-400a-bdf5-5768e5b74f79","Type":"ContainerDied","Data":"745ee297748db77c3a46a3194fd7e241d1422fff5f70d607c3987da4c734901c"} Feb 28 13:39:52 crc kubenswrapper[4897]: I0228 13:39:52.876123 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="745ee297748db77c3a46a3194fd7e241d1422fff5f70d607c3987da4c734901c" Feb 28 13:39:52 crc kubenswrapper[4897]: I0228 13:39:52.876211 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-mmkk5" Feb 28 13:39:53 crc kubenswrapper[4897]: I0228 13:39:53.022499 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 28 13:39:53 crc kubenswrapper[4897]: E0228 13:39:53.023009 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07b3f8ac-cae7-400a-bdf5-5768e5b74f79" containerName="nova-cell0-conductor-db-sync" Feb 28 13:39:53 crc kubenswrapper[4897]: I0228 13:39:53.023030 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="07b3f8ac-cae7-400a-bdf5-5768e5b74f79" containerName="nova-cell0-conductor-db-sync" Feb 28 13:39:53 crc kubenswrapper[4897]: I0228 13:39:53.023258 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="07b3f8ac-cae7-400a-bdf5-5768e5b74f79" containerName="nova-cell0-conductor-db-sync" Feb 28 13:39:53 crc kubenswrapper[4897]: I0228 13:39:53.024043 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 28 13:39:53 crc kubenswrapper[4897]: I0228 13:39:53.030081 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 28 13:39:53 crc kubenswrapper[4897]: I0228 13:39:53.058360 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 28 13:39:53 crc kubenswrapper[4897]: I0228 13:39:53.058720 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-jf7vb" Feb 28 13:39:53 crc kubenswrapper[4897]: I0228 13:39:53.119113 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf3d7f16-bcfc-4fa4-92d4-9b03f42375de-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"bf3d7f16-bcfc-4fa4-92d4-9b03f42375de\") " pod="openstack/nova-cell0-conductor-0" Feb 28 13:39:53 crc kubenswrapper[4897]: I0228 13:39:53.119688 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk7tp\" (UniqueName: \"kubernetes.io/projected/bf3d7f16-bcfc-4fa4-92d4-9b03f42375de-kube-api-access-tk7tp\") pod \"nova-cell0-conductor-0\" (UID: \"bf3d7f16-bcfc-4fa4-92d4-9b03f42375de\") " pod="openstack/nova-cell0-conductor-0" Feb 28 13:39:53 crc kubenswrapper[4897]: I0228 13:39:53.119890 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf3d7f16-bcfc-4fa4-92d4-9b03f42375de-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"bf3d7f16-bcfc-4fa4-92d4-9b03f42375de\") " pod="openstack/nova-cell0-conductor-0" Feb 28 13:39:53 crc kubenswrapper[4897]: I0228 13:39:53.221401 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk7tp\" (UniqueName: 
\"kubernetes.io/projected/bf3d7f16-bcfc-4fa4-92d4-9b03f42375de-kube-api-access-tk7tp\") pod \"nova-cell0-conductor-0\" (UID: \"bf3d7f16-bcfc-4fa4-92d4-9b03f42375de\") " pod="openstack/nova-cell0-conductor-0" Feb 28 13:39:53 crc kubenswrapper[4897]: I0228 13:39:53.221510 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf3d7f16-bcfc-4fa4-92d4-9b03f42375de-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"bf3d7f16-bcfc-4fa4-92d4-9b03f42375de\") " pod="openstack/nova-cell0-conductor-0" Feb 28 13:39:53 crc kubenswrapper[4897]: I0228 13:39:53.221586 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf3d7f16-bcfc-4fa4-92d4-9b03f42375de-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"bf3d7f16-bcfc-4fa4-92d4-9b03f42375de\") " pod="openstack/nova-cell0-conductor-0" Feb 28 13:39:53 crc kubenswrapper[4897]: I0228 13:39:53.233096 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf3d7f16-bcfc-4fa4-92d4-9b03f42375de-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"bf3d7f16-bcfc-4fa4-92d4-9b03f42375de\") " pod="openstack/nova-cell0-conductor-0" Feb 28 13:39:53 crc kubenswrapper[4897]: I0228 13:39:53.247811 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf3d7f16-bcfc-4fa4-92d4-9b03f42375de-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"bf3d7f16-bcfc-4fa4-92d4-9b03f42375de\") " pod="openstack/nova-cell0-conductor-0" Feb 28 13:39:53 crc kubenswrapper[4897]: I0228 13:39:53.262838 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk7tp\" (UniqueName: \"kubernetes.io/projected/bf3d7f16-bcfc-4fa4-92d4-9b03f42375de-kube-api-access-tk7tp\") pod \"nova-cell0-conductor-0\" (UID: 
\"bf3d7f16-bcfc-4fa4-92d4-9b03f42375de\") " pod="openstack/nova-cell0-conductor-0" Feb 28 13:39:53 crc kubenswrapper[4897]: I0228 13:39:53.415513 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 28 13:39:53 crc kubenswrapper[4897]: W0228 13:39:53.873033 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf3d7f16_bcfc_4fa4_92d4_9b03f42375de.slice/crio-46053fb50e35c8dde2f87ee91a96ea69a2909cbb89d2e1ecec80e88cd09e0400 WatchSource:0}: Error finding container 46053fb50e35c8dde2f87ee91a96ea69a2909cbb89d2e1ecec80e88cd09e0400: Status 404 returned error can't find the container with id 46053fb50e35c8dde2f87ee91a96ea69a2909cbb89d2e1ecec80e88cd09e0400 Feb 28 13:39:53 crc kubenswrapper[4897]: I0228 13:39:53.873122 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 28 13:39:53 crc kubenswrapper[4897]: I0228 13:39:53.893139 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc","Type":"ContainerStarted","Data":"bcff9e3c54d1f6c11e4c711628ba58a457b1c66406e024344d716db4f6de6ae3"} Feb 28 13:39:53 crc kubenswrapper[4897]: I0228 13:39:53.894746 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"bf3d7f16-bcfc-4fa4-92d4-9b03f42375de","Type":"ContainerStarted","Data":"46053fb50e35c8dde2f87ee91a96ea69a2909cbb89d2e1ecec80e88cd09e0400"} Feb 28 13:39:54 crc kubenswrapper[4897]: I0228 13:39:54.909484 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"bf3d7f16-bcfc-4fa4-92d4-9b03f42375de","Type":"ContainerStarted","Data":"baa70f57a01beabe67186e42ad083d3d8f55acd6a5bf132aba7967b61136ed24"} Feb 28 13:39:54 crc kubenswrapper[4897]: I0228 13:39:54.909873 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/nova-cell0-conductor-0" Feb 28 13:39:54 crc kubenswrapper[4897]: I0228 13:39:54.935843 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.935814306 podStartE2EDuration="2.935814306s" podCreationTimestamp="2026-02-28 13:39:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:39:54.92859178 +0000 UTC m=+1409.170912437" watchObservedRunningTime="2026-02-28 13:39:54.935814306 +0000 UTC m=+1409.178134973" Feb 28 13:39:55 crc kubenswrapper[4897]: E0228 13:39:55.370962 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)" image="registry.redhat.io/ubi9/httpd-24:latest" Feb 28 13:39:55 crc kubenswrapper[4897]: E0228 13:39:55.371160 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9qr7w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(176f6ed0-e15f-4ee9-afd3-be29ff84d7dc): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 13:39:55 crc kubenswrapper[4897]: E0228 13:39:55.372384 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)\"" pod="openstack/ceilometer-0" podUID="176f6ed0-e15f-4ee9-afd3-be29ff84d7dc" Feb 28 13:39:55 crc kubenswrapper[4897]: E0228 13:39:55.925202 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" 
pod="openstack/ceilometer-0" podUID="176f6ed0-e15f-4ee9-afd3-be29ff84d7dc" Feb 28 13:40:00 crc kubenswrapper[4897]: I0228 13:40:00.175742 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538100-v4j6s"] Feb 28 13:40:00 crc kubenswrapper[4897]: I0228 13:40:00.177579 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538100-v4j6s" Feb 28 13:40:00 crc kubenswrapper[4897]: I0228 13:40:00.180026 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 13:40:00 crc kubenswrapper[4897]: I0228 13:40:00.180385 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 13:40:00 crc kubenswrapper[4897]: I0228 13:40:00.180430 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 13:40:00 crc kubenswrapper[4897]: I0228 13:40:00.193885 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538100-v4j6s"] Feb 28 13:40:00 crc kubenswrapper[4897]: I0228 13:40:00.257982 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpqzd\" (UniqueName: \"kubernetes.io/projected/b6642318-7bfd-49f2-86e3-0fe4a7ec2709-kube-api-access-zpqzd\") pod \"auto-csr-approver-29538100-v4j6s\" (UID: \"b6642318-7bfd-49f2-86e3-0fe4a7ec2709\") " pod="openshift-infra/auto-csr-approver-29538100-v4j6s" Feb 28 13:40:00 crc kubenswrapper[4897]: I0228 13:40:00.360260 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpqzd\" (UniqueName: \"kubernetes.io/projected/b6642318-7bfd-49f2-86e3-0fe4a7ec2709-kube-api-access-zpqzd\") pod \"auto-csr-approver-29538100-v4j6s\" (UID: \"b6642318-7bfd-49f2-86e3-0fe4a7ec2709\") " pod="openshift-infra/auto-csr-approver-29538100-v4j6s" 
Feb 28 13:40:00 crc kubenswrapper[4897]: I0228 13:40:00.387943 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpqzd\" (UniqueName: \"kubernetes.io/projected/b6642318-7bfd-49f2-86e3-0fe4a7ec2709-kube-api-access-zpqzd\") pod \"auto-csr-approver-29538100-v4j6s\" (UID: \"b6642318-7bfd-49f2-86e3-0fe4a7ec2709\") " pod="openshift-infra/auto-csr-approver-29538100-v4j6s" Feb 28 13:40:00 crc kubenswrapper[4897]: E0228 13:40:00.460012 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" Feb 28 13:40:00 crc kubenswrapper[4897]: I0228 13:40:00.513213 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538100-v4j6s" Feb 28 13:40:01 crc kubenswrapper[4897]: I0228 13:40:01.063994 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538100-v4j6s"] Feb 28 13:40:01 crc kubenswrapper[4897]: I0228 13:40:01.987047 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538100-v4j6s" event={"ID":"b6642318-7bfd-49f2-86e3-0fe4a7ec2709","Type":"ContainerStarted","Data":"e5a370cf2ed739f1193a2330cb570b34e39933c9313b910dffe71d078a1a324e"} Feb 28 13:40:02 crc kubenswrapper[4897]: E0228 13:40:02.374810 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" 
image="registry.redhat.io/openshift4/ose-cli:latest" Feb 28 13:40:02 crc kubenswrapper[4897]: E0228 13:40:02.374985 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 13:40:02 crc kubenswrapper[4897]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 28 13:40:02 crc kubenswrapper[4897]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zpqzd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29538100-v4j6s_openshift-infra(b6642318-7bfd-49f2-86e3-0fe4a7ec2709): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 28 13:40:02 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 13:40:02 crc kubenswrapper[4897]: E0228 13:40:02.376275 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29538100-v4j6s" podUID="b6642318-7bfd-49f2-86e3-0fe4a7ec2709" Feb 28 13:40:02 crc kubenswrapper[4897]: E0228 13:40:02.997713 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538100-v4j6s" podUID="b6642318-7bfd-49f2-86e3-0fe4a7ec2709" Feb 28 13:40:03 crc kubenswrapper[4897]: I0228 13:40:03.465345 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.055291 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-gbcft"] Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.057103 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-gbcft" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.062618 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.063487 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.077395 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-gbcft"] Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.241582 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/983b1a77-ab11-41df-b954-a8726742f9e5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-gbcft\" (UID: \"983b1a77-ab11-41df-b954-a8726742f9e5\") " pod="openstack/nova-cell0-cell-mapping-gbcft" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.241618 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/983b1a77-ab11-41df-b954-a8726742f9e5-config-data\") pod \"nova-cell0-cell-mapping-gbcft\" (UID: \"983b1a77-ab11-41df-b954-a8726742f9e5\") " pod="openstack/nova-cell0-cell-mapping-gbcft" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.241704 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/983b1a77-ab11-41df-b954-a8726742f9e5-scripts\") pod \"nova-cell0-cell-mapping-gbcft\" (UID: \"983b1a77-ab11-41df-b954-a8726742f9e5\") " pod="openstack/nova-cell0-cell-mapping-gbcft" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.241830 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld8zb\" (UniqueName: 
\"kubernetes.io/projected/983b1a77-ab11-41df-b954-a8726742f9e5-kube-api-access-ld8zb\") pod \"nova-cell0-cell-mapping-gbcft\" (UID: \"983b1a77-ab11-41df-b954-a8726742f9e5\") " pod="openstack/nova-cell0-cell-mapping-gbcft" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.264290 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.266054 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.267915 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.272864 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.343981 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld8zb\" (UniqueName: \"kubernetes.io/projected/983b1a77-ab11-41df-b954-a8726742f9e5-kube-api-access-ld8zb\") pod \"nova-cell0-cell-mapping-gbcft\" (UID: \"983b1a77-ab11-41df-b954-a8726742f9e5\") " pod="openstack/nova-cell0-cell-mapping-gbcft" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.344092 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/983b1a77-ab11-41df-b954-a8726742f9e5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-gbcft\" (UID: \"983b1a77-ab11-41df-b954-a8726742f9e5\") " pod="openstack/nova-cell0-cell-mapping-gbcft" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.344114 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/983b1a77-ab11-41df-b954-a8726742f9e5-config-data\") pod \"nova-cell0-cell-mapping-gbcft\" (UID: \"983b1a77-ab11-41df-b954-a8726742f9e5\") " 
pod="openstack/nova-cell0-cell-mapping-gbcft" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.344163 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/983b1a77-ab11-41df-b954-a8726742f9e5-scripts\") pod \"nova-cell0-cell-mapping-gbcft\" (UID: \"983b1a77-ab11-41df-b954-a8726742f9e5\") " pod="openstack/nova-cell0-cell-mapping-gbcft" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.351025 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/983b1a77-ab11-41df-b954-a8726742f9e5-config-data\") pod \"nova-cell0-cell-mapping-gbcft\" (UID: \"983b1a77-ab11-41df-b954-a8726742f9e5\") " pod="openstack/nova-cell0-cell-mapping-gbcft" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.360749 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.363919 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.378810 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.387791 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/983b1a77-ab11-41df-b954-a8726742f9e5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-gbcft\" (UID: \"983b1a77-ab11-41df-b954-a8726742f9e5\") " pod="openstack/nova-cell0-cell-mapping-gbcft" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.388972 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/983b1a77-ab11-41df-b954-a8726742f9e5-scripts\") pod \"nova-cell0-cell-mapping-gbcft\" (UID: \"983b1a77-ab11-41df-b954-a8726742f9e5\") " pod="openstack/nova-cell0-cell-mapping-gbcft" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.393602 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld8zb\" (UniqueName: \"kubernetes.io/projected/983b1a77-ab11-41df-b954-a8726742f9e5-kube-api-access-ld8zb\") pod \"nova-cell0-cell-mapping-gbcft\" (UID: \"983b1a77-ab11-41df-b954-a8726742f9e5\") " pod="openstack/nova-cell0-cell-mapping-gbcft" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.436467 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.446367 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937-logs\") pod \"nova-api-0\" (UID: \"b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937\") " pod="openstack/nova-api-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.446438 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5jmq\" (UniqueName: \"kubernetes.io/projected/b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937-kube-api-access-x5jmq\") pod \"nova-api-0\" (UID: \"b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937\") " pod="openstack/nova-api-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.446469 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937\") " pod="openstack/nova-api-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.446501 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937-config-data\") pod \"nova-api-0\" (UID: \"b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937\") " pod="openstack/nova-api-0" Feb 28 13:40:04 crc kubenswrapper[4897]: E0228 13:40:04.484598 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.521380 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.523523 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.523627 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.530726 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.549787 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37805177-720e-43d9-8ab2-c663fe7a0738-logs\") pod \"nova-metadata-0\" (UID: \"37805177-720e-43d9-8ab2-c663fe7a0738\") " pod="openstack/nova-metadata-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.550056 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7md7\" (UniqueName: \"kubernetes.io/projected/37805177-720e-43d9-8ab2-c663fe7a0738-kube-api-access-d7md7\") pod \"nova-metadata-0\" (UID: \"37805177-720e-43d9-8ab2-c663fe7a0738\") " pod="openstack/nova-metadata-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.550164 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937-logs\") pod \"nova-api-0\" (UID: \"b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937\") " pod="openstack/nova-api-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.550275 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37805177-720e-43d9-8ab2-c663fe7a0738-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"37805177-720e-43d9-8ab2-c663fe7a0738\") " pod="openstack/nova-metadata-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.550394 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5jmq\" (UniqueName: \"kubernetes.io/projected/b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937-kube-api-access-x5jmq\") pod \"nova-api-0\" 
(UID: \"b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937\") " pod="openstack/nova-api-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.550486 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37805177-720e-43d9-8ab2-c663fe7a0738-config-data\") pod \"nova-metadata-0\" (UID: \"37805177-720e-43d9-8ab2-c663fe7a0738\") " pod="openstack/nova-metadata-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.550620 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937\") " pod="openstack/nova-api-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.550714 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937-config-data\") pod \"nova-api-0\" (UID: \"b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937\") " pod="openstack/nova-api-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.550825 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937-logs\") pod \"nova-api-0\" (UID: \"b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937\") " pod="openstack/nova-api-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.608934 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937-config-data\") pod \"nova-api-0\" (UID: \"b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937\") " pod="openstack/nova-api-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.610437 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937\") " pod="openstack/nova-api-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.618741 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5jmq\" (UniqueName: \"kubernetes.io/projected/b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937-kube-api-access-x5jmq\") pod \"nova-api-0\" (UID: \"b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937\") " pod="openstack/nova-api-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.631384 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5cbd9f89f7-sx96f"] Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.637477 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cbd9f89f7-sx96f" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.653880 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmrtg\" (UniqueName: \"kubernetes.io/projected/e8e56c81-55af-4ad1-95d8-06dc87adf02b-kube-api-access-pmrtg\") pod \"nova-scheduler-0\" (UID: \"e8e56c81-55af-4ad1-95d8-06dc87adf02b\") " pod="openstack/nova-scheduler-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.653942 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8e56c81-55af-4ad1-95d8-06dc87adf02b-config-data\") pod \"nova-scheduler-0\" (UID: \"e8e56c81-55af-4ad1-95d8-06dc87adf02b\") " pod="openstack/nova-scheduler-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.654042 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37805177-720e-43d9-8ab2-c663fe7a0738-logs\") pod \"nova-metadata-0\" (UID: \"37805177-720e-43d9-8ab2-c663fe7a0738\") " pod="openstack/nova-metadata-0" 
Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.654068 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7md7\" (UniqueName: \"kubernetes.io/projected/37805177-720e-43d9-8ab2-c663fe7a0738-kube-api-access-d7md7\") pod \"nova-metadata-0\" (UID: \"37805177-720e-43d9-8ab2-c663fe7a0738\") " pod="openstack/nova-metadata-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.654095 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8e56c81-55af-4ad1-95d8-06dc87adf02b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e8e56c81-55af-4ad1-95d8-06dc87adf02b\") " pod="openstack/nova-scheduler-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.654128 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37805177-720e-43d9-8ab2-c663fe7a0738-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"37805177-720e-43d9-8ab2-c663fe7a0738\") " pod="openstack/nova-metadata-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.654166 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37805177-720e-43d9-8ab2-c663fe7a0738-config-data\") pod \"nova-metadata-0\" (UID: \"37805177-720e-43d9-8ab2-c663fe7a0738\") " pod="openstack/nova-metadata-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.654696 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37805177-720e-43d9-8ab2-c663fe7a0738-logs\") pod \"nova-metadata-0\" (UID: \"37805177-720e-43d9-8ab2-c663fe7a0738\") " pod="openstack/nova-metadata-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.658845 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/37805177-720e-43d9-8ab2-c663fe7a0738-config-data\") pod \"nova-metadata-0\" (UID: \"37805177-720e-43d9-8ab2-c663fe7a0738\") " pod="openstack/nova-metadata-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.663868 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37805177-720e-43d9-8ab2-c663fe7a0738-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"37805177-720e-43d9-8ab2-c663fe7a0738\") " pod="openstack/nova-metadata-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.673945 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cbd9f89f7-sx96f"] Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.677256 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7md7\" (UniqueName: \"kubernetes.io/projected/37805177-720e-43d9-8ab2-c663fe7a0738-kube-api-access-d7md7\") pod \"nova-metadata-0\" (UID: \"37805177-720e-43d9-8ab2-c663fe7a0738\") " pod="openstack/nova-metadata-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.693175 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-gbcft" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.702642 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.703906 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.710856 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.743457 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.755358 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e061011e-e58b-458e-aba8-8e0ace759117-config\") pod \"dnsmasq-dns-5cbd9f89f7-sx96f\" (UID: \"e061011e-e58b-458e-aba8-8e0ace759117\") " pod="openstack/dnsmasq-dns-5cbd9f89f7-sx96f" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.755399 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8e56c81-55af-4ad1-95d8-06dc87adf02b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e8e56c81-55af-4ad1-95d8-06dc87adf02b\") " pod="openstack/nova-scheduler-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.755461 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/622fb260-2971-4b10-b1f4-b52bfd89de49-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"622fb260-2971-4b10-b1f4-b52bfd89de49\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.755489 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e061011e-e58b-458e-aba8-8e0ace759117-dns-swift-storage-0\") pod \"dnsmasq-dns-5cbd9f89f7-sx96f\" (UID: \"e061011e-e58b-458e-aba8-8e0ace759117\") " pod="openstack/dnsmasq-dns-5cbd9f89f7-sx96f" Feb 28 
13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.755513 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsv68\" (UniqueName: \"kubernetes.io/projected/e061011e-e58b-458e-aba8-8e0ace759117-kube-api-access-fsv68\") pod \"dnsmasq-dns-5cbd9f89f7-sx96f\" (UID: \"e061011e-e58b-458e-aba8-8e0ace759117\") " pod="openstack/dnsmasq-dns-5cbd9f89f7-sx96f" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.755544 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmrtg\" (UniqueName: \"kubernetes.io/projected/e8e56c81-55af-4ad1-95d8-06dc87adf02b-kube-api-access-pmrtg\") pod \"nova-scheduler-0\" (UID: \"e8e56c81-55af-4ad1-95d8-06dc87adf02b\") " pod="openstack/nova-scheduler-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.755562 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq7r4\" (UniqueName: \"kubernetes.io/projected/622fb260-2971-4b10-b1f4-b52bfd89de49-kube-api-access-sq7r4\") pod \"nova-cell1-novncproxy-0\" (UID: \"622fb260-2971-4b10-b1f4-b52bfd89de49\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.755594 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8e56c81-55af-4ad1-95d8-06dc87adf02b-config-data\") pod \"nova-scheduler-0\" (UID: \"e8e56c81-55af-4ad1-95d8-06dc87adf02b\") " pod="openstack/nova-scheduler-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.755614 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e061011e-e58b-458e-aba8-8e0ace759117-dns-svc\") pod \"dnsmasq-dns-5cbd9f89f7-sx96f\" (UID: \"e061011e-e58b-458e-aba8-8e0ace759117\") " pod="openstack/dnsmasq-dns-5cbd9f89f7-sx96f" Feb 28 13:40:04 crc kubenswrapper[4897]: 
I0228 13:40:04.755638 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e061011e-e58b-458e-aba8-8e0ace759117-ovsdbserver-nb\") pod \"dnsmasq-dns-5cbd9f89f7-sx96f\" (UID: \"e061011e-e58b-458e-aba8-8e0ace759117\") " pod="openstack/dnsmasq-dns-5cbd9f89f7-sx96f" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.755660 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/622fb260-2971-4b10-b1f4-b52bfd89de49-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"622fb260-2971-4b10-b1f4-b52bfd89de49\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.755703 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e061011e-e58b-458e-aba8-8e0ace759117-ovsdbserver-sb\") pod \"dnsmasq-dns-5cbd9f89f7-sx96f\" (UID: \"e061011e-e58b-458e-aba8-8e0ace759117\") " pod="openstack/dnsmasq-dns-5cbd9f89f7-sx96f" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.762558 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8e56c81-55af-4ad1-95d8-06dc87adf02b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e8e56c81-55af-4ad1-95d8-06dc87adf02b\") " pod="openstack/nova-scheduler-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.766627 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8e56c81-55af-4ad1-95d8-06dc87adf02b-config-data\") pod \"nova-scheduler-0\" (UID: \"e8e56c81-55af-4ad1-95d8-06dc87adf02b\") " pod="openstack/nova-scheduler-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.777870 4897 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-pmrtg\" (UniqueName: \"kubernetes.io/projected/e8e56c81-55af-4ad1-95d8-06dc87adf02b-kube-api-access-pmrtg\") pod \"nova-scheduler-0\" (UID: \"e8e56c81-55af-4ad1-95d8-06dc87adf02b\") " pod="openstack/nova-scheduler-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.821061 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.864494 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq7r4\" (UniqueName: \"kubernetes.io/projected/622fb260-2971-4b10-b1f4-b52bfd89de49-kube-api-access-sq7r4\") pod \"nova-cell1-novncproxy-0\" (UID: \"622fb260-2971-4b10-b1f4-b52bfd89de49\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.864914 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e061011e-e58b-458e-aba8-8e0ace759117-dns-svc\") pod \"dnsmasq-dns-5cbd9f89f7-sx96f\" (UID: \"e061011e-e58b-458e-aba8-8e0ace759117\") " pod="openstack/dnsmasq-dns-5cbd9f89f7-sx96f" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.864957 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e061011e-e58b-458e-aba8-8e0ace759117-ovsdbserver-nb\") pod \"dnsmasq-dns-5cbd9f89f7-sx96f\" (UID: \"e061011e-e58b-458e-aba8-8e0ace759117\") " pod="openstack/dnsmasq-dns-5cbd9f89f7-sx96f" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.865387 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/622fb260-2971-4b10-b1f4-b52bfd89de49-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"622fb260-2971-4b10-b1f4-b52bfd89de49\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 
13:40:04.865884 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e061011e-e58b-458e-aba8-8e0ace759117-dns-svc\") pod \"dnsmasq-dns-5cbd9f89f7-sx96f\" (UID: \"e061011e-e58b-458e-aba8-8e0ace759117\") " pod="openstack/dnsmasq-dns-5cbd9f89f7-sx96f" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.865889 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e061011e-e58b-458e-aba8-8e0ace759117-ovsdbserver-nb\") pod \"dnsmasq-dns-5cbd9f89f7-sx96f\" (UID: \"e061011e-e58b-458e-aba8-8e0ace759117\") " pod="openstack/dnsmasq-dns-5cbd9f89f7-sx96f" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.865973 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e061011e-e58b-458e-aba8-8e0ace759117-ovsdbserver-sb\") pod \"dnsmasq-dns-5cbd9f89f7-sx96f\" (UID: \"e061011e-e58b-458e-aba8-8e0ace759117\") " pod="openstack/dnsmasq-dns-5cbd9f89f7-sx96f" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.866034 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e061011e-e58b-458e-aba8-8e0ace759117-config\") pod \"dnsmasq-dns-5cbd9f89f7-sx96f\" (UID: \"e061011e-e58b-458e-aba8-8e0ace759117\") " pod="openstack/dnsmasq-dns-5cbd9f89f7-sx96f" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.866113 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/622fb260-2971-4b10-b1f4-b52bfd89de49-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"622fb260-2971-4b10-b1f4-b52bfd89de49\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.866139 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e061011e-e58b-458e-aba8-8e0ace759117-dns-swift-storage-0\") pod \"dnsmasq-dns-5cbd9f89f7-sx96f\" (UID: \"e061011e-e58b-458e-aba8-8e0ace759117\") " pod="openstack/dnsmasq-dns-5cbd9f89f7-sx96f" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.866173 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsv68\" (UniqueName: \"kubernetes.io/projected/e061011e-e58b-458e-aba8-8e0ace759117-kube-api-access-fsv68\") pod \"dnsmasq-dns-5cbd9f89f7-sx96f\" (UID: \"e061011e-e58b-458e-aba8-8e0ace759117\") " pod="openstack/dnsmasq-dns-5cbd9f89f7-sx96f" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.867128 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e061011e-e58b-458e-aba8-8e0ace759117-ovsdbserver-sb\") pod \"dnsmasq-dns-5cbd9f89f7-sx96f\" (UID: \"e061011e-e58b-458e-aba8-8e0ace759117\") " pod="openstack/dnsmasq-dns-5cbd9f89f7-sx96f" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.870958 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e061011e-e58b-458e-aba8-8e0ace759117-dns-swift-storage-0\") pod \"dnsmasq-dns-5cbd9f89f7-sx96f\" (UID: \"e061011e-e58b-458e-aba8-8e0ace759117\") " pod="openstack/dnsmasq-dns-5cbd9f89f7-sx96f" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.872435 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/622fb260-2971-4b10-b1f4-b52bfd89de49-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"622fb260-2971-4b10-b1f4-b52bfd89de49\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.874224 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/622fb260-2971-4b10-b1f4-b52bfd89de49-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"622fb260-2971-4b10-b1f4-b52bfd89de49\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.881762 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.883740 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e061011e-e58b-458e-aba8-8e0ace759117-config\") pod \"dnsmasq-dns-5cbd9f89f7-sx96f\" (UID: \"e061011e-e58b-458e-aba8-8e0ace759117\") " pod="openstack/dnsmasq-dns-5cbd9f89f7-sx96f" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.886677 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsv68\" (UniqueName: \"kubernetes.io/projected/e061011e-e58b-458e-aba8-8e0ace759117-kube-api-access-fsv68\") pod \"dnsmasq-dns-5cbd9f89f7-sx96f\" (UID: \"e061011e-e58b-458e-aba8-8e0ace759117\") " pod="openstack/dnsmasq-dns-5cbd9f89f7-sx96f" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.889424 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq7r4\" (UniqueName: \"kubernetes.io/projected/622fb260-2971-4b10-b1f4-b52bfd89de49-kube-api-access-sq7r4\") pod \"nova-cell1-novncproxy-0\" (UID: \"622fb260-2971-4b10-b1f4-b52bfd89de49\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.984397 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 28 13:40:04 crc kubenswrapper[4897]: I0228 13:40:04.998183 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cbd9f89f7-sx96f" Feb 28 13:40:05 crc kubenswrapper[4897]: I0228 13:40:05.028187 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 28 13:40:05 crc kubenswrapper[4897]: I0228 13:40:05.195717 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-gbcft"] Feb 28 13:40:05 crc kubenswrapper[4897]: W0228 13:40:05.241245 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod983b1a77_ab11_41df_b954_a8726742f9e5.slice/crio-b3140e568b5ee75dba9c799deb3803c960ece028875be90a8cc623b9f31a7313 WatchSource:0}: Error finding container b3140e568b5ee75dba9c799deb3803c960ece028875be90a8cc623b9f31a7313: Status 404 returned error can't find the container with id b3140e568b5ee75dba9c799deb3803c960ece028875be90a8cc623b9f31a7313 Feb 28 13:40:05 crc kubenswrapper[4897]: I0228 13:40:05.450541 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 13:40:05 crc kubenswrapper[4897]: I0228 13:40:05.531288 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 28 13:40:05 crc kubenswrapper[4897]: I0228 13:40:05.732757 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rctbp"] Feb 28 13:40:05 crc kubenswrapper[4897]: I0228 13:40:05.739765 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-rctbp" Feb 28 13:40:05 crc kubenswrapper[4897]: I0228 13:40:05.743206 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 28 13:40:05 crc kubenswrapper[4897]: I0228 13:40:05.743592 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 28 13:40:05 crc kubenswrapper[4897]: I0228 13:40:05.757143 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rctbp"] Feb 28 13:40:05 crc kubenswrapper[4897]: I0228 13:40:05.792821 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c685d21-3cda-45f7-8486-5bb236b5eb43-scripts\") pod \"nova-cell1-conductor-db-sync-rctbp\" (UID: \"8c685d21-3cda-45f7-8486-5bb236b5eb43\") " pod="openstack/nova-cell1-conductor-db-sync-rctbp" Feb 28 13:40:05 crc kubenswrapper[4897]: I0228 13:40:05.792925 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k6gx\" (UniqueName: \"kubernetes.io/projected/8c685d21-3cda-45f7-8486-5bb236b5eb43-kube-api-access-9k6gx\") pod \"nova-cell1-conductor-db-sync-rctbp\" (UID: \"8c685d21-3cda-45f7-8486-5bb236b5eb43\") " pod="openstack/nova-cell1-conductor-db-sync-rctbp" Feb 28 13:40:05 crc kubenswrapper[4897]: I0228 13:40:05.792955 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c685d21-3cda-45f7-8486-5bb236b5eb43-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-rctbp\" (UID: \"8c685d21-3cda-45f7-8486-5bb236b5eb43\") " pod="openstack/nova-cell1-conductor-db-sync-rctbp" Feb 28 13:40:05 crc kubenswrapper[4897]: I0228 13:40:05.793011 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c685d21-3cda-45f7-8486-5bb236b5eb43-config-data\") pod \"nova-cell1-conductor-db-sync-rctbp\" (UID: \"8c685d21-3cda-45f7-8486-5bb236b5eb43\") " pod="openstack/nova-cell1-conductor-db-sync-rctbp" Feb 28 13:40:05 crc kubenswrapper[4897]: I0228 13:40:05.896632 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c685d21-3cda-45f7-8486-5bb236b5eb43-scripts\") pod \"nova-cell1-conductor-db-sync-rctbp\" (UID: \"8c685d21-3cda-45f7-8486-5bb236b5eb43\") " pod="openstack/nova-cell1-conductor-db-sync-rctbp" Feb 28 13:40:05 crc kubenswrapper[4897]: I0228 13:40:05.896756 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9k6gx\" (UniqueName: \"kubernetes.io/projected/8c685d21-3cda-45f7-8486-5bb236b5eb43-kube-api-access-9k6gx\") pod \"nova-cell1-conductor-db-sync-rctbp\" (UID: \"8c685d21-3cda-45f7-8486-5bb236b5eb43\") " pod="openstack/nova-cell1-conductor-db-sync-rctbp" Feb 28 13:40:05 crc kubenswrapper[4897]: I0228 13:40:05.896776 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c685d21-3cda-45f7-8486-5bb236b5eb43-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-rctbp\" (UID: \"8c685d21-3cda-45f7-8486-5bb236b5eb43\") " pod="openstack/nova-cell1-conductor-db-sync-rctbp" Feb 28 13:40:05 crc kubenswrapper[4897]: I0228 13:40:05.896836 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c685d21-3cda-45f7-8486-5bb236b5eb43-config-data\") pod \"nova-cell1-conductor-db-sync-rctbp\" (UID: \"8c685d21-3cda-45f7-8486-5bb236b5eb43\") " pod="openstack/nova-cell1-conductor-db-sync-rctbp" Feb 28 13:40:05 crc kubenswrapper[4897]: I0228 13:40:05.903628 4897 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c685d21-3cda-45f7-8486-5bb236b5eb43-config-data\") pod \"nova-cell1-conductor-db-sync-rctbp\" (UID: \"8c685d21-3cda-45f7-8486-5bb236b5eb43\") " pod="openstack/nova-cell1-conductor-db-sync-rctbp" Feb 28 13:40:05 crc kubenswrapper[4897]: I0228 13:40:05.904238 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c685d21-3cda-45f7-8486-5bb236b5eb43-scripts\") pod \"nova-cell1-conductor-db-sync-rctbp\" (UID: \"8c685d21-3cda-45f7-8486-5bb236b5eb43\") " pod="openstack/nova-cell1-conductor-db-sync-rctbp" Feb 28 13:40:05 crc kubenswrapper[4897]: I0228 13:40:05.913918 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c685d21-3cda-45f7-8486-5bb236b5eb43-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-rctbp\" (UID: \"8c685d21-3cda-45f7-8486-5bb236b5eb43\") " pod="openstack/nova-cell1-conductor-db-sync-rctbp" Feb 28 13:40:05 crc kubenswrapper[4897]: I0228 13:40:05.925287 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k6gx\" (UniqueName: \"kubernetes.io/projected/8c685d21-3cda-45f7-8486-5bb236b5eb43-kube-api-access-9k6gx\") pod \"nova-cell1-conductor-db-sync-rctbp\" (UID: \"8c685d21-3cda-45f7-8486-5bb236b5eb43\") " pod="openstack/nova-cell1-conductor-db-sync-rctbp" Feb 28 13:40:05 crc kubenswrapper[4897]: I0228 13:40:05.971481 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 13:40:06 crc kubenswrapper[4897]: I0228 13:40:06.046641 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"37805177-720e-43d9-8ab2-c663fe7a0738","Type":"ContainerStarted","Data":"f50d92fd3e234f3a8e0ffdc26f77e214e0fbdb34a38556404c7ee93ce40ffec2"} Feb 28 13:40:06 crc kubenswrapper[4897]: I0228 
13:40:06.048847 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-gbcft" event={"ID":"983b1a77-ab11-41df-b954-a8726742f9e5","Type":"ContainerStarted","Data":"9065bcb0878b4ce3fb9d42c6f4b45270a042dbbc06d0b14c1406475a734ab4ba"} Feb 28 13:40:06 crc kubenswrapper[4897]: I0228 13:40:06.048893 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-gbcft" event={"ID":"983b1a77-ab11-41df-b954-a8726742f9e5","Type":"ContainerStarted","Data":"b3140e568b5ee75dba9c799deb3803c960ece028875be90a8cc623b9f31a7313"} Feb 28 13:40:06 crc kubenswrapper[4897]: I0228 13:40:06.051761 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e8e56c81-55af-4ad1-95d8-06dc87adf02b","Type":"ContainerStarted","Data":"30bf1d56b4d9468dea36aca7f38d85cafc9d61dcb34fedcb2b926a873b886874"} Feb 28 13:40:06 crc kubenswrapper[4897]: I0228 13:40:06.052752 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937","Type":"ContainerStarted","Data":"3765cb7d82aec0207a918f1aa0a601b7f37f7c4ab19d6029b85f315b1217ce94"} Feb 28 13:40:06 crc kubenswrapper[4897]: I0228 13:40:06.072817 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-gbcft" podStartSLOduration=2.072794068 podStartE2EDuration="2.072794068s" podCreationTimestamp="2026-02-28 13:40:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:40:06.063411198 +0000 UTC m=+1420.305731865" watchObservedRunningTime="2026-02-28 13:40:06.072794068 +0000 UTC m=+1420.315114725" Feb 28 13:40:06 crc kubenswrapper[4897]: I0228 13:40:06.085772 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-rctbp" Feb 28 13:40:06 crc kubenswrapper[4897]: W0228 13:40:06.086607 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod622fb260_2971_4b10_b1f4_b52bfd89de49.slice/crio-fded66dd44be27eac5f0ec6dfe40289c0a91c3afe59e3901e464b1f6f13086ce WatchSource:0}: Error finding container fded66dd44be27eac5f0ec6dfe40289c0a91c3afe59e3901e464b1f6f13086ce: Status 404 returned error can't find the container with id fded66dd44be27eac5f0ec6dfe40289c0a91c3afe59e3901e464b1f6f13086ce Feb 28 13:40:06 crc kubenswrapper[4897]: I0228 13:40:06.091545 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 28 13:40:06 crc kubenswrapper[4897]: I0228 13:40:06.252226 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cbd9f89f7-sx96f"] Feb 28 13:40:06 crc kubenswrapper[4897]: I0228 13:40:06.630658 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rctbp"] Feb 28 13:40:07 crc kubenswrapper[4897]: I0228 13:40:07.064890 4897 generic.go:334] "Generic (PLEG): container finished" podID="e061011e-e58b-458e-aba8-8e0ace759117" containerID="ff5a3aa8da48ae602c1e71a518e88d7ef2ec3938afe38f831efe7f7d1dc8a26b" exitCode=0 Feb 28 13:40:07 crc kubenswrapper[4897]: I0228 13:40:07.064957 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cbd9f89f7-sx96f" event={"ID":"e061011e-e58b-458e-aba8-8e0ace759117","Type":"ContainerDied","Data":"ff5a3aa8da48ae602c1e71a518e88d7ef2ec3938afe38f831efe7f7d1dc8a26b"} Feb 28 13:40:07 crc kubenswrapper[4897]: I0228 13:40:07.065398 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cbd9f89f7-sx96f" event={"ID":"e061011e-e58b-458e-aba8-8e0ace759117","Type":"ContainerStarted","Data":"15ddc7ad9e5ff398dc46f297097503c571a504c286b229ab1286224115c31d32"} Feb 28 
13:40:07 crc kubenswrapper[4897]: I0228 13:40:07.066940 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-rctbp" event={"ID":"8c685d21-3cda-45f7-8486-5bb236b5eb43","Type":"ContainerStarted","Data":"a1a7c790e3f3d0123610eb071ce4ff3e279ba048b5ff74193d1755c6eab0d7ef"} Feb 28 13:40:07 crc kubenswrapper[4897]: I0228 13:40:07.069799 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"622fb260-2971-4b10-b1f4-b52bfd89de49","Type":"ContainerStarted","Data":"fded66dd44be27eac5f0ec6dfe40289c0a91c3afe59e3901e464b1f6f13086ce"} Feb 28 13:40:08 crc kubenswrapper[4897]: I0228 13:40:08.161529 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 28 13:40:08 crc kubenswrapper[4897]: I0228 13:40:08.169724 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 13:40:08 crc kubenswrapper[4897]: I0228 13:40:08.177981 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-rctbp" event={"ID":"8c685d21-3cda-45f7-8486-5bb236b5eb43","Type":"ContainerStarted","Data":"3470dcefe4ba195eb4155138877e8e98329abed179bb5261587f606a44b8755f"} Feb 28 13:40:08 crc kubenswrapper[4897]: I0228 13:40:08.212975 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-rctbp" podStartSLOduration=3.212954504 podStartE2EDuration="3.212954504s" podCreationTimestamp="2026-02-28 13:40:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:40:08.205251812 +0000 UTC m=+1422.447572469" watchObservedRunningTime="2026-02-28 13:40:08.212954504 +0000 UTC m=+1422.455275161" Feb 28 13:40:10 crc kubenswrapper[4897]: I0228 13:40:10.200245 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"622fb260-2971-4b10-b1f4-b52bfd89de49","Type":"ContainerStarted","Data":"3c4f32645d3f9c711ce880e076685a970a08ca2618e9f49eccc66b451cbbc6b0"} Feb 28 13:40:10 crc kubenswrapper[4897]: I0228 13:40:10.200432 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="622fb260-2971-4b10-b1f4-b52bfd89de49" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://3c4f32645d3f9c711ce880e076685a970a08ca2618e9f49eccc66b451cbbc6b0" gracePeriod=30 Feb 28 13:40:10 crc kubenswrapper[4897]: E0228 13:40:10.205017 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)" image="registry.redhat.io/ubi9/httpd-24:latest" Feb 28 13:40:10 crc kubenswrapper[4897]: I0228 13:40:10.205170 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"37805177-720e-43d9-8ab2-c663fe7a0738","Type":"ContainerStarted","Data":"3166f2b77bc618962d66aad05ead71c25f214b34f6fb8b50f6edc1a73f56aeeb"} Feb 28 13:40:10 crc kubenswrapper[4897]: I0228 13:40:10.205206 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"37805177-720e-43d9-8ab2-c663fe7a0738","Type":"ContainerStarted","Data":"66cedd955cd75e35c4957b031791986ae71d195923c811029895d1d479a02f9c"} Feb 28 13:40:10 crc kubenswrapper[4897]: E0228 13:40:10.205215 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9qr7w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(176f6ed0-e15f-4ee9-afd3-be29ff84d7dc): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 13:40:10 crc kubenswrapper[4897]: I0228 13:40:10.205231 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="37805177-720e-43d9-8ab2-c663fe7a0738" containerName="nova-metadata-log" containerID="cri-o://66cedd955cd75e35c4957b031791986ae71d195923c811029895d1d479a02f9c" gracePeriod=30 Feb 28 13:40:10 crc kubenswrapper[4897]: I0228 13:40:10.205271 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="37805177-720e-43d9-8ab2-c663fe7a0738" containerName="nova-metadata-metadata" containerID="cri-o://3166f2b77bc618962d66aad05ead71c25f214b34f6fb8b50f6edc1a73f56aeeb" gracePeriod=30 Feb 28 13:40:10 crc kubenswrapper[4897]: E0228 13:40:10.206362 4897 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)\"" pod="openstack/ceilometer-0" podUID="176f6ed0-e15f-4ee9-afd3-be29ff84d7dc" Feb 28 13:40:10 crc kubenswrapper[4897]: I0228 13:40:10.217110 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e8e56c81-55af-4ad1-95d8-06dc87adf02b","Type":"ContainerStarted","Data":"e1435ecc3ad7ebdb6bb5eb9e3881add686114b47b6d993dd55ed51f4b6517ecc"} Feb 28 13:40:10 crc kubenswrapper[4897]: I0228 13:40:10.221750 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cbd9f89f7-sx96f" event={"ID":"e061011e-e58b-458e-aba8-8e0ace759117","Type":"ContainerStarted","Data":"c3632e4a3c7ef8eeab10572c630804218648e3d70abb15feafefdbeecc990345"} Feb 28 13:40:10 crc kubenswrapper[4897]: I0228 13:40:10.221788 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5cbd9f89f7-sx96f" Feb 28 13:40:10 crc kubenswrapper[4897]: I0228 13:40:10.229672 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937","Type":"ContainerStarted","Data":"569ce1c4bdd65d45dce2703331ad8ee9a669986814d2b816ba6ed3877c677002"} Feb 28 13:40:10 crc kubenswrapper[4897]: I0228 13:40:10.229713 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937","Type":"ContainerStarted","Data":"b6079d44a921f6bd12221fdab96b9592c4c6934c08425240fe335bb577452003"} Feb 28 13:40:10 crc kubenswrapper[4897]: I0228 13:40:10.240509 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" 
podStartSLOduration=3.047766988 podStartE2EDuration="6.240490986s" podCreationTimestamp="2026-02-28 13:40:04 +0000 UTC" firstStartedPulling="2026-02-28 13:40:06.088504541 +0000 UTC m=+1420.330825198" lastFinishedPulling="2026-02-28 13:40:09.281228529 +0000 UTC m=+1423.523549196" observedRunningTime="2026-02-28 13:40:10.226451232 +0000 UTC m=+1424.468771889" watchObservedRunningTime="2026-02-28 13:40:10.240490986 +0000 UTC m=+1424.482811643" Feb 28 13:40:10 crc kubenswrapper[4897]: I0228 13:40:10.249776 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.479526719 podStartE2EDuration="6.249756113s" podCreationTimestamp="2026-02-28 13:40:04 +0000 UTC" firstStartedPulling="2026-02-28 13:40:05.523950878 +0000 UTC m=+1419.766271535" lastFinishedPulling="2026-02-28 13:40:09.294180272 +0000 UTC m=+1423.536500929" observedRunningTime="2026-02-28 13:40:10.247602231 +0000 UTC m=+1424.489922888" watchObservedRunningTime="2026-02-28 13:40:10.249756113 +0000 UTC m=+1424.492076770" Feb 28 13:40:10 crc kubenswrapper[4897]: I0228 13:40:10.272974 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.976751194 podStartE2EDuration="6.272955941s" podCreationTimestamp="2026-02-28 13:40:04 +0000 UTC" firstStartedPulling="2026-02-28 13:40:05.987013109 +0000 UTC m=+1420.229333766" lastFinishedPulling="2026-02-28 13:40:09.283217826 +0000 UTC m=+1423.525538513" observedRunningTime="2026-02-28 13:40:10.268263446 +0000 UTC m=+1424.510584093" watchObservedRunningTime="2026-02-28 13:40:10.272955941 +0000 UTC m=+1424.515276598" Feb 28 13:40:10 crc kubenswrapper[4897]: I0228 13:40:10.323942 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.616501912 podStartE2EDuration="6.323923758s" podCreationTimestamp="2026-02-28 13:40:04 +0000 UTC" firstStartedPulling="2026-02-28 
13:40:05.575763349 +0000 UTC m=+1419.818084006" lastFinishedPulling="2026-02-28 13:40:09.283185195 +0000 UTC m=+1423.525505852" observedRunningTime="2026-02-28 13:40:10.322838207 +0000 UTC m=+1424.565158864" watchObservedRunningTime="2026-02-28 13:40:10.323923758 +0000 UTC m=+1424.566244415" Feb 28 13:40:10 crc kubenswrapper[4897]: I0228 13:40:10.342601 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5cbd9f89f7-sx96f" podStartSLOduration=6.342577245 podStartE2EDuration="6.342577245s" podCreationTimestamp="2026-02-28 13:40:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:40:10.291612828 +0000 UTC m=+1424.533933485" watchObservedRunningTime="2026-02-28 13:40:10.342577245 +0000 UTC m=+1424.584897902" Feb 28 13:40:10 crc kubenswrapper[4897]: E0228 13:40:10.457239 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37805177_720e_43d9_8ab2_c663fe7a0738.slice/crio-66cedd955cd75e35c4957b031791986ae71d195923c811029895d1d479a02f9c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37805177_720e_43d9_8ab2_c663fe7a0738.slice/crio-conmon-66cedd955cd75e35c4957b031791986ae71d195923c811029895d1d479a02f9c.scope\": RecentStats: unable to find data in memory cache]" Feb 28 13:40:11 crc kubenswrapper[4897]: I0228 13:40:11.248524 4897 generic.go:334] "Generic (PLEG): container finished" podID="37805177-720e-43d9-8ab2-c663fe7a0738" containerID="3166f2b77bc618962d66aad05ead71c25f214b34f6fb8b50f6edc1a73f56aeeb" exitCode=0 Feb 28 13:40:11 crc kubenswrapper[4897]: I0228 13:40:11.248813 4897 generic.go:334] "Generic (PLEG): container finished" podID="37805177-720e-43d9-8ab2-c663fe7a0738" 
containerID="66cedd955cd75e35c4957b031791986ae71d195923c811029895d1d479a02f9c" exitCode=143 Feb 28 13:40:11 crc kubenswrapper[4897]: I0228 13:40:11.248952 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"37805177-720e-43d9-8ab2-c663fe7a0738","Type":"ContainerDied","Data":"3166f2b77bc618962d66aad05ead71c25f214b34f6fb8b50f6edc1a73f56aeeb"} Feb 28 13:40:11 crc kubenswrapper[4897]: I0228 13:40:11.249022 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"37805177-720e-43d9-8ab2-c663fe7a0738","Type":"ContainerDied","Data":"66cedd955cd75e35c4957b031791986ae71d195923c811029895d1d479a02f9c"} Feb 28 13:40:11 crc kubenswrapper[4897]: I0228 13:40:11.693897 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 13:40:11 crc kubenswrapper[4897]: I0228 13:40:11.736164 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37805177-720e-43d9-8ab2-c663fe7a0738-combined-ca-bundle\") pod \"37805177-720e-43d9-8ab2-c663fe7a0738\" (UID: \"37805177-720e-43d9-8ab2-c663fe7a0738\") " Feb 28 13:40:11 crc kubenswrapper[4897]: I0228 13:40:11.736505 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37805177-720e-43d9-8ab2-c663fe7a0738-logs\") pod \"37805177-720e-43d9-8ab2-c663fe7a0738\" (UID: \"37805177-720e-43d9-8ab2-c663fe7a0738\") " Feb 28 13:40:11 crc kubenswrapper[4897]: I0228 13:40:11.736603 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37805177-720e-43d9-8ab2-c663fe7a0738-config-data\") pod \"37805177-720e-43d9-8ab2-c663fe7a0738\" (UID: \"37805177-720e-43d9-8ab2-c663fe7a0738\") " Feb 28 13:40:11 crc kubenswrapper[4897]: I0228 13:40:11.736642 4897 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-d7md7\" (UniqueName: \"kubernetes.io/projected/37805177-720e-43d9-8ab2-c663fe7a0738-kube-api-access-d7md7\") pod \"37805177-720e-43d9-8ab2-c663fe7a0738\" (UID: \"37805177-720e-43d9-8ab2-c663fe7a0738\") " Feb 28 13:40:11 crc kubenswrapper[4897]: I0228 13:40:11.737139 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37805177-720e-43d9-8ab2-c663fe7a0738-logs" (OuterVolumeSpecName: "logs") pod "37805177-720e-43d9-8ab2-c663fe7a0738" (UID: "37805177-720e-43d9-8ab2-c663fe7a0738"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:40:11 crc kubenswrapper[4897]: I0228 13:40:11.748540 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37805177-720e-43d9-8ab2-c663fe7a0738-kube-api-access-d7md7" (OuterVolumeSpecName: "kube-api-access-d7md7") pod "37805177-720e-43d9-8ab2-c663fe7a0738" (UID: "37805177-720e-43d9-8ab2-c663fe7a0738"). InnerVolumeSpecName "kube-api-access-d7md7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:40:11 crc kubenswrapper[4897]: I0228 13:40:11.767079 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37805177-720e-43d9-8ab2-c663fe7a0738-config-data" (OuterVolumeSpecName: "config-data") pod "37805177-720e-43d9-8ab2-c663fe7a0738" (UID: "37805177-720e-43d9-8ab2-c663fe7a0738"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:40:11 crc kubenswrapper[4897]: I0228 13:40:11.768532 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37805177-720e-43d9-8ab2-c663fe7a0738-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37805177-720e-43d9-8ab2-c663fe7a0738" (UID: "37805177-720e-43d9-8ab2-c663fe7a0738"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:40:11 crc kubenswrapper[4897]: I0228 13:40:11.838720 4897 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37805177-720e-43d9-8ab2-c663fe7a0738-logs\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:11 crc kubenswrapper[4897]: I0228 13:40:11.838752 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37805177-720e-43d9-8ab2-c663fe7a0738-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:11 crc kubenswrapper[4897]: I0228 13:40:11.838762 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7md7\" (UniqueName: \"kubernetes.io/projected/37805177-720e-43d9-8ab2-c663fe7a0738-kube-api-access-d7md7\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:11 crc kubenswrapper[4897]: I0228 13:40:11.838771 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37805177-720e-43d9-8ab2-c663fe7a0738-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:12 crc kubenswrapper[4897]: I0228 13:40:12.269171 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"37805177-720e-43d9-8ab2-c663fe7a0738","Type":"ContainerDied","Data":"f50d92fd3e234f3a8e0ffdc26f77e214e0fbdb34a38556404c7ee93ce40ffec2"} Feb 28 13:40:12 crc kubenswrapper[4897]: I0228 13:40:12.269262 4897 scope.go:117] "RemoveContainer" containerID="3166f2b77bc618962d66aad05ead71c25f214b34f6fb8b50f6edc1a73f56aeeb" Feb 28 13:40:12 crc kubenswrapper[4897]: I0228 13:40:12.269644 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 13:40:12 crc kubenswrapper[4897]: I0228 13:40:12.299513 4897 scope.go:117] "RemoveContainer" containerID="66cedd955cd75e35c4957b031791986ae71d195923c811029895d1d479a02f9c" Feb 28 13:40:12 crc kubenswrapper[4897]: I0228 13:40:12.332753 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 13:40:12 crc kubenswrapper[4897]: I0228 13:40:12.358167 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 13:40:12 crc kubenswrapper[4897]: I0228 13:40:12.376341 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 28 13:40:12 crc kubenswrapper[4897]: E0228 13:40:12.376765 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37805177-720e-43d9-8ab2-c663fe7a0738" containerName="nova-metadata-metadata" Feb 28 13:40:12 crc kubenswrapper[4897]: I0228 13:40:12.376780 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="37805177-720e-43d9-8ab2-c663fe7a0738" containerName="nova-metadata-metadata" Feb 28 13:40:12 crc kubenswrapper[4897]: E0228 13:40:12.376811 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37805177-720e-43d9-8ab2-c663fe7a0738" containerName="nova-metadata-log" Feb 28 13:40:12 crc kubenswrapper[4897]: I0228 13:40:12.376825 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="37805177-720e-43d9-8ab2-c663fe7a0738" containerName="nova-metadata-log" Feb 28 13:40:12 crc kubenswrapper[4897]: I0228 13:40:12.377007 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="37805177-720e-43d9-8ab2-c663fe7a0738" containerName="nova-metadata-metadata" Feb 28 13:40:12 crc kubenswrapper[4897]: I0228 13:40:12.377032 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="37805177-720e-43d9-8ab2-c663fe7a0738" containerName="nova-metadata-log" Feb 28 13:40:12 crc kubenswrapper[4897]: I0228 13:40:12.378081 4897 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 13:40:12 crc kubenswrapper[4897]: I0228 13:40:12.382488 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 28 13:40:12 crc kubenswrapper[4897]: I0228 13:40:12.383953 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 28 13:40:12 crc kubenswrapper[4897]: I0228 13:40:12.385595 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 13:40:12 crc kubenswrapper[4897]: E0228 13:40:12.393176 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/thanos-rhel9@sha256=b77218de6528d52542abddf9fb3faececdf1a3e47987ac740ab00468d461a60b/signature-4: status 500 (Internal Server Error)" image="registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34" Feb 28 13:40:12 crc kubenswrapper[4897]: E0228 13:40:12.393673 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:thanos-sidecar,Image:registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34,Command:[],Args:[sidecar --prometheus.url=http://localhost:9090/ --grpc-address=:10901 --http-address=:10902 --log.level=info 
--prometheus.http-client-file=/etc/thanos/config/prometheus.http-client-file.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:10902,Protocol:TCP,HostIP:,},ContainerPort{Name:grpc,HostPort:0,ContainerPort:10901,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:thanos-prometheus-http-client-file,ReadOnly:false,MountPath:/etc/thanos/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fr856,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/thanos-rhel9@sha256=b77218de6528d52542abddf9fb3faececdf1a3e47987ac740ab00468d461a60b/signature-4: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 13:40:12 crc kubenswrapper[4897]: E0228 13:40:12.395786 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"thanos-sidecar\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/thanos-rhel9@sha256=b77218de6528d52542abddf9fb3faececdf1a3e47987ac740ab00468d461a60b/signature-4: status 500 (Internal Server Error)\"" pod="openstack/prometheus-metric-storage-0" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" Feb 28 13:40:12 crc kubenswrapper[4897]: I0228 13:40:12.448722 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/034f5cfe-3216-44b1-9036-078917ab59bb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"034f5cfe-3216-44b1-9036-078917ab59bb\") " pod="openstack/nova-metadata-0" Feb 28 13:40:12 crc kubenswrapper[4897]: I0228 13:40:12.448791 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/034f5cfe-3216-44b1-9036-078917ab59bb-config-data\") pod \"nova-metadata-0\" (UID: \"034f5cfe-3216-44b1-9036-078917ab59bb\") " pod="openstack/nova-metadata-0" Feb 28 13:40:12 crc kubenswrapper[4897]: I0228 13:40:12.449085 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2rln\" (UniqueName: \"kubernetes.io/projected/034f5cfe-3216-44b1-9036-078917ab59bb-kube-api-access-p2rln\") pod \"nova-metadata-0\" (UID: \"034f5cfe-3216-44b1-9036-078917ab59bb\") " pod="openstack/nova-metadata-0" Feb 28 13:40:12 crc kubenswrapper[4897]: I0228 13:40:12.449236 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/034f5cfe-3216-44b1-9036-078917ab59bb-logs\") pod \"nova-metadata-0\" (UID: \"034f5cfe-3216-44b1-9036-078917ab59bb\") " pod="openstack/nova-metadata-0" Feb 28 13:40:12 crc kubenswrapper[4897]: I0228 
13:40:12.449261 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/034f5cfe-3216-44b1-9036-078917ab59bb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"034f5cfe-3216-44b1-9036-078917ab59bb\") " pod="openstack/nova-metadata-0" Feb 28 13:40:12 crc kubenswrapper[4897]: I0228 13:40:12.466191 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37805177-720e-43d9-8ab2-c663fe7a0738" path="/var/lib/kubelet/pods/37805177-720e-43d9-8ab2-c663fe7a0738/volumes" Feb 28 13:40:12 crc kubenswrapper[4897]: I0228 13:40:12.551050 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/034f5cfe-3216-44b1-9036-078917ab59bb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"034f5cfe-3216-44b1-9036-078917ab59bb\") " pod="openstack/nova-metadata-0" Feb 28 13:40:12 crc kubenswrapper[4897]: I0228 13:40:12.551348 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/034f5cfe-3216-44b1-9036-078917ab59bb-config-data\") pod \"nova-metadata-0\" (UID: \"034f5cfe-3216-44b1-9036-078917ab59bb\") " pod="openstack/nova-metadata-0" Feb 28 13:40:12 crc kubenswrapper[4897]: I0228 13:40:12.551411 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2rln\" (UniqueName: \"kubernetes.io/projected/034f5cfe-3216-44b1-9036-078917ab59bb-kube-api-access-p2rln\") pod \"nova-metadata-0\" (UID: \"034f5cfe-3216-44b1-9036-078917ab59bb\") " pod="openstack/nova-metadata-0" Feb 28 13:40:12 crc kubenswrapper[4897]: I0228 13:40:12.551454 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/034f5cfe-3216-44b1-9036-078917ab59bb-logs\") pod \"nova-metadata-0\" (UID: 
\"034f5cfe-3216-44b1-9036-078917ab59bb\") " pod="openstack/nova-metadata-0" Feb 28 13:40:12 crc kubenswrapper[4897]: I0228 13:40:12.551472 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/034f5cfe-3216-44b1-9036-078917ab59bb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"034f5cfe-3216-44b1-9036-078917ab59bb\") " pod="openstack/nova-metadata-0" Feb 28 13:40:12 crc kubenswrapper[4897]: I0228 13:40:12.552437 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/034f5cfe-3216-44b1-9036-078917ab59bb-logs\") pod \"nova-metadata-0\" (UID: \"034f5cfe-3216-44b1-9036-078917ab59bb\") " pod="openstack/nova-metadata-0" Feb 28 13:40:12 crc kubenswrapper[4897]: I0228 13:40:12.555213 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/034f5cfe-3216-44b1-9036-078917ab59bb-config-data\") pod \"nova-metadata-0\" (UID: \"034f5cfe-3216-44b1-9036-078917ab59bb\") " pod="openstack/nova-metadata-0" Feb 28 13:40:12 crc kubenswrapper[4897]: I0228 13:40:12.555226 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/034f5cfe-3216-44b1-9036-078917ab59bb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"034f5cfe-3216-44b1-9036-078917ab59bb\") " pod="openstack/nova-metadata-0" Feb 28 13:40:12 crc kubenswrapper[4897]: I0228 13:40:12.564256 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/034f5cfe-3216-44b1-9036-078917ab59bb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"034f5cfe-3216-44b1-9036-078917ab59bb\") " pod="openstack/nova-metadata-0" Feb 28 13:40:12 crc kubenswrapper[4897]: I0228 13:40:12.574410 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-p2rln\" (UniqueName: \"kubernetes.io/projected/034f5cfe-3216-44b1-9036-078917ab59bb-kube-api-access-p2rln\") pod \"nova-metadata-0\" (UID: \"034f5cfe-3216-44b1-9036-078917ab59bb\") " pod="openstack/nova-metadata-0" Feb 28 13:40:12 crc kubenswrapper[4897]: I0228 13:40:12.695900 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 13:40:13 crc kubenswrapper[4897]: I0228 13:40:13.185923 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 13:40:13 crc kubenswrapper[4897]: W0228 13:40:13.189533 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod034f5cfe_3216_44b1_9036_078917ab59bb.slice/crio-4ec7a67f2092d82ce733b6f64e44cb89cde71060da058fa56bde4777d181cd2e WatchSource:0}: Error finding container 4ec7a67f2092d82ce733b6f64e44cb89cde71060da058fa56bde4777d181cd2e: Status 404 returned error can't find the container with id 4ec7a67f2092d82ce733b6f64e44cb89cde71060da058fa56bde4777d181cd2e Feb 28 13:40:13 crc kubenswrapper[4897]: I0228 13:40:13.285535 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"034f5cfe-3216-44b1-9036-078917ab59bb","Type":"ContainerStarted","Data":"4ec7a67f2092d82ce733b6f64e44cb89cde71060da058fa56bde4777d181cd2e"} Feb 28 13:40:14 crc kubenswrapper[4897]: I0228 13:40:14.301149 4897 generic.go:334] "Generic (PLEG): container finished" podID="983b1a77-ab11-41df-b954-a8726742f9e5" containerID="9065bcb0878b4ce3fb9d42c6f4b45270a042dbbc06d0b14c1406475a734ab4ba" exitCode=0 Feb 28 13:40:14 crc kubenswrapper[4897]: I0228 13:40:14.301258 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-gbcft" event={"ID":"983b1a77-ab11-41df-b954-a8726742f9e5","Type":"ContainerDied","Data":"9065bcb0878b4ce3fb9d42c6f4b45270a042dbbc06d0b14c1406475a734ab4ba"} Feb 28 13:40:14 crc 
kubenswrapper[4897]: I0228 13:40:14.304980 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"034f5cfe-3216-44b1-9036-078917ab59bb","Type":"ContainerStarted","Data":"36a4cf815d28624d1f986e28eee20cde7e57b96fe9ce6d7faba48717328ca26f"} Feb 28 13:40:14 crc kubenswrapper[4897]: I0228 13:40:14.305286 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"034f5cfe-3216-44b1-9036-078917ab59bb","Type":"ContainerStarted","Data":"ea1ec315a196925db3dc97025b3add5d38e97cce46d22c82a453ca239e20566a"} Feb 28 13:40:14 crc kubenswrapper[4897]: I0228 13:40:14.342950 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.342931119 podStartE2EDuration="2.342931119s" podCreationTimestamp="2026-02-28 13:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:40:14.336421181 +0000 UTC m=+1428.578741838" watchObservedRunningTime="2026-02-28 13:40:14.342931119 +0000 UTC m=+1428.585251776" Feb 28 13:40:14 crc kubenswrapper[4897]: I0228 13:40:14.883205 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 28 13:40:14 crc kubenswrapper[4897]: I0228 13:40:14.883651 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 28 13:40:14 crc kubenswrapper[4897]: I0228 13:40:14.985398 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 28 13:40:14 crc kubenswrapper[4897]: I0228 13:40:14.985458 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 28 13:40:15 crc kubenswrapper[4897]: I0228 13:40:15.000568 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5cbd9f89f7-sx96f" Feb 28 
13:40:15 crc kubenswrapper[4897]: I0228 13:40:15.029895 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 28 13:40:15 crc kubenswrapper[4897]: I0228 13:40:15.036101 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 28 13:40:15 crc kubenswrapper[4897]: I0228 13:40:15.099775 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79cdbcc745-rbcfg"] Feb 28 13:40:15 crc kubenswrapper[4897]: I0228 13:40:15.100392 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79cdbcc745-rbcfg" podUID="a5f83c96-ea10-4ba7-a711-5d87f1bf412e" containerName="dnsmasq-dns" containerID="cri-o://6d63fe3b3d290fedab1c117f6f6e4c6410336b82bf87d96cef42f9946b8a4b81" gracePeriod=10 Feb 28 13:40:15 crc kubenswrapper[4897]: I0228 13:40:15.328038 4897 generic.go:334] "Generic (PLEG): container finished" podID="a5f83c96-ea10-4ba7-a711-5d87f1bf412e" containerID="6d63fe3b3d290fedab1c117f6f6e4c6410336b82bf87d96cef42f9946b8a4b81" exitCode=0 Feb 28 13:40:15 crc kubenswrapper[4897]: I0228 13:40:15.331771 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cdbcc745-rbcfg" event={"ID":"a5f83c96-ea10-4ba7-a711-5d87f1bf412e","Type":"ContainerDied","Data":"6d63fe3b3d290fedab1c117f6f6e4c6410336b82bf87d96cef42f9946b8a4b81"} Feb 28 13:40:15 crc kubenswrapper[4897]: I0228 13:40:15.374566 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 28 13:40:15 crc kubenswrapper[4897]: I0228 13:40:15.820440 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79cdbcc745-rbcfg" Feb 28 13:40:15 crc kubenswrapper[4897]: I0228 13:40:15.825573 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-gbcft" Feb 28 13:40:15 crc kubenswrapper[4897]: I0228 13:40:15.955072 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5f83c96-ea10-4ba7-a711-5d87f1bf412e-config\") pod \"a5f83c96-ea10-4ba7-a711-5d87f1bf412e\" (UID: \"a5f83c96-ea10-4ba7-a711-5d87f1bf412e\") " Feb 28 13:40:15 crc kubenswrapper[4897]: I0228 13:40:15.955168 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ld8zb\" (UniqueName: \"kubernetes.io/projected/983b1a77-ab11-41df-b954-a8726742f9e5-kube-api-access-ld8zb\") pod \"983b1a77-ab11-41df-b954-a8726742f9e5\" (UID: \"983b1a77-ab11-41df-b954-a8726742f9e5\") " Feb 28 13:40:15 crc kubenswrapper[4897]: I0228 13:40:15.955270 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/983b1a77-ab11-41df-b954-a8726742f9e5-config-data\") pod \"983b1a77-ab11-41df-b954-a8726742f9e5\" (UID: \"983b1a77-ab11-41df-b954-a8726742f9e5\") " Feb 28 13:40:15 crc kubenswrapper[4897]: I0228 13:40:15.955293 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a5f83c96-ea10-4ba7-a711-5d87f1bf412e-dns-swift-storage-0\") pod \"a5f83c96-ea10-4ba7-a711-5d87f1bf412e\" (UID: \"a5f83c96-ea10-4ba7-a711-5d87f1bf412e\") " Feb 28 13:40:15 crc kubenswrapper[4897]: I0228 13:40:15.955324 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/983b1a77-ab11-41df-b954-a8726742f9e5-scripts\") pod \"983b1a77-ab11-41df-b954-a8726742f9e5\" (UID: \"983b1a77-ab11-41df-b954-a8726742f9e5\") " Feb 28 13:40:15 crc kubenswrapper[4897]: I0228 13:40:15.955380 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/a5f83c96-ea10-4ba7-a711-5d87f1bf412e-ovsdbserver-sb\") pod \"a5f83c96-ea10-4ba7-a711-5d87f1bf412e\" (UID: \"a5f83c96-ea10-4ba7-a711-5d87f1bf412e\") " Feb 28 13:40:15 crc kubenswrapper[4897]: I0228 13:40:15.955470 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a5f83c96-ea10-4ba7-a711-5d87f1bf412e-ovsdbserver-nb\") pod \"a5f83c96-ea10-4ba7-a711-5d87f1bf412e\" (UID: \"a5f83c96-ea10-4ba7-a711-5d87f1bf412e\") " Feb 28 13:40:15 crc kubenswrapper[4897]: I0228 13:40:15.955516 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/983b1a77-ab11-41df-b954-a8726742f9e5-combined-ca-bundle\") pod \"983b1a77-ab11-41df-b954-a8726742f9e5\" (UID: \"983b1a77-ab11-41df-b954-a8726742f9e5\") " Feb 28 13:40:15 crc kubenswrapper[4897]: I0228 13:40:15.955554 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5f83c96-ea10-4ba7-a711-5d87f1bf412e-dns-svc\") pod \"a5f83c96-ea10-4ba7-a711-5d87f1bf412e\" (UID: \"a5f83c96-ea10-4ba7-a711-5d87f1bf412e\") " Feb 28 13:40:15 crc kubenswrapper[4897]: I0228 13:40:15.955626 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqg2b\" (UniqueName: \"kubernetes.io/projected/a5f83c96-ea10-4ba7-a711-5d87f1bf412e-kube-api-access-kqg2b\") pod \"a5f83c96-ea10-4ba7-a711-5d87f1bf412e\" (UID: \"a5f83c96-ea10-4ba7-a711-5d87f1bf412e\") " Feb 28 13:40:15 crc kubenswrapper[4897]: I0228 13:40:15.969426 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.209:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 28 13:40:15 crc 
kubenswrapper[4897]: I0228 13:40:15.970391 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.209:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 28 13:40:15 crc kubenswrapper[4897]: I0228 13:40:15.981690 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/983b1a77-ab11-41df-b954-a8726742f9e5-scripts" (OuterVolumeSpecName: "scripts") pod "983b1a77-ab11-41df-b954-a8726742f9e5" (UID: "983b1a77-ab11-41df-b954-a8726742f9e5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:40:15 crc kubenswrapper[4897]: I0228 13:40:15.981911 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/983b1a77-ab11-41df-b954-a8726742f9e5-kube-api-access-ld8zb" (OuterVolumeSpecName: "kube-api-access-ld8zb") pod "983b1a77-ab11-41df-b954-a8726742f9e5" (UID: "983b1a77-ab11-41df-b954-a8726742f9e5"). InnerVolumeSpecName "kube-api-access-ld8zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:40:15 crc kubenswrapper[4897]: I0228 13:40:15.986591 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5f83c96-ea10-4ba7-a711-5d87f1bf412e-kube-api-access-kqg2b" (OuterVolumeSpecName: "kube-api-access-kqg2b") pod "a5f83c96-ea10-4ba7-a711-5d87f1bf412e" (UID: "a5f83c96-ea10-4ba7-a711-5d87f1bf412e"). InnerVolumeSpecName "kube-api-access-kqg2b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:40:16 crc kubenswrapper[4897]: I0228 13:40:16.069573 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/983b1a77-ab11-41df-b954-a8726742f9e5-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:16 crc kubenswrapper[4897]: I0228 13:40:16.069840 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqg2b\" (UniqueName: \"kubernetes.io/projected/a5f83c96-ea10-4ba7-a711-5d87f1bf412e-kube-api-access-kqg2b\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:16 crc kubenswrapper[4897]: I0228 13:40:16.069852 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ld8zb\" (UniqueName: \"kubernetes.io/projected/983b1a77-ab11-41df-b954-a8726742f9e5-kube-api-access-ld8zb\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:16 crc kubenswrapper[4897]: I0228 13:40:16.080498 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/983b1a77-ab11-41df-b954-a8726742f9e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "983b1a77-ab11-41df-b954-a8726742f9e5" (UID: "983b1a77-ab11-41df-b954-a8726742f9e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:40:16 crc kubenswrapper[4897]: I0228 13:40:16.131984 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5f83c96-ea10-4ba7-a711-5d87f1bf412e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a5f83c96-ea10-4ba7-a711-5d87f1bf412e" (UID: "a5f83c96-ea10-4ba7-a711-5d87f1bf412e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:40:16 crc kubenswrapper[4897]: I0228 13:40:16.140771 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5f83c96-ea10-4ba7-a711-5d87f1bf412e-config" (OuterVolumeSpecName: "config") pod "a5f83c96-ea10-4ba7-a711-5d87f1bf412e" (UID: "a5f83c96-ea10-4ba7-a711-5d87f1bf412e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:40:16 crc kubenswrapper[4897]: I0228 13:40:16.146985 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5f83c96-ea10-4ba7-a711-5d87f1bf412e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a5f83c96-ea10-4ba7-a711-5d87f1bf412e" (UID: "a5f83c96-ea10-4ba7-a711-5d87f1bf412e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:40:16 crc kubenswrapper[4897]: I0228 13:40:16.147055 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/983b1a77-ab11-41df-b954-a8726742f9e5-config-data" (OuterVolumeSpecName: "config-data") pod "983b1a77-ab11-41df-b954-a8726742f9e5" (UID: "983b1a77-ab11-41df-b954-a8726742f9e5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:40:16 crc kubenswrapper[4897]: I0228 13:40:16.171619 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5f83c96-ea10-4ba7-a711-5d87f1bf412e-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:16 crc kubenswrapper[4897]: I0228 13:40:16.171652 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/983b1a77-ab11-41df-b954-a8726742f9e5-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:16 crc kubenswrapper[4897]: I0228 13:40:16.171662 4897 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a5f83c96-ea10-4ba7-a711-5d87f1bf412e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:16 crc kubenswrapper[4897]: I0228 13:40:16.171671 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a5f83c96-ea10-4ba7-a711-5d87f1bf412e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:16 crc kubenswrapper[4897]: I0228 13:40:16.171679 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/983b1a77-ab11-41df-b954-a8726742f9e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:16 crc kubenswrapper[4897]: I0228 13:40:16.205581 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5f83c96-ea10-4ba7-a711-5d87f1bf412e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a5f83c96-ea10-4ba7-a711-5d87f1bf412e" (UID: "a5f83c96-ea10-4ba7-a711-5d87f1bf412e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:40:16 crc kubenswrapper[4897]: I0228 13:40:16.207452 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5f83c96-ea10-4ba7-a711-5d87f1bf412e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a5f83c96-ea10-4ba7-a711-5d87f1bf412e" (UID: "a5f83c96-ea10-4ba7-a711-5d87f1bf412e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:40:16 crc kubenswrapper[4897]: I0228 13:40:16.273491 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a5f83c96-ea10-4ba7-a711-5d87f1bf412e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:16 crc kubenswrapper[4897]: I0228 13:40:16.273524 4897 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5f83c96-ea10-4ba7-a711-5d87f1bf412e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:16 crc kubenswrapper[4897]: I0228 13:40:16.337579 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79cdbcc745-rbcfg" Feb 28 13:40:16 crc kubenswrapper[4897]: I0228 13:40:16.337612 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cdbcc745-rbcfg" event={"ID":"a5f83c96-ea10-4ba7-a711-5d87f1bf412e","Type":"ContainerDied","Data":"9cf317770d72d1de71176453818a16568c95678c7d365435cf509e60450f057e"} Feb 28 13:40:16 crc kubenswrapper[4897]: I0228 13:40:16.337686 4897 scope.go:117] "RemoveContainer" containerID="6d63fe3b3d290fedab1c117f6f6e4c6410336b82bf87d96cef42f9946b8a4b81" Feb 28 13:40:16 crc kubenswrapper[4897]: I0228 13:40:16.340095 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-gbcft" event={"ID":"983b1a77-ab11-41df-b954-a8726742f9e5","Type":"ContainerDied","Data":"b3140e568b5ee75dba9c799deb3803c960ece028875be90a8cc623b9f31a7313"} Feb 28 13:40:16 crc kubenswrapper[4897]: I0228 13:40:16.340115 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-gbcft" Feb 28 13:40:16 crc kubenswrapper[4897]: I0228 13:40:16.340132 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3140e568b5ee75dba9c799deb3803c960ece028875be90a8cc623b9f31a7313" Feb 28 13:40:16 crc kubenswrapper[4897]: E0228 13:40:16.348035 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 28 13:40:16 crc kubenswrapper[4897]: E0228 13:40:16.348168 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 13:40:16 crc kubenswrapper[4897]: container 
&Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 28 13:40:16 crc kubenswrapper[4897]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zpqzd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29538100-v4j6s_openshift-infra(b6642318-7bfd-49f2-86e3-0fe4a7ec2709): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 28 13:40:16 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 13:40:16 crc kubenswrapper[4897]: E0228 13:40:16.349401 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29538100-v4j6s" 
podUID="b6642318-7bfd-49f2-86e3-0fe4a7ec2709" Feb 28 13:40:16 crc kubenswrapper[4897]: I0228 13:40:16.386442 4897 scope.go:117] "RemoveContainer" containerID="7c8672d83edb549270607a5be029baed438ce93ed4af26bc194443e6b3d4cecd" Feb 28 13:40:16 crc kubenswrapper[4897]: I0228 13:40:16.408262 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79cdbcc745-rbcfg"] Feb 28 13:40:16 crc kubenswrapper[4897]: I0228 13:40:16.419998 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79cdbcc745-rbcfg"] Feb 28 13:40:16 crc kubenswrapper[4897]: E0228 13:40:16.466263 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:40:16 crc kubenswrapper[4897]: I0228 13:40:16.484686 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5f83c96-ea10-4ba7-a711-5d87f1bf412e" path="/var/lib/kubelet/pods/a5f83c96-ea10-4ba7-a711-5d87f1bf412e/volumes" Feb 28 13:40:16 crc kubenswrapper[4897]: I0228 13:40:16.538459 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 28 13:40:16 crc kubenswrapper[4897]: I0228 13:40:16.538747 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937" containerName="nova-api-log" containerID="cri-o://b6079d44a921f6bd12221fdab96b9592c4c6934c08425240fe335bb577452003" gracePeriod=30 Feb 28 13:40:16 crc kubenswrapper[4897]: I0228 13:40:16.538811 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937" containerName="nova-api-api" 
containerID="cri-o://569ce1c4bdd65d45dce2703331ad8ee9a669986814d2b816ba6ed3877c677002" gracePeriod=30 Feb 28 13:40:16 crc kubenswrapper[4897]: I0228 13:40:16.563292 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 13:40:16 crc kubenswrapper[4897]: I0228 13:40:16.577839 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 13:40:16 crc kubenswrapper[4897]: I0228 13:40:16.578261 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="034f5cfe-3216-44b1-9036-078917ab59bb" containerName="nova-metadata-log" containerID="cri-o://ea1ec315a196925db3dc97025b3add5d38e97cce46d22c82a453ca239e20566a" gracePeriod=30 Feb 28 13:40:16 crc kubenswrapper[4897]: I0228 13:40:16.578799 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="034f5cfe-3216-44b1-9036-078917ab59bb" containerName="nova-metadata-metadata" containerID="cri-o://36a4cf815d28624d1f986e28eee20cde7e57b96fe9ce6d7faba48717328ca26f" gracePeriod=30 Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.093505 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.198139 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/034f5cfe-3216-44b1-9036-078917ab59bb-nova-metadata-tls-certs\") pod \"034f5cfe-3216-44b1-9036-078917ab59bb\" (UID: \"034f5cfe-3216-44b1-9036-078917ab59bb\") " Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.198221 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/034f5cfe-3216-44b1-9036-078917ab59bb-logs\") pod \"034f5cfe-3216-44b1-9036-078917ab59bb\" (UID: \"034f5cfe-3216-44b1-9036-078917ab59bb\") " Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.198246 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/034f5cfe-3216-44b1-9036-078917ab59bb-combined-ca-bundle\") pod \"034f5cfe-3216-44b1-9036-078917ab59bb\" (UID: \"034f5cfe-3216-44b1-9036-078917ab59bb\") " Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.198474 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/034f5cfe-3216-44b1-9036-078917ab59bb-config-data\") pod \"034f5cfe-3216-44b1-9036-078917ab59bb\" (UID: \"034f5cfe-3216-44b1-9036-078917ab59bb\") " Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.198515 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2rln\" (UniqueName: \"kubernetes.io/projected/034f5cfe-3216-44b1-9036-078917ab59bb-kube-api-access-p2rln\") pod \"034f5cfe-3216-44b1-9036-078917ab59bb\" (UID: \"034f5cfe-3216-44b1-9036-078917ab59bb\") " Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.200465 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/034f5cfe-3216-44b1-9036-078917ab59bb-logs" (OuterVolumeSpecName: "logs") pod "034f5cfe-3216-44b1-9036-078917ab59bb" (UID: "034f5cfe-3216-44b1-9036-078917ab59bb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.204573 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/034f5cfe-3216-44b1-9036-078917ab59bb-kube-api-access-p2rln" (OuterVolumeSpecName: "kube-api-access-p2rln") pod "034f5cfe-3216-44b1-9036-078917ab59bb" (UID: "034f5cfe-3216-44b1-9036-078917ab59bb"). InnerVolumeSpecName "kube-api-access-p2rln". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.241066 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/034f5cfe-3216-44b1-9036-078917ab59bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "034f5cfe-3216-44b1-9036-078917ab59bb" (UID: "034f5cfe-3216-44b1-9036-078917ab59bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.241325 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/034f5cfe-3216-44b1-9036-078917ab59bb-config-data" (OuterVolumeSpecName: "config-data") pod "034f5cfe-3216-44b1-9036-078917ab59bb" (UID: "034f5cfe-3216-44b1-9036-078917ab59bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.277391 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/034f5cfe-3216-44b1-9036-078917ab59bb-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "034f5cfe-3216-44b1-9036-078917ab59bb" (UID: "034f5cfe-3216-44b1-9036-078917ab59bb"). 
InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.301456 4897 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/034f5cfe-3216-44b1-9036-078917ab59bb-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.301484 4897 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/034f5cfe-3216-44b1-9036-078917ab59bb-logs\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.301495 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/034f5cfe-3216-44b1-9036-078917ab59bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.301503 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/034f5cfe-3216-44b1-9036-078917ab59bb-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.301512 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2rln\" (UniqueName: \"kubernetes.io/projected/034f5cfe-3216-44b1-9036-078917ab59bb-kube-api-access-p2rln\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.362091 4897 generic.go:334] "Generic (PLEG): container finished" podID="034f5cfe-3216-44b1-9036-078917ab59bb" containerID="36a4cf815d28624d1f986e28eee20cde7e57b96fe9ce6d7faba48717328ca26f" exitCode=0 Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.362132 4897 generic.go:334] "Generic (PLEG): container finished" podID="034f5cfe-3216-44b1-9036-078917ab59bb" containerID="ea1ec315a196925db3dc97025b3add5d38e97cce46d22c82a453ca239e20566a" exitCode=143 Feb 28 13:40:17 crc kubenswrapper[4897]: 
I0228 13:40:17.362148 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.362202 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"034f5cfe-3216-44b1-9036-078917ab59bb","Type":"ContainerDied","Data":"36a4cf815d28624d1f986e28eee20cde7e57b96fe9ce6d7faba48717328ca26f"} Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.362271 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"034f5cfe-3216-44b1-9036-078917ab59bb","Type":"ContainerDied","Data":"ea1ec315a196925db3dc97025b3add5d38e97cce46d22c82a453ca239e20566a"} Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.362286 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"034f5cfe-3216-44b1-9036-078917ab59bb","Type":"ContainerDied","Data":"4ec7a67f2092d82ce733b6f64e44cb89cde71060da058fa56bde4777d181cd2e"} Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.362332 4897 scope.go:117] "RemoveContainer" containerID="36a4cf815d28624d1f986e28eee20cde7e57b96fe9ce6d7faba48717328ca26f" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.373488 4897 generic.go:334] "Generic (PLEG): container finished" podID="b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937" containerID="b6079d44a921f6bd12221fdab96b9592c4c6934c08425240fe335bb577452003" exitCode=143 Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.373545 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937","Type":"ContainerDied","Data":"b6079d44a921f6bd12221fdab96b9592c4c6934c08425240fe335bb577452003"} Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.378573 4897 generic.go:334] "Generic (PLEG): container finished" podID="8c685d21-3cda-45f7-8486-5bb236b5eb43" 
containerID="3470dcefe4ba195eb4155138877e8e98329abed179bb5261587f606a44b8755f" exitCode=0 Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.378725 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="e8e56c81-55af-4ad1-95d8-06dc87adf02b" containerName="nova-scheduler-scheduler" containerID="cri-o://e1435ecc3ad7ebdb6bb5eb9e3881add686114b47b6d993dd55ed51f4b6517ecc" gracePeriod=30 Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.378793 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-rctbp" event={"ID":"8c685d21-3cda-45f7-8486-5bb236b5eb43","Type":"ContainerDied","Data":"3470dcefe4ba195eb4155138877e8e98329abed179bb5261587f606a44b8755f"} Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.397263 4897 scope.go:117] "RemoveContainer" containerID="ea1ec315a196925db3dc97025b3add5d38e97cce46d22c82a453ca239e20566a" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.403371 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.420526 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.458943 4897 scope.go:117] "RemoveContainer" containerID="36a4cf815d28624d1f986e28eee20cde7e57b96fe9ce6d7faba48717328ca26f" Feb 28 13:40:17 crc kubenswrapper[4897]: E0228 13:40:17.459869 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36a4cf815d28624d1f986e28eee20cde7e57b96fe9ce6d7faba48717328ca26f\": container with ID starting with 36a4cf815d28624d1f986e28eee20cde7e57b96fe9ce6d7faba48717328ca26f not found: ID does not exist" containerID="36a4cf815d28624d1f986e28eee20cde7e57b96fe9ce6d7faba48717328ca26f" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.459925 4897 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36a4cf815d28624d1f986e28eee20cde7e57b96fe9ce6d7faba48717328ca26f"} err="failed to get container status \"36a4cf815d28624d1f986e28eee20cde7e57b96fe9ce6d7faba48717328ca26f\": rpc error: code = NotFound desc = could not find container \"36a4cf815d28624d1f986e28eee20cde7e57b96fe9ce6d7faba48717328ca26f\": container with ID starting with 36a4cf815d28624d1f986e28eee20cde7e57b96fe9ce6d7faba48717328ca26f not found: ID does not exist" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.459958 4897 scope.go:117] "RemoveContainer" containerID="ea1ec315a196925db3dc97025b3add5d38e97cce46d22c82a453ca239e20566a" Feb 28 13:40:17 crc kubenswrapper[4897]: E0228 13:40:17.460951 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea1ec315a196925db3dc97025b3add5d38e97cce46d22c82a453ca239e20566a\": container with ID starting with ea1ec315a196925db3dc97025b3add5d38e97cce46d22c82a453ca239e20566a not found: ID does not exist" containerID="ea1ec315a196925db3dc97025b3add5d38e97cce46d22c82a453ca239e20566a" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.461018 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea1ec315a196925db3dc97025b3add5d38e97cce46d22c82a453ca239e20566a"} err="failed to get container status \"ea1ec315a196925db3dc97025b3add5d38e97cce46d22c82a453ca239e20566a\": rpc error: code = NotFound desc = could not find container \"ea1ec315a196925db3dc97025b3add5d38e97cce46d22c82a453ca239e20566a\": container with ID starting with ea1ec315a196925db3dc97025b3add5d38e97cce46d22c82a453ca239e20566a not found: ID does not exist" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.461047 4897 scope.go:117] "RemoveContainer" containerID="36a4cf815d28624d1f986e28eee20cde7e57b96fe9ce6d7faba48717328ca26f" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.461515 4897 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36a4cf815d28624d1f986e28eee20cde7e57b96fe9ce6d7faba48717328ca26f"} err="failed to get container status \"36a4cf815d28624d1f986e28eee20cde7e57b96fe9ce6d7faba48717328ca26f\": rpc error: code = NotFound desc = could not find container \"36a4cf815d28624d1f986e28eee20cde7e57b96fe9ce6d7faba48717328ca26f\": container with ID starting with 36a4cf815d28624d1f986e28eee20cde7e57b96fe9ce6d7faba48717328ca26f not found: ID does not exist" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.461535 4897 scope.go:117] "RemoveContainer" containerID="ea1ec315a196925db3dc97025b3add5d38e97cce46d22c82a453ca239e20566a" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.461918 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea1ec315a196925db3dc97025b3add5d38e97cce46d22c82a453ca239e20566a"} err="failed to get container status \"ea1ec315a196925db3dc97025b3add5d38e97cce46d22c82a453ca239e20566a\": rpc error: code = NotFound desc = could not find container \"ea1ec315a196925db3dc97025b3add5d38e97cce46d22c82a453ca239e20566a\": container with ID starting with ea1ec315a196925db3dc97025b3add5d38e97cce46d22c82a453ca239e20566a not found: ID does not exist" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.476125 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 28 13:40:17 crc kubenswrapper[4897]: E0228 13:40:17.479185 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5f83c96-ea10-4ba7-a711-5d87f1bf412e" containerName="init" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.479279 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5f83c96-ea10-4ba7-a711-5d87f1bf412e" containerName="init" Feb 28 13:40:17 crc kubenswrapper[4897]: E0228 13:40:17.479301 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="983b1a77-ab11-41df-b954-a8726742f9e5" 
containerName="nova-manage" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.479339 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="983b1a77-ab11-41df-b954-a8726742f9e5" containerName="nova-manage" Feb 28 13:40:17 crc kubenswrapper[4897]: E0228 13:40:17.479357 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="034f5cfe-3216-44b1-9036-078917ab59bb" containerName="nova-metadata-metadata" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.479365 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="034f5cfe-3216-44b1-9036-078917ab59bb" containerName="nova-metadata-metadata" Feb 28 13:40:17 crc kubenswrapper[4897]: E0228 13:40:17.479382 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="034f5cfe-3216-44b1-9036-078917ab59bb" containerName="nova-metadata-log" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.479443 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="034f5cfe-3216-44b1-9036-078917ab59bb" containerName="nova-metadata-log" Feb 28 13:40:17 crc kubenswrapper[4897]: E0228 13:40:17.479460 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5f83c96-ea10-4ba7-a711-5d87f1bf412e" containerName="dnsmasq-dns" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.479468 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5f83c96-ea10-4ba7-a711-5d87f1bf412e" containerName="dnsmasq-dns" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.480142 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5f83c96-ea10-4ba7-a711-5d87f1bf412e" containerName="dnsmasq-dns" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.480197 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="983b1a77-ab11-41df-b954-a8726742f9e5" containerName="nova-manage" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.480212 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="034f5cfe-3216-44b1-9036-078917ab59bb" 
containerName="nova-metadata-metadata" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.480263 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="034f5cfe-3216-44b1-9036-078917ab59bb" containerName="nova-metadata-log" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.482432 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.484589 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.485007 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.500770 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.607668 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zzmg\" (UniqueName: \"kubernetes.io/projected/95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d-kube-api-access-5zzmg\") pod \"nova-metadata-0\" (UID: \"95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d\") " pod="openstack/nova-metadata-0" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.607744 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d-logs\") pod \"nova-metadata-0\" (UID: \"95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d\") " pod="openstack/nova-metadata-0" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.607805 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d\") " pod="openstack/nova-metadata-0" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.607824 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d\") " pod="openstack/nova-metadata-0" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.607885 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d-config-data\") pod \"nova-metadata-0\" (UID: \"95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d\") " pod="openstack/nova-metadata-0" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.710117 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d\") " pod="openstack/nova-metadata-0" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.710479 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d\") " pod="openstack/nova-metadata-0" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.710686 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d-config-data\") pod \"nova-metadata-0\" (UID: \"95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d\") " pod="openstack/nova-metadata-0" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 
13:40:17.710841 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zzmg\" (UniqueName: \"kubernetes.io/projected/95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d-kube-api-access-5zzmg\") pod \"nova-metadata-0\" (UID: \"95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d\") " pod="openstack/nova-metadata-0" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.710981 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d-logs\") pod \"nova-metadata-0\" (UID: \"95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d\") " pod="openstack/nova-metadata-0" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.711606 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d-logs\") pod \"nova-metadata-0\" (UID: \"95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d\") " pod="openstack/nova-metadata-0" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.716926 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d\") " pod="openstack/nova-metadata-0" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.717795 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d\") " pod="openstack/nova-metadata-0" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.727534 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d-config-data\") pod 
\"nova-metadata-0\" (UID: \"95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d\") " pod="openstack/nova-metadata-0" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.739224 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zzmg\" (UniqueName: \"kubernetes.io/projected/95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d-kube-api-access-5zzmg\") pod \"nova-metadata-0\" (UID: \"95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d\") " pod="openstack/nova-metadata-0" Feb 28 13:40:17 crc kubenswrapper[4897]: I0228 13:40:17.799933 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 13:40:18 crc kubenswrapper[4897]: W0228 13:40:18.349738 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95a7b71f_427b_4f9d_97eb_af2ebd6f2c4d.slice/crio-c822eb8307ea44333c6d83bd265045fb183a865bf568925f9767f4b495d35a58 WatchSource:0}: Error finding container c822eb8307ea44333c6d83bd265045fb183a865bf568925f9767f4b495d35a58: Status 404 returned error can't find the container with id c822eb8307ea44333c6d83bd265045fb183a865bf568925f9767f4b495d35a58 Feb 28 13:40:18 crc kubenswrapper[4897]: I0228 13:40:18.354781 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 13:40:18 crc kubenswrapper[4897]: I0228 13:40:18.388887 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d","Type":"ContainerStarted","Data":"c822eb8307ea44333c6d83bd265045fb183a865bf568925f9767f4b495d35a58"} Feb 28 13:40:18 crc kubenswrapper[4897]: I0228 13:40:18.494273 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="034f5cfe-3216-44b1-9036-078917ab59bb" path="/var/lib/kubelet/pods/034f5cfe-3216-44b1-9036-078917ab59bb/volumes" Feb 28 13:40:18 crc kubenswrapper[4897]: I0228 13:40:18.737901 4897 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-rctbp" Feb 28 13:40:18 crc kubenswrapper[4897]: I0228 13:40:18.840525 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c685d21-3cda-45f7-8486-5bb236b5eb43-config-data\") pod \"8c685d21-3cda-45f7-8486-5bb236b5eb43\" (UID: \"8c685d21-3cda-45f7-8486-5bb236b5eb43\") " Feb 28 13:40:18 crc kubenswrapper[4897]: I0228 13:40:18.840627 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c685d21-3cda-45f7-8486-5bb236b5eb43-combined-ca-bundle\") pod \"8c685d21-3cda-45f7-8486-5bb236b5eb43\" (UID: \"8c685d21-3cda-45f7-8486-5bb236b5eb43\") " Feb 28 13:40:18 crc kubenswrapper[4897]: I0228 13:40:18.840855 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c685d21-3cda-45f7-8486-5bb236b5eb43-scripts\") pod \"8c685d21-3cda-45f7-8486-5bb236b5eb43\" (UID: \"8c685d21-3cda-45f7-8486-5bb236b5eb43\") " Feb 28 13:40:18 crc kubenswrapper[4897]: I0228 13:40:18.841011 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9k6gx\" (UniqueName: \"kubernetes.io/projected/8c685d21-3cda-45f7-8486-5bb236b5eb43-kube-api-access-9k6gx\") pod \"8c685d21-3cda-45f7-8486-5bb236b5eb43\" (UID: \"8c685d21-3cda-45f7-8486-5bb236b5eb43\") " Feb 28 13:40:18 crc kubenswrapper[4897]: I0228 13:40:18.844688 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c685d21-3cda-45f7-8486-5bb236b5eb43-scripts" (OuterVolumeSpecName: "scripts") pod "8c685d21-3cda-45f7-8486-5bb236b5eb43" (UID: "8c685d21-3cda-45f7-8486-5bb236b5eb43"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:40:18 crc kubenswrapper[4897]: I0228 13:40:18.849705 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c685d21-3cda-45f7-8486-5bb236b5eb43-kube-api-access-9k6gx" (OuterVolumeSpecName: "kube-api-access-9k6gx") pod "8c685d21-3cda-45f7-8486-5bb236b5eb43" (UID: "8c685d21-3cda-45f7-8486-5bb236b5eb43"). InnerVolumeSpecName "kube-api-access-9k6gx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:40:18 crc kubenswrapper[4897]: I0228 13:40:18.867269 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c685d21-3cda-45f7-8486-5bb236b5eb43-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c685d21-3cda-45f7-8486-5bb236b5eb43" (UID: "8c685d21-3cda-45f7-8486-5bb236b5eb43"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:40:18 crc kubenswrapper[4897]: I0228 13:40:18.875507 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c685d21-3cda-45f7-8486-5bb236b5eb43-config-data" (OuterVolumeSpecName: "config-data") pod "8c685d21-3cda-45f7-8486-5bb236b5eb43" (UID: "8c685d21-3cda-45f7-8486-5bb236b5eb43"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:40:18 crc kubenswrapper[4897]: I0228 13:40:18.942816 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c685d21-3cda-45f7-8486-5bb236b5eb43-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:18 crc kubenswrapper[4897]: I0228 13:40:18.942844 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c685d21-3cda-45f7-8486-5bb236b5eb43-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:18 crc kubenswrapper[4897]: I0228 13:40:18.942855 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c685d21-3cda-45f7-8486-5bb236b5eb43-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:18 crc kubenswrapper[4897]: I0228 13:40:18.942864 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9k6gx\" (UniqueName: \"kubernetes.io/projected/8c685d21-3cda-45f7-8486-5bb236b5eb43-kube-api-access-9k6gx\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:19 crc kubenswrapper[4897]: I0228 13:40:19.400412 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d","Type":"ContainerStarted","Data":"a87810745ad69b0517e70d804340480f9a3398ade3c4ada1c59bbe517cf6d4a4"} Feb 28 13:40:19 crc kubenswrapper[4897]: I0228 13:40:19.400466 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d","Type":"ContainerStarted","Data":"74e3e08d8551c0bd74df58ba49879df9133d75f08e9418ff24ab226464774593"} Feb 28 13:40:19 crc kubenswrapper[4897]: I0228 13:40:19.402239 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-rctbp" 
event={"ID":"8c685d21-3cda-45f7-8486-5bb236b5eb43","Type":"ContainerDied","Data":"a1a7c790e3f3d0123610eb071ce4ff3e279ba048b5ff74193d1755c6eab0d7ef"} Feb 28 13:40:19 crc kubenswrapper[4897]: I0228 13:40:19.402603 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1a7c790e3f3d0123610eb071ce4ff3e279ba048b5ff74193d1755c6eab0d7ef" Feb 28 13:40:19 crc kubenswrapper[4897]: I0228 13:40:19.402277 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-rctbp" Feb 28 13:40:19 crc kubenswrapper[4897]: I0228 13:40:19.434013 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.433986823 podStartE2EDuration="2.433986823s" podCreationTimestamp="2026-02-28 13:40:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:40:19.425754637 +0000 UTC m=+1433.668075304" watchObservedRunningTime="2026-02-28 13:40:19.433986823 +0000 UTC m=+1433.676307480" Feb 28 13:40:19 crc kubenswrapper[4897]: I0228 13:40:19.527426 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 28 13:40:19 crc kubenswrapper[4897]: E0228 13:40:19.527847 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c685d21-3cda-45f7-8486-5bb236b5eb43" containerName="nova-cell1-conductor-db-sync" Feb 28 13:40:19 crc kubenswrapper[4897]: I0228 13:40:19.527862 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c685d21-3cda-45f7-8486-5bb236b5eb43" containerName="nova-cell1-conductor-db-sync" Feb 28 13:40:19 crc kubenswrapper[4897]: I0228 13:40:19.528045 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c685d21-3cda-45f7-8486-5bb236b5eb43" containerName="nova-cell1-conductor-db-sync" Feb 28 13:40:19 crc kubenswrapper[4897]: I0228 13:40:19.528671 4897 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 28 13:40:19 crc kubenswrapper[4897]: I0228 13:40:19.533978 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 28 13:40:19 crc kubenswrapper[4897]: I0228 13:40:19.545064 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 28 13:40:19 crc kubenswrapper[4897]: I0228 13:40:19.657690 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8v2z\" (UniqueName: \"kubernetes.io/projected/6f3fc432-044c-4be6-b1b3-049e2d2842d5-kube-api-access-l8v2z\") pod \"nova-cell1-conductor-0\" (UID: \"6f3fc432-044c-4be6-b1b3-049e2d2842d5\") " pod="openstack/nova-cell1-conductor-0" Feb 28 13:40:19 crc kubenswrapper[4897]: I0228 13:40:19.657848 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f3fc432-044c-4be6-b1b3-049e2d2842d5-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6f3fc432-044c-4be6-b1b3-049e2d2842d5\") " pod="openstack/nova-cell1-conductor-0" Feb 28 13:40:19 crc kubenswrapper[4897]: I0228 13:40:19.657917 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f3fc432-044c-4be6-b1b3-049e2d2842d5-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6f3fc432-044c-4be6-b1b3-049e2d2842d5\") " pod="openstack/nova-cell1-conductor-0" Feb 28 13:40:19 crc kubenswrapper[4897]: I0228 13:40:19.759974 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8v2z\" (UniqueName: \"kubernetes.io/projected/6f3fc432-044c-4be6-b1b3-049e2d2842d5-kube-api-access-l8v2z\") pod \"nova-cell1-conductor-0\" (UID: \"6f3fc432-044c-4be6-b1b3-049e2d2842d5\") " 
pod="openstack/nova-cell1-conductor-0" Feb 28 13:40:19 crc kubenswrapper[4897]: I0228 13:40:19.760089 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f3fc432-044c-4be6-b1b3-049e2d2842d5-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6f3fc432-044c-4be6-b1b3-049e2d2842d5\") " pod="openstack/nova-cell1-conductor-0" Feb 28 13:40:19 crc kubenswrapper[4897]: I0228 13:40:19.760186 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f3fc432-044c-4be6-b1b3-049e2d2842d5-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6f3fc432-044c-4be6-b1b3-049e2d2842d5\") " pod="openstack/nova-cell1-conductor-0" Feb 28 13:40:19 crc kubenswrapper[4897]: I0228 13:40:19.765530 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f3fc432-044c-4be6-b1b3-049e2d2842d5-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6f3fc432-044c-4be6-b1b3-049e2d2842d5\") " pod="openstack/nova-cell1-conductor-0" Feb 28 13:40:19 crc kubenswrapper[4897]: I0228 13:40:19.766938 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f3fc432-044c-4be6-b1b3-049e2d2842d5-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6f3fc432-044c-4be6-b1b3-049e2d2842d5\") " pod="openstack/nova-cell1-conductor-0" Feb 28 13:40:19 crc kubenswrapper[4897]: I0228 13:40:19.774766 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8v2z\" (UniqueName: \"kubernetes.io/projected/6f3fc432-044c-4be6-b1b3-049e2d2842d5-kube-api-access-l8v2z\") pod \"nova-cell1-conductor-0\" (UID: \"6f3fc432-044c-4be6-b1b3-049e2d2842d5\") " pod="openstack/nova-cell1-conductor-0" Feb 28 13:40:19 crc kubenswrapper[4897]: I0228 13:40:19.854222 4897 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 28 13:40:20 crc kubenswrapper[4897]: E0228 13:40:19.987710 4897 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e1435ecc3ad7ebdb6bb5eb9e3881add686114b47b6d993dd55ed51f4b6517ecc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 28 13:40:20 crc kubenswrapper[4897]: E0228 13:40:19.989618 4897 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e1435ecc3ad7ebdb6bb5eb9e3881add686114b47b6d993dd55ed51f4b6517ecc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 28 13:40:20 crc kubenswrapper[4897]: E0228 13:40:19.990783 4897 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e1435ecc3ad7ebdb6bb5eb9e3881add686114b47b6d993dd55ed51f4b6517ecc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 28 13:40:20 crc kubenswrapper[4897]: E0228 13:40:19.990820 4897 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="e8e56c81-55af-4ad1-95d8-06dc87adf02b" containerName="nova-scheduler-scheduler" Feb 28 13:40:20 crc kubenswrapper[4897]: I0228 13:40:20.954276 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 28 13:40:21 crc kubenswrapper[4897]: I0228 13:40:21.061655 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 28 13:40:21 crc kubenswrapper[4897]: W0228 13:40:21.067409 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f3fc432_044c_4be6_b1b3_049e2d2842d5.slice/crio-eecd91ad860c21649ec0aa0ff874df017dbe4d86f1cfdbbd10c13b25fa85ebc0 WatchSource:0}: Error finding container eecd91ad860c21649ec0aa0ff874df017dbe4d86f1cfdbbd10c13b25fa85ebc0: Status 404 returned error can't find the container with id eecd91ad860c21649ec0aa0ff874df017dbe4d86f1cfdbbd10c13b25fa85ebc0 Feb 28 13:40:21 crc kubenswrapper[4897]: I0228 13:40:21.088807 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937-combined-ca-bundle\") pod \"b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937\" (UID: \"b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937\") " Feb 28 13:40:21 crc kubenswrapper[4897]: I0228 13:40:21.088965 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5jmq\" (UniqueName: \"kubernetes.io/projected/b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937-kube-api-access-x5jmq\") pod \"b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937\" (UID: \"b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937\") " Feb 28 13:40:21 crc kubenswrapper[4897]: I0228 13:40:21.089076 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937-config-data\") pod \"b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937\" (UID: \"b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937\") " Feb 28 13:40:21 crc kubenswrapper[4897]: I0228 13:40:21.089179 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937-logs\") pod \"b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937\" (UID: \"b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937\") " Feb 28 13:40:21 crc kubenswrapper[4897]: I0228 13:40:21.089993 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937-logs" (OuterVolumeSpecName: "logs") pod "b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937" (UID: "b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:40:21 crc kubenswrapper[4897]: I0228 13:40:21.092044 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937-kube-api-access-x5jmq" (OuterVolumeSpecName: "kube-api-access-x5jmq") pod "b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937" (UID: "b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937"). InnerVolumeSpecName "kube-api-access-x5jmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:40:21 crc kubenswrapper[4897]: E0228 13:40:21.117542 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937-config-data podName:b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937 nodeName:}" failed. No retries permitted until 2026-02-28 13:40:21.61751514 +0000 UTC m=+1435.859835797 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937-config-data") pod "b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937" (UID: "b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937") : error deleting /var/lib/kubelet/pods/b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937/volume-subpaths: remove /var/lib/kubelet/pods/b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937/volume-subpaths: no such file or directory Feb 28 13:40:21 crc kubenswrapper[4897]: I0228 13:40:21.120518 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937" (UID: "b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:40:21 crc kubenswrapper[4897]: I0228 13:40:21.191823 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5jmq\" (UniqueName: \"kubernetes.io/projected/b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937-kube-api-access-x5jmq\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:21 crc kubenswrapper[4897]: I0228 13:40:21.191856 4897 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937-logs\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:21 crc kubenswrapper[4897]: I0228 13:40:21.191867 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:21 crc kubenswrapper[4897]: I0228 13:40:21.425658 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"6f3fc432-044c-4be6-b1b3-049e2d2842d5","Type":"ContainerStarted","Data":"e56c7e88faf80308506a4462d356d2bb855435e7061ab0af7a6925448e7ac5ad"} 
Feb 28 13:40:21 crc kubenswrapper[4897]: I0228 13:40:21.425709 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"6f3fc432-044c-4be6-b1b3-049e2d2842d5","Type":"ContainerStarted","Data":"eecd91ad860c21649ec0aa0ff874df017dbe4d86f1cfdbbd10c13b25fa85ebc0"} Feb 28 13:40:21 crc kubenswrapper[4897]: I0228 13:40:21.425835 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 28 13:40:21 crc kubenswrapper[4897]: I0228 13:40:21.427683 4897 generic.go:334] "Generic (PLEG): container finished" podID="b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937" containerID="569ce1c4bdd65d45dce2703331ad8ee9a669986814d2b816ba6ed3877c677002" exitCode=0 Feb 28 13:40:21 crc kubenswrapper[4897]: I0228 13:40:21.427715 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937","Type":"ContainerDied","Data":"569ce1c4bdd65d45dce2703331ad8ee9a669986814d2b816ba6ed3877c677002"} Feb 28 13:40:21 crc kubenswrapper[4897]: I0228 13:40:21.427738 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937","Type":"ContainerDied","Data":"3765cb7d82aec0207a918f1aa0a601b7f37f7c4ab19d6029b85f315b1217ce94"} Feb 28 13:40:21 crc kubenswrapper[4897]: I0228 13:40:21.427755 4897 scope.go:117] "RemoveContainer" containerID="569ce1c4bdd65d45dce2703331ad8ee9a669986814d2b816ba6ed3877c677002" Feb 28 13:40:21 crc kubenswrapper[4897]: I0228 13:40:21.427854 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 28 13:40:21 crc kubenswrapper[4897]: I0228 13:40:21.443274 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.44325085 podStartE2EDuration="2.44325085s" podCreationTimestamp="2026-02-28 13:40:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:40:21.439148682 +0000 UTC m=+1435.681469339" watchObservedRunningTime="2026-02-28 13:40:21.44325085 +0000 UTC m=+1435.685571527" Feb 28 13:40:21 crc kubenswrapper[4897]: I0228 13:40:21.455176 4897 scope.go:117] "RemoveContainer" containerID="b6079d44a921f6bd12221fdab96b9592c4c6934c08425240fe335bb577452003" Feb 28 13:40:21 crc kubenswrapper[4897]: E0228 13:40:21.457667 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="176f6ed0-e15f-4ee9-afd3-be29ff84d7dc" Feb 28 13:40:21 crc kubenswrapper[4897]: I0228 13:40:21.491019 4897 scope.go:117] "RemoveContainer" containerID="569ce1c4bdd65d45dce2703331ad8ee9a669986814d2b816ba6ed3877c677002" Feb 28 13:40:21 crc kubenswrapper[4897]: E0228 13:40:21.491730 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"569ce1c4bdd65d45dce2703331ad8ee9a669986814d2b816ba6ed3877c677002\": container with ID starting with 569ce1c4bdd65d45dce2703331ad8ee9a669986814d2b816ba6ed3877c677002 not found: ID does not exist" containerID="569ce1c4bdd65d45dce2703331ad8ee9a669986814d2b816ba6ed3877c677002" Feb 28 13:40:21 crc kubenswrapper[4897]: I0228 13:40:21.491770 4897 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"569ce1c4bdd65d45dce2703331ad8ee9a669986814d2b816ba6ed3877c677002"} err="failed to get container status \"569ce1c4bdd65d45dce2703331ad8ee9a669986814d2b816ba6ed3877c677002\": rpc error: code = NotFound desc = could not find container \"569ce1c4bdd65d45dce2703331ad8ee9a669986814d2b816ba6ed3877c677002\": container with ID starting with 569ce1c4bdd65d45dce2703331ad8ee9a669986814d2b816ba6ed3877c677002 not found: ID does not exist" Feb 28 13:40:21 crc kubenswrapper[4897]: I0228 13:40:21.491797 4897 scope.go:117] "RemoveContainer" containerID="b6079d44a921f6bd12221fdab96b9592c4c6934c08425240fe335bb577452003" Feb 28 13:40:21 crc kubenswrapper[4897]: E0228 13:40:21.492193 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6079d44a921f6bd12221fdab96b9592c4c6934c08425240fe335bb577452003\": container with ID starting with b6079d44a921f6bd12221fdab96b9592c4c6934c08425240fe335bb577452003 not found: ID does not exist" containerID="b6079d44a921f6bd12221fdab96b9592c4c6934c08425240fe335bb577452003" Feb 28 13:40:21 crc kubenswrapper[4897]: I0228 13:40:21.492234 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6079d44a921f6bd12221fdab96b9592c4c6934c08425240fe335bb577452003"} err="failed to get container status \"b6079d44a921f6bd12221fdab96b9592c4c6934c08425240fe335bb577452003\": rpc error: code = NotFound desc = could not find container \"b6079d44a921f6bd12221fdab96b9592c4c6934c08425240fe335bb577452003\": container with ID starting with b6079d44a921f6bd12221fdab96b9592c4c6934c08425240fe335bb577452003 not found: ID does not exist" Feb 28 13:40:21 crc kubenswrapper[4897]: I0228 13:40:21.703245 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937-config-data\") pod \"b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937\" 
(UID: \"b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937\") " Feb 28 13:40:21 crc kubenswrapper[4897]: I0228 13:40:21.708446 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937-config-data" (OuterVolumeSpecName: "config-data") pod "b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937" (UID: "b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:40:21 crc kubenswrapper[4897]: I0228 13:40:21.805275 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:21 crc kubenswrapper[4897]: I0228 13:40:21.896964 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 28 13:40:21 crc kubenswrapper[4897]: I0228 13:40:21.908105 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 28 13:40:21 crc kubenswrapper[4897]: I0228 13:40:21.939921 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 28 13:40:21 crc kubenswrapper[4897]: E0228 13:40:21.940441 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937" containerName="nova-api-log" Feb 28 13:40:21 crc kubenswrapper[4897]: I0228 13:40:21.940457 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937" containerName="nova-api-log" Feb 28 13:40:21 crc kubenswrapper[4897]: E0228 13:40:21.940475 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937" containerName="nova-api-api" Feb 28 13:40:21 crc kubenswrapper[4897]: I0228 13:40:21.940484 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937" containerName="nova-api-api" Feb 28 13:40:21 crc kubenswrapper[4897]: I0228 
13:40:21.940737 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937" containerName="nova-api-log" Feb 28 13:40:21 crc kubenswrapper[4897]: I0228 13:40:21.940768 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937" containerName="nova-api-api" Feb 28 13:40:21 crc kubenswrapper[4897]: I0228 13:40:21.942609 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 28 13:40:21 crc kubenswrapper[4897]: I0228 13:40:21.946591 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 28 13:40:21 crc kubenswrapper[4897]: I0228 13:40:21.956770 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.008758 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/527f565b-a9d5-457d-9319-f6bdf58a0afa-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"527f565b-a9d5-457d-9319-f6bdf58a0afa\") " pod="openstack/nova-api-0" Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.008933 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/527f565b-a9d5-457d-9319-f6bdf58a0afa-config-data\") pod \"nova-api-0\" (UID: \"527f565b-a9d5-457d-9319-f6bdf58a0afa\") " pod="openstack/nova-api-0" Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.008961 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/527f565b-a9d5-457d-9319-f6bdf58a0afa-logs\") pod \"nova-api-0\" (UID: \"527f565b-a9d5-457d-9319-f6bdf58a0afa\") " pod="openstack/nova-api-0" Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.009049 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcdcm\" (UniqueName: \"kubernetes.io/projected/527f565b-a9d5-457d-9319-f6bdf58a0afa-kube-api-access-qcdcm\") pod \"nova-api-0\" (UID: \"527f565b-a9d5-457d-9319-f6bdf58a0afa\") " pod="openstack/nova-api-0" Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.111095 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/527f565b-a9d5-457d-9319-f6bdf58a0afa-config-data\") pod \"nova-api-0\" (UID: \"527f565b-a9d5-457d-9319-f6bdf58a0afa\") " pod="openstack/nova-api-0" Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.111454 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/527f565b-a9d5-457d-9319-f6bdf58a0afa-logs\") pod \"nova-api-0\" (UID: \"527f565b-a9d5-457d-9319-f6bdf58a0afa\") " pod="openstack/nova-api-0" Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.111550 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcdcm\" (UniqueName: \"kubernetes.io/projected/527f565b-a9d5-457d-9319-f6bdf58a0afa-kube-api-access-qcdcm\") pod \"nova-api-0\" (UID: \"527f565b-a9d5-457d-9319-f6bdf58a0afa\") " pod="openstack/nova-api-0" Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.111605 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/527f565b-a9d5-457d-9319-f6bdf58a0afa-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"527f565b-a9d5-457d-9319-f6bdf58a0afa\") " pod="openstack/nova-api-0" Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.111872 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/527f565b-a9d5-457d-9319-f6bdf58a0afa-logs\") pod \"nova-api-0\" (UID: 
\"527f565b-a9d5-457d-9319-f6bdf58a0afa\") " pod="openstack/nova-api-0" Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.114747 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/527f565b-a9d5-457d-9319-f6bdf58a0afa-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"527f565b-a9d5-457d-9319-f6bdf58a0afa\") " pod="openstack/nova-api-0" Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.114996 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/527f565b-a9d5-457d-9319-f6bdf58a0afa-config-data\") pod \"nova-api-0\" (UID: \"527f565b-a9d5-457d-9319-f6bdf58a0afa\") " pod="openstack/nova-api-0" Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.130557 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcdcm\" (UniqueName: \"kubernetes.io/projected/527f565b-a9d5-457d-9319-f6bdf58a0afa-kube-api-access-qcdcm\") pod \"nova-api-0\" (UID: \"527f565b-a9d5-457d-9319-f6bdf58a0afa\") " pod="openstack/nova-api-0" Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.154819 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.213587 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmrtg\" (UniqueName: \"kubernetes.io/projected/e8e56c81-55af-4ad1-95d8-06dc87adf02b-kube-api-access-pmrtg\") pod \"e8e56c81-55af-4ad1-95d8-06dc87adf02b\" (UID: \"e8e56c81-55af-4ad1-95d8-06dc87adf02b\") " Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.213685 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8e56c81-55af-4ad1-95d8-06dc87adf02b-config-data\") pod \"e8e56c81-55af-4ad1-95d8-06dc87adf02b\" (UID: \"e8e56c81-55af-4ad1-95d8-06dc87adf02b\") " Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.213846 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8e56c81-55af-4ad1-95d8-06dc87adf02b-combined-ca-bundle\") pod \"e8e56c81-55af-4ad1-95d8-06dc87adf02b\" (UID: \"e8e56c81-55af-4ad1-95d8-06dc87adf02b\") " Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.219248 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8e56c81-55af-4ad1-95d8-06dc87adf02b-kube-api-access-pmrtg" (OuterVolumeSpecName: "kube-api-access-pmrtg") pod "e8e56c81-55af-4ad1-95d8-06dc87adf02b" (UID: "e8e56c81-55af-4ad1-95d8-06dc87adf02b"). InnerVolumeSpecName "kube-api-access-pmrtg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.246394 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8e56c81-55af-4ad1-95d8-06dc87adf02b-config-data" (OuterVolumeSpecName: "config-data") pod "e8e56c81-55af-4ad1-95d8-06dc87adf02b" (UID: "e8e56c81-55af-4ad1-95d8-06dc87adf02b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.249443 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8e56c81-55af-4ad1-95d8-06dc87adf02b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e8e56c81-55af-4ad1-95d8-06dc87adf02b" (UID: "e8e56c81-55af-4ad1-95d8-06dc87adf02b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.272770 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.316672 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8e56c81-55af-4ad1-95d8-06dc87adf02b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.316719 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmrtg\" (UniqueName: \"kubernetes.io/projected/e8e56c81-55af-4ad1-95d8-06dc87adf02b-kube-api-access-pmrtg\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.316743 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8e56c81-55af-4ad1-95d8-06dc87adf02b-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.454922 4897 generic.go:334] "Generic (PLEG): container finished" podID="e8e56c81-55af-4ad1-95d8-06dc87adf02b" containerID="e1435ecc3ad7ebdb6bb5eb9e3881add686114b47b6d993dd55ed51f4b6517ecc" exitCode=0 Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.454979 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"e8e56c81-55af-4ad1-95d8-06dc87adf02b","Type":"ContainerDied","Data":"e1435ecc3ad7ebdb6bb5eb9e3881add686114b47b6d993dd55ed51f4b6517ecc"} Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.455004 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e8e56c81-55af-4ad1-95d8-06dc87adf02b","Type":"ContainerDied","Data":"30bf1d56b4d9468dea36aca7f38d85cafc9d61dcb34fedcb2b926a873b886874"} Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.455020 4897 scope.go:117] "RemoveContainer" containerID="e1435ecc3ad7ebdb6bb5eb9e3881add686114b47b6d993dd55ed51f4b6517ecc" Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.455110 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.474136 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937" path="/var/lib/kubelet/pods/b4fa69ab-7c4a-4a63-a9ac-c7a13ab67937/volumes" Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.563413 4897 scope.go:117] "RemoveContainer" containerID="e1435ecc3ad7ebdb6bb5eb9e3881add686114b47b6d993dd55ed51f4b6517ecc" Feb 28 13:40:22 crc kubenswrapper[4897]: E0228 13:40:22.566435 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1435ecc3ad7ebdb6bb5eb9e3881add686114b47b6d993dd55ed51f4b6517ecc\": container with ID starting with e1435ecc3ad7ebdb6bb5eb9e3881add686114b47b6d993dd55ed51f4b6517ecc not found: ID does not exist" containerID="e1435ecc3ad7ebdb6bb5eb9e3881add686114b47b6d993dd55ed51f4b6517ecc" Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.566482 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1435ecc3ad7ebdb6bb5eb9e3881add686114b47b6d993dd55ed51f4b6517ecc"} err="failed to get container status 
\"e1435ecc3ad7ebdb6bb5eb9e3881add686114b47b6d993dd55ed51f4b6517ecc\": rpc error: code = NotFound desc = could not find container \"e1435ecc3ad7ebdb6bb5eb9e3881add686114b47b6d993dd55ed51f4b6517ecc\": container with ID starting with e1435ecc3ad7ebdb6bb5eb9e3881add686114b47b6d993dd55ed51f4b6517ecc not found: ID does not exist" Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.571295 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.610496 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.618520 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 13:40:22 crc kubenswrapper[4897]: E0228 13:40:22.619384 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8e56c81-55af-4ad1-95d8-06dc87adf02b" containerName="nova-scheduler-scheduler" Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.619406 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8e56c81-55af-4ad1-95d8-06dc87adf02b" containerName="nova-scheduler-scheduler" Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.620771 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8e56c81-55af-4ad1-95d8-06dc87adf02b" containerName="nova-scheduler-scheduler" Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.622879 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.624139 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.637779 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.725063 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdhhz\" (UniqueName: \"kubernetes.io/projected/e3be6c4a-2460-4245-94f8-36fcc969da66-kube-api-access-bdhhz\") pod \"nova-scheduler-0\" (UID: \"e3be6c4a-2460-4245-94f8-36fcc969da66\") " pod="openstack/nova-scheduler-0" Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.725980 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3be6c4a-2460-4245-94f8-36fcc969da66-config-data\") pod \"nova-scheduler-0\" (UID: \"e3be6c4a-2460-4245-94f8-36fcc969da66\") " pod="openstack/nova-scheduler-0" Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.726256 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3be6c4a-2460-4245-94f8-36fcc969da66-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e3be6c4a-2460-4245-94f8-36fcc969da66\") " pod="openstack/nova-scheduler-0" Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.800446 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.801350 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.817799 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-api-0"] Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.828130 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3be6c4a-2460-4245-94f8-36fcc969da66-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e3be6c4a-2460-4245-94f8-36fcc969da66\") " pod="openstack/nova-scheduler-0" Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.828436 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdhhz\" (UniqueName: \"kubernetes.io/projected/e3be6c4a-2460-4245-94f8-36fcc969da66-kube-api-access-bdhhz\") pod \"nova-scheduler-0\" (UID: \"e3be6c4a-2460-4245-94f8-36fcc969da66\") " pod="openstack/nova-scheduler-0" Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.828580 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3be6c4a-2460-4245-94f8-36fcc969da66-config-data\") pod \"nova-scheduler-0\" (UID: \"e3be6c4a-2460-4245-94f8-36fcc969da66\") " pod="openstack/nova-scheduler-0" Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.833627 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3be6c4a-2460-4245-94f8-36fcc969da66-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e3be6c4a-2460-4245-94f8-36fcc969da66\") " pod="openstack/nova-scheduler-0" Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.834187 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3be6c4a-2460-4245-94f8-36fcc969da66-config-data\") pod \"nova-scheduler-0\" (UID: \"e3be6c4a-2460-4245-94f8-36fcc969da66\") " pod="openstack/nova-scheduler-0" Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.844818 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-bdhhz\" (UniqueName: \"kubernetes.io/projected/e3be6c4a-2460-4245-94f8-36fcc969da66-kube-api-access-bdhhz\") pod \"nova-scheduler-0\" (UID: \"e3be6c4a-2460-4245-94f8-36fcc969da66\") " pod="openstack/nova-scheduler-0" Feb 28 13:40:22 crc kubenswrapper[4897]: I0228 13:40:22.942871 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 28 13:40:23 crc kubenswrapper[4897]: I0228 13:40:23.449810 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 13:40:23 crc kubenswrapper[4897]: W0228 13:40:23.454899 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3be6c4a_2460_4245_94f8_36fcc969da66.slice/crio-dac4dd60dd0a7e52676dcd78f29ea12174161d9a3ee259493f167b748374d135 WatchSource:0}: Error finding container dac4dd60dd0a7e52676dcd78f29ea12174161d9a3ee259493f167b748374d135: Status 404 returned error can't find the container with id dac4dd60dd0a7e52676dcd78f29ea12174161d9a3ee259493f167b748374d135 Feb 28 13:40:23 crc kubenswrapper[4897]: E0228 13:40:23.459288 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" Feb 28 13:40:23 crc kubenswrapper[4897]: I0228 13:40:23.480613 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"527f565b-a9d5-457d-9319-f6bdf58a0afa","Type":"ContainerStarted","Data":"b1445d801029c649b351ec166cc086cdf4c22ca361a1b67962b7dfb4b6d956c1"} Feb 28 13:40:23 crc kubenswrapper[4897]: I0228 13:40:23.480662 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"527f565b-a9d5-457d-9319-f6bdf58a0afa","Type":"ContainerStarted","Data":"46cc59699b427e281eee38b4ccec650a4fea3fcb4eb36b39bdc1da06d5e2d78b"} Feb 28 13:40:23 crc kubenswrapper[4897]: I0228 13:40:23.480673 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"527f565b-a9d5-457d-9319-f6bdf58a0afa","Type":"ContainerStarted","Data":"1638858c752f8ef3e2dde7a558f9088cb93173b89bd494169db438ed6fb3f1ec"} Feb 28 13:40:23 crc kubenswrapper[4897]: I0228 13:40:23.494573 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e3be6c4a-2460-4245-94f8-36fcc969da66","Type":"ContainerStarted","Data":"dac4dd60dd0a7e52676dcd78f29ea12174161d9a3ee259493f167b748374d135"} Feb 28 13:40:23 crc kubenswrapper[4897]: I0228 13:40:23.553478 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.55346237 podStartE2EDuration="2.55346237s" podCreationTimestamp="2026-02-28 13:40:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:40:23.552211323 +0000 UTC m=+1437.794531980" watchObservedRunningTime="2026-02-28 13:40:23.55346237 +0000 UTC m=+1437.795783027" Feb 28 13:40:24 crc kubenswrapper[4897]: I0228 13:40:24.478147 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8e56c81-55af-4ad1-95d8-06dc87adf02b" path="/var/lib/kubelet/pods/e8e56c81-55af-4ad1-95d8-06dc87adf02b/volumes" Feb 28 13:40:24 crc kubenswrapper[4897]: I0228 13:40:24.514656 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e3be6c4a-2460-4245-94f8-36fcc969da66","Type":"ContainerStarted","Data":"d7088f13b2a564ace0610f91be6aaddd1282ab863dbc9bbc6875bb745292a1c1"} Feb 28 13:40:24 crc kubenswrapper[4897]: I0228 13:40:24.545941 4897 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.545911488 podStartE2EDuration="2.545911488s" podCreationTimestamp="2026-02-28 13:40:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:40:24.538032491 +0000 UTC m=+1438.780353188" watchObservedRunningTime="2026-02-28 13:40:24.545911488 +0000 UTC m=+1438.788232185" Feb 28 13:40:27 crc kubenswrapper[4897]: I0228 13:40:27.800337 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 28 13:40:27 crc kubenswrapper[4897]: I0228 13:40:27.800814 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 28 13:40:27 crc kubenswrapper[4897]: I0228 13:40:27.943756 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 28 13:40:28 crc kubenswrapper[4897]: I0228 13:40:28.814641 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.216:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 28 13:40:28 crc kubenswrapper[4897]: I0228 13:40:28.814641 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.216:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 28 13:40:29 crc kubenswrapper[4897]: E0228 13:40:29.461609 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" 
pod="openshift-infra/auto-csr-approver-29538100-v4j6s" podUID="b6642318-7bfd-49f2-86e3-0fe4a7ec2709" Feb 28 13:40:29 crc kubenswrapper[4897]: E0228 13:40:29.461592 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:40:29 crc kubenswrapper[4897]: I0228 13:40:29.889138 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 28 13:40:32 crc kubenswrapper[4897]: I0228 13:40:32.273600 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 28 13:40:32 crc kubenswrapper[4897]: I0228 13:40:32.273944 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 28 13:40:32 crc kubenswrapper[4897]: I0228 13:40:32.943678 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 28 13:40:32 crc kubenswrapper[4897]: I0228 13:40:32.976540 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 28 13:40:33 crc kubenswrapper[4897]: I0228 13:40:33.355474 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="527f565b-a9d5-457d-9319-f6bdf58a0afa" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.218:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 28 13:40:33 crc kubenswrapper[4897]: I0228 13:40:33.355491 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="527f565b-a9d5-457d-9319-f6bdf58a0afa" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.218:8774/\": context 
deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 28 13:40:33 crc kubenswrapper[4897]: I0228 13:40:33.687136 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 28 13:40:36 crc kubenswrapper[4897]: E0228 13:40:36.472774 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" Feb 28 13:40:37 crc kubenswrapper[4897]: E0228 13:40:37.318348 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)" image="registry.redhat.io/ubi9/httpd-24:latest" Feb 28 13:40:37 crc kubenswrapper[4897]: E0228 13:40:37.318633 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9qr7w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(176f6ed0-e15f-4ee9-afd3-be29ff84d7dc): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 13:40:37 crc kubenswrapper[4897]: E0228 13:40:37.320242 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)\"" pod="openstack/ceilometer-0" podUID="176f6ed0-e15f-4ee9-afd3-be29ff84d7dc" Feb 28 13:40:37 crc kubenswrapper[4897]: I0228 13:40:37.814172 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 28 13:40:37 crc kubenswrapper[4897]: I0228 13:40:37.814283 4897 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 28 13:40:37 crc kubenswrapper[4897]: I0228 13:40:37.837390 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 28 13:40:37 crc kubenswrapper[4897]: I0228 13:40:37.838266 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 28 13:40:40 crc kubenswrapper[4897]: E0228 13:40:40.464270 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:40:40 crc kubenswrapper[4897]: I0228 13:40:40.670826 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 28 13:40:40 crc kubenswrapper[4897]: I0228 13:40:40.723333 4897 generic.go:334] "Generic (PLEG): container finished" podID="622fb260-2971-4b10-b1f4-b52bfd89de49" containerID="3c4f32645d3f9c711ce880e076685a970a08ca2618e9f49eccc66b451cbbc6b0" exitCode=137 Feb 28 13:40:40 crc kubenswrapper[4897]: I0228 13:40:40.723371 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"622fb260-2971-4b10-b1f4-b52bfd89de49","Type":"ContainerDied","Data":"3c4f32645d3f9c711ce880e076685a970a08ca2618e9f49eccc66b451cbbc6b0"} Feb 28 13:40:40 crc kubenswrapper[4897]: I0228 13:40:40.723395 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"622fb260-2971-4b10-b1f4-b52bfd89de49","Type":"ContainerDied","Data":"fded66dd44be27eac5f0ec6dfe40289c0a91c3afe59e3901e464b1f6f13086ce"} Feb 28 13:40:40 crc kubenswrapper[4897]: I0228 13:40:40.723412 4897 scope.go:117] "RemoveContainer" 
containerID="3c4f32645d3f9c711ce880e076685a970a08ca2618e9f49eccc66b451cbbc6b0" Feb 28 13:40:40 crc kubenswrapper[4897]: I0228 13:40:40.723546 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 28 13:40:40 crc kubenswrapper[4897]: I0228 13:40:40.733573 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/622fb260-2971-4b10-b1f4-b52bfd89de49-config-data\") pod \"622fb260-2971-4b10-b1f4-b52bfd89de49\" (UID: \"622fb260-2971-4b10-b1f4-b52bfd89de49\") " Feb 28 13:40:40 crc kubenswrapper[4897]: I0228 13:40:40.733877 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sq7r4\" (UniqueName: \"kubernetes.io/projected/622fb260-2971-4b10-b1f4-b52bfd89de49-kube-api-access-sq7r4\") pod \"622fb260-2971-4b10-b1f4-b52bfd89de49\" (UID: \"622fb260-2971-4b10-b1f4-b52bfd89de49\") " Feb 28 13:40:40 crc kubenswrapper[4897]: I0228 13:40:40.734009 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/622fb260-2971-4b10-b1f4-b52bfd89de49-combined-ca-bundle\") pod \"622fb260-2971-4b10-b1f4-b52bfd89de49\" (UID: \"622fb260-2971-4b10-b1f4-b52bfd89de49\") " Feb 28 13:40:40 crc kubenswrapper[4897]: I0228 13:40:40.742590 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/622fb260-2971-4b10-b1f4-b52bfd89de49-kube-api-access-sq7r4" (OuterVolumeSpecName: "kube-api-access-sq7r4") pod "622fb260-2971-4b10-b1f4-b52bfd89de49" (UID: "622fb260-2971-4b10-b1f4-b52bfd89de49"). InnerVolumeSpecName "kube-api-access-sq7r4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:40:40 crc kubenswrapper[4897]: I0228 13:40:40.753657 4897 scope.go:117] "RemoveContainer" containerID="3c4f32645d3f9c711ce880e076685a970a08ca2618e9f49eccc66b451cbbc6b0" Feb 28 13:40:40 crc kubenswrapper[4897]: E0228 13:40:40.756273 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c4f32645d3f9c711ce880e076685a970a08ca2618e9f49eccc66b451cbbc6b0\": container with ID starting with 3c4f32645d3f9c711ce880e076685a970a08ca2618e9f49eccc66b451cbbc6b0 not found: ID does not exist" containerID="3c4f32645d3f9c711ce880e076685a970a08ca2618e9f49eccc66b451cbbc6b0" Feb 28 13:40:40 crc kubenswrapper[4897]: I0228 13:40:40.756341 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c4f32645d3f9c711ce880e076685a970a08ca2618e9f49eccc66b451cbbc6b0"} err="failed to get container status \"3c4f32645d3f9c711ce880e076685a970a08ca2618e9f49eccc66b451cbbc6b0\": rpc error: code = NotFound desc = could not find container \"3c4f32645d3f9c711ce880e076685a970a08ca2618e9f49eccc66b451cbbc6b0\": container with ID starting with 3c4f32645d3f9c711ce880e076685a970a08ca2618e9f49eccc66b451cbbc6b0 not found: ID does not exist" Feb 28 13:40:40 crc kubenswrapper[4897]: I0228 13:40:40.764998 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/622fb260-2971-4b10-b1f4-b52bfd89de49-config-data" (OuterVolumeSpecName: "config-data") pod "622fb260-2971-4b10-b1f4-b52bfd89de49" (UID: "622fb260-2971-4b10-b1f4-b52bfd89de49"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:40:40 crc kubenswrapper[4897]: I0228 13:40:40.776075 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/622fb260-2971-4b10-b1f4-b52bfd89de49-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "622fb260-2971-4b10-b1f4-b52bfd89de49" (UID: "622fb260-2971-4b10-b1f4-b52bfd89de49"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:40:40 crc kubenswrapper[4897]: I0228 13:40:40.836515 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sq7r4\" (UniqueName: \"kubernetes.io/projected/622fb260-2971-4b10-b1f4-b52bfd89de49-kube-api-access-sq7r4\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:40 crc kubenswrapper[4897]: I0228 13:40:40.836565 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/622fb260-2971-4b10-b1f4-b52bfd89de49-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:40 crc kubenswrapper[4897]: I0228 13:40:40.836575 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/622fb260-2971-4b10-b1f4-b52bfd89de49-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:41 crc kubenswrapper[4897]: I0228 13:40:41.065082 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 28 13:40:41 crc kubenswrapper[4897]: I0228 13:40:41.077995 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 28 13:40:41 crc kubenswrapper[4897]: I0228 13:40:41.097812 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 28 13:40:41 crc kubenswrapper[4897]: E0228 13:40:41.098291 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="622fb260-2971-4b10-b1f4-b52bfd89de49" 
containerName="nova-cell1-novncproxy-novncproxy" Feb 28 13:40:41 crc kubenswrapper[4897]: I0228 13:40:41.098325 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="622fb260-2971-4b10-b1f4-b52bfd89de49" containerName="nova-cell1-novncproxy-novncproxy" Feb 28 13:40:41 crc kubenswrapper[4897]: I0228 13:40:41.098532 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="622fb260-2971-4b10-b1f4-b52bfd89de49" containerName="nova-cell1-novncproxy-novncproxy" Feb 28 13:40:41 crc kubenswrapper[4897]: I0228 13:40:41.099323 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 28 13:40:41 crc kubenswrapper[4897]: I0228 13:40:41.101164 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 28 13:40:41 crc kubenswrapper[4897]: I0228 13:40:41.102028 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 28 13:40:41 crc kubenswrapper[4897]: I0228 13:40:41.104190 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 28 13:40:41 crc kubenswrapper[4897]: I0228 13:40:41.130507 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 28 13:40:41 crc kubenswrapper[4897]: I0228 13:40:41.243919 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b200a830-20fd-475c-bf9f-7c17ae963355-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b200a830-20fd-475c-bf9f-7c17ae963355\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 13:40:41 crc kubenswrapper[4897]: I0228 13:40:41.243982 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b200a830-20fd-475c-bf9f-7c17ae963355-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b200a830-20fd-475c-bf9f-7c17ae963355\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 13:40:41 crc kubenswrapper[4897]: I0228 13:40:41.244087 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b200a830-20fd-475c-bf9f-7c17ae963355-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b200a830-20fd-475c-bf9f-7c17ae963355\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 13:40:41 crc kubenswrapper[4897]: I0228 13:40:41.244173 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbw97\" (UniqueName: \"kubernetes.io/projected/b200a830-20fd-475c-bf9f-7c17ae963355-kube-api-access-wbw97\") pod \"nova-cell1-novncproxy-0\" (UID: \"b200a830-20fd-475c-bf9f-7c17ae963355\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 13:40:41 crc kubenswrapper[4897]: I0228 13:40:41.244302 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b200a830-20fd-475c-bf9f-7c17ae963355-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b200a830-20fd-475c-bf9f-7c17ae963355\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 13:40:41 crc kubenswrapper[4897]: I0228 13:40:41.346889 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbw97\" (UniqueName: \"kubernetes.io/projected/b200a830-20fd-475c-bf9f-7c17ae963355-kube-api-access-wbw97\") pod \"nova-cell1-novncproxy-0\" (UID: \"b200a830-20fd-475c-bf9f-7c17ae963355\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 13:40:41 crc kubenswrapper[4897]: I0228 13:40:41.347040 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/b200a830-20fd-475c-bf9f-7c17ae963355-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b200a830-20fd-475c-bf9f-7c17ae963355\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 13:40:41 crc kubenswrapper[4897]: I0228 13:40:41.347266 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b200a830-20fd-475c-bf9f-7c17ae963355-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b200a830-20fd-475c-bf9f-7c17ae963355\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 13:40:41 crc kubenswrapper[4897]: I0228 13:40:41.347352 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b200a830-20fd-475c-bf9f-7c17ae963355-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b200a830-20fd-475c-bf9f-7c17ae963355\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 13:40:41 crc kubenswrapper[4897]: I0228 13:40:41.347408 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b200a830-20fd-475c-bf9f-7c17ae963355-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b200a830-20fd-475c-bf9f-7c17ae963355\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 13:40:41 crc kubenswrapper[4897]: I0228 13:40:41.351756 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b200a830-20fd-475c-bf9f-7c17ae963355-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b200a830-20fd-475c-bf9f-7c17ae963355\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 13:40:41 crc kubenswrapper[4897]: I0228 13:40:41.351783 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b200a830-20fd-475c-bf9f-7c17ae963355-config-data\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"b200a830-20fd-475c-bf9f-7c17ae963355\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 13:40:41 crc kubenswrapper[4897]: I0228 13:40:41.352275 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b200a830-20fd-475c-bf9f-7c17ae963355-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b200a830-20fd-475c-bf9f-7c17ae963355\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 13:40:41 crc kubenswrapper[4897]: I0228 13:40:41.353827 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b200a830-20fd-475c-bf9f-7c17ae963355-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b200a830-20fd-475c-bf9f-7c17ae963355\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 13:40:41 crc kubenswrapper[4897]: E0228 13:40:41.361756 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 28 13:40:41 crc kubenswrapper[4897]: E0228 13:40:41.361915 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 13:40:41 crc kubenswrapper[4897]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 28 13:40:41 crc kubenswrapper[4897]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zpqzd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29538100-v4j6s_openshift-infra(b6642318-7bfd-49f2-86e3-0fe4a7ec2709): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 28 13:40:41 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 13:40:41 crc kubenswrapper[4897]: E0228 13:40:41.363189 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29538100-v4j6s" podUID="b6642318-7bfd-49f2-86e3-0fe4a7ec2709" Feb 28 13:40:41 crc kubenswrapper[4897]: I0228 13:40:41.365285 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbw97\" (UniqueName: \"kubernetes.io/projected/b200a830-20fd-475c-bf9f-7c17ae963355-kube-api-access-wbw97\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"b200a830-20fd-475c-bf9f-7c17ae963355\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 13:40:41 crc kubenswrapper[4897]: I0228 13:40:41.436025 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 28 13:40:41 crc kubenswrapper[4897]: I0228 13:40:41.923214 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 28 13:40:42 crc kubenswrapper[4897]: I0228 13:40:42.281061 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 28 13:40:42 crc kubenswrapper[4897]: I0228 13:40:42.282116 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 28 13:40:42 crc kubenswrapper[4897]: I0228 13:40:42.285496 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 28 13:40:42 crc kubenswrapper[4897]: I0228 13:40:42.291607 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 28 13:40:42 crc kubenswrapper[4897]: I0228 13:40:42.474112 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="622fb260-2971-4b10-b1f4-b52bfd89de49" path="/var/lib/kubelet/pods/622fb260-2971-4b10-b1f4-b52bfd89de49/volumes" Feb 28 13:40:42 crc kubenswrapper[4897]: I0228 13:40:42.745727 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b200a830-20fd-475c-bf9f-7c17ae963355","Type":"ContainerStarted","Data":"888b4e4229cf8e7a1cfe1cfb3549ce4761e532f74f2a56c8d1cfb7d55a868be6"} Feb 28 13:40:42 crc kubenswrapper[4897]: I0228 13:40:42.746084 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b200a830-20fd-475c-bf9f-7c17ae963355","Type":"ContainerStarted","Data":"45799cd3206062d17df512492477418ffebda62afa769362aa4af8589f5694bd"} Feb 28 13:40:42 crc 
kubenswrapper[4897]: I0228 13:40:42.746486 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 28 13:40:42 crc kubenswrapper[4897]: I0228 13:40:42.767879 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 28 13:40:42 crc kubenswrapper[4897]: I0228 13:40:42.776828 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.7767992910000001 podStartE2EDuration="1.776799291s" podCreationTimestamp="2026-02-28 13:40:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:40:42.770197631 +0000 UTC m=+1457.012518298" watchObservedRunningTime="2026-02-28 13:40:42.776799291 +0000 UTC m=+1457.019119948" Feb 28 13:40:43 crc kubenswrapper[4897]: I0228 13:40:43.021600 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-596dcdd889-4frbq"] Feb 28 13:40:43 crc kubenswrapper[4897]: I0228 13:40:43.023740 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-596dcdd889-4frbq" Feb 28 13:40:43 crc kubenswrapper[4897]: I0228 13:40:43.046126 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-596dcdd889-4frbq"] Feb 28 13:40:43 crc kubenswrapper[4897]: I0228 13:40:43.092418 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-849kb\" (UniqueName: \"kubernetes.io/projected/83003cdb-d775-4878-97e7-453c0a1f2ae5-kube-api-access-849kb\") pod \"dnsmasq-dns-596dcdd889-4frbq\" (UID: \"83003cdb-d775-4878-97e7-453c0a1f2ae5\") " pod="openstack/dnsmasq-dns-596dcdd889-4frbq" Feb 28 13:40:43 crc kubenswrapper[4897]: I0228 13:40:43.092523 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83003cdb-d775-4878-97e7-453c0a1f2ae5-config\") pod \"dnsmasq-dns-596dcdd889-4frbq\" (UID: \"83003cdb-d775-4878-97e7-453c0a1f2ae5\") " pod="openstack/dnsmasq-dns-596dcdd889-4frbq" Feb 28 13:40:43 crc kubenswrapper[4897]: I0228 13:40:43.092587 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/83003cdb-d775-4878-97e7-453c0a1f2ae5-dns-swift-storage-0\") pod \"dnsmasq-dns-596dcdd889-4frbq\" (UID: \"83003cdb-d775-4878-97e7-453c0a1f2ae5\") " pod="openstack/dnsmasq-dns-596dcdd889-4frbq" Feb 28 13:40:43 crc kubenswrapper[4897]: I0228 13:40:43.092626 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83003cdb-d775-4878-97e7-453c0a1f2ae5-dns-svc\") pod \"dnsmasq-dns-596dcdd889-4frbq\" (UID: \"83003cdb-d775-4878-97e7-453c0a1f2ae5\") " pod="openstack/dnsmasq-dns-596dcdd889-4frbq" Feb 28 13:40:43 crc kubenswrapper[4897]: I0228 13:40:43.092661 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/83003cdb-d775-4878-97e7-453c0a1f2ae5-ovsdbserver-sb\") pod \"dnsmasq-dns-596dcdd889-4frbq\" (UID: \"83003cdb-d775-4878-97e7-453c0a1f2ae5\") " pod="openstack/dnsmasq-dns-596dcdd889-4frbq" Feb 28 13:40:43 crc kubenswrapper[4897]: I0228 13:40:43.092690 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/83003cdb-d775-4878-97e7-453c0a1f2ae5-ovsdbserver-nb\") pod \"dnsmasq-dns-596dcdd889-4frbq\" (UID: \"83003cdb-d775-4878-97e7-453c0a1f2ae5\") " pod="openstack/dnsmasq-dns-596dcdd889-4frbq" Feb 28 13:40:43 crc kubenswrapper[4897]: I0228 13:40:43.194349 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83003cdb-d775-4878-97e7-453c0a1f2ae5-dns-svc\") pod \"dnsmasq-dns-596dcdd889-4frbq\" (UID: \"83003cdb-d775-4878-97e7-453c0a1f2ae5\") " pod="openstack/dnsmasq-dns-596dcdd889-4frbq" Feb 28 13:40:43 crc kubenswrapper[4897]: I0228 13:40:43.194564 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/83003cdb-d775-4878-97e7-453c0a1f2ae5-ovsdbserver-sb\") pod \"dnsmasq-dns-596dcdd889-4frbq\" (UID: \"83003cdb-d775-4878-97e7-453c0a1f2ae5\") " pod="openstack/dnsmasq-dns-596dcdd889-4frbq" Feb 28 13:40:43 crc kubenswrapper[4897]: I0228 13:40:43.194654 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/83003cdb-d775-4878-97e7-453c0a1f2ae5-ovsdbserver-nb\") pod \"dnsmasq-dns-596dcdd889-4frbq\" (UID: \"83003cdb-d775-4878-97e7-453c0a1f2ae5\") " pod="openstack/dnsmasq-dns-596dcdd889-4frbq" Feb 28 13:40:43 crc kubenswrapper[4897]: I0228 13:40:43.194945 4897 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-849kb\" (UniqueName: \"kubernetes.io/projected/83003cdb-d775-4878-97e7-453c0a1f2ae5-kube-api-access-849kb\") pod \"dnsmasq-dns-596dcdd889-4frbq\" (UID: \"83003cdb-d775-4878-97e7-453c0a1f2ae5\") " pod="openstack/dnsmasq-dns-596dcdd889-4frbq" Feb 28 13:40:43 crc kubenswrapper[4897]: I0228 13:40:43.195869 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83003cdb-d775-4878-97e7-453c0a1f2ae5-config\") pod \"dnsmasq-dns-596dcdd889-4frbq\" (UID: \"83003cdb-d775-4878-97e7-453c0a1f2ae5\") " pod="openstack/dnsmasq-dns-596dcdd889-4frbq" Feb 28 13:40:43 crc kubenswrapper[4897]: I0228 13:40:43.195768 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/83003cdb-d775-4878-97e7-453c0a1f2ae5-ovsdbserver-nb\") pod \"dnsmasq-dns-596dcdd889-4frbq\" (UID: \"83003cdb-d775-4878-97e7-453c0a1f2ae5\") " pod="openstack/dnsmasq-dns-596dcdd889-4frbq" Feb 28 13:40:43 crc kubenswrapper[4897]: I0228 13:40:43.195667 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/83003cdb-d775-4878-97e7-453c0a1f2ae5-ovsdbserver-sb\") pod \"dnsmasq-dns-596dcdd889-4frbq\" (UID: \"83003cdb-d775-4878-97e7-453c0a1f2ae5\") " pod="openstack/dnsmasq-dns-596dcdd889-4frbq" Feb 28 13:40:43 crc kubenswrapper[4897]: I0228 13:40:43.196740 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83003cdb-d775-4878-97e7-453c0a1f2ae5-dns-svc\") pod \"dnsmasq-dns-596dcdd889-4frbq\" (UID: \"83003cdb-d775-4878-97e7-453c0a1f2ae5\") " pod="openstack/dnsmasq-dns-596dcdd889-4frbq" Feb 28 13:40:43 crc kubenswrapper[4897]: I0228 13:40:43.196859 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/83003cdb-d775-4878-97e7-453c0a1f2ae5-dns-swift-storage-0\") pod \"dnsmasq-dns-596dcdd889-4frbq\" (UID: \"83003cdb-d775-4878-97e7-453c0a1f2ae5\") " pod="openstack/dnsmasq-dns-596dcdd889-4frbq" Feb 28 13:40:43 crc kubenswrapper[4897]: I0228 13:40:43.197144 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83003cdb-d775-4878-97e7-453c0a1f2ae5-config\") pod \"dnsmasq-dns-596dcdd889-4frbq\" (UID: \"83003cdb-d775-4878-97e7-453c0a1f2ae5\") " pod="openstack/dnsmasq-dns-596dcdd889-4frbq" Feb 28 13:40:43 crc kubenswrapper[4897]: I0228 13:40:43.197510 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/83003cdb-d775-4878-97e7-453c0a1f2ae5-dns-swift-storage-0\") pod \"dnsmasq-dns-596dcdd889-4frbq\" (UID: \"83003cdb-d775-4878-97e7-453c0a1f2ae5\") " pod="openstack/dnsmasq-dns-596dcdd889-4frbq" Feb 28 13:40:43 crc kubenswrapper[4897]: I0228 13:40:43.217295 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-849kb\" (UniqueName: \"kubernetes.io/projected/83003cdb-d775-4878-97e7-453c0a1f2ae5-kube-api-access-849kb\") pod \"dnsmasq-dns-596dcdd889-4frbq\" (UID: \"83003cdb-d775-4878-97e7-453c0a1f2ae5\") " pod="openstack/dnsmasq-dns-596dcdd889-4frbq" Feb 28 13:40:43 crc kubenswrapper[4897]: I0228 13:40:43.357427 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-596dcdd889-4frbq" Feb 28 13:40:43 crc kubenswrapper[4897]: W0228 13:40:43.824416 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83003cdb_d775_4878_97e7_453c0a1f2ae5.slice/crio-c573087abf674e24122467d3453d444f224898b935eab83d92f9fc5f3663573c WatchSource:0}: Error finding container c573087abf674e24122467d3453d444f224898b935eab83d92f9fc5f3663573c: Status 404 returned error can't find the container with id c573087abf674e24122467d3453d444f224898b935eab83d92f9fc5f3663573c Feb 28 13:40:43 crc kubenswrapper[4897]: I0228 13:40:43.831575 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-596dcdd889-4frbq"] Feb 28 13:40:44 crc kubenswrapper[4897]: I0228 13:40:44.764687 4897 generic.go:334] "Generic (PLEG): container finished" podID="83003cdb-d775-4878-97e7-453c0a1f2ae5" containerID="a63886b04d33ffa5a7d19c5ac97da96890acb97e7dfaee9e83ca38db369e8e9f" exitCode=0 Feb 28 13:40:44 crc kubenswrapper[4897]: I0228 13:40:44.764737 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-596dcdd889-4frbq" event={"ID":"83003cdb-d775-4878-97e7-453c0a1f2ae5","Type":"ContainerDied","Data":"a63886b04d33ffa5a7d19c5ac97da96890acb97e7dfaee9e83ca38db369e8e9f"} Feb 28 13:40:44 crc kubenswrapper[4897]: I0228 13:40:44.765047 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-596dcdd889-4frbq" event={"ID":"83003cdb-d775-4878-97e7-453c0a1f2ae5","Type":"ContainerStarted","Data":"c573087abf674e24122467d3453d444f224898b935eab83d92f9fc5f3663573c"} Feb 28 13:40:45 crc kubenswrapper[4897]: I0228 13:40:45.124445 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 13:40:45 crc kubenswrapper[4897]: I0228 13:40:45.125016 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="176f6ed0-e15f-4ee9-afd3-be29ff84d7dc" containerName="sg-core" containerID="cri-o://bcff9e3c54d1f6c11e4c711628ba58a457b1c66406e024344d716db4f6de6ae3" gracePeriod=30 Feb 28 13:40:45 crc kubenswrapper[4897]: I0228 13:40:45.125069 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="176f6ed0-e15f-4ee9-afd3-be29ff84d7dc" containerName="ceilometer-central-agent" containerID="cri-o://de25981277eb77894f93cb1ec9005d83919647fcf8a572bd0023559cacb16caf" gracePeriod=30 Feb 28 13:40:45 crc kubenswrapper[4897]: I0228 13:40:45.125336 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="176f6ed0-e15f-4ee9-afd3-be29ff84d7dc" containerName="ceilometer-notification-agent" containerID="cri-o://353b8cb45983b0f194a2ae89674e4e883a55391ff465fd277caa3ed9a6c1f511" gracePeriod=30 Feb 28 13:40:45 crc kubenswrapper[4897]: I0228 13:40:45.445649 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 28 13:40:45 crc kubenswrapper[4897]: I0228 13:40:45.779256 4897 generic.go:334] "Generic (PLEG): container finished" podID="176f6ed0-e15f-4ee9-afd3-be29ff84d7dc" containerID="bcff9e3c54d1f6c11e4c711628ba58a457b1c66406e024344d716db4f6de6ae3" exitCode=2 Feb 28 13:40:45 crc kubenswrapper[4897]: I0228 13:40:45.779285 4897 generic.go:334] "Generic (PLEG): container finished" podID="176f6ed0-e15f-4ee9-afd3-be29ff84d7dc" containerID="de25981277eb77894f93cb1ec9005d83919647fcf8a572bd0023559cacb16caf" exitCode=0 Feb 28 13:40:45 crc kubenswrapper[4897]: I0228 13:40:45.779366 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc","Type":"ContainerDied","Data":"bcff9e3c54d1f6c11e4c711628ba58a457b1c66406e024344d716db4f6de6ae3"} Feb 28 13:40:45 crc kubenswrapper[4897]: I0228 13:40:45.779437 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc","Type":"ContainerDied","Data":"de25981277eb77894f93cb1ec9005d83919647fcf8a572bd0023559cacb16caf"} Feb 28 13:40:45 crc kubenswrapper[4897]: I0228 13:40:45.782976 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-596dcdd889-4frbq" event={"ID":"83003cdb-d775-4878-97e7-453c0a1f2ae5","Type":"ContainerStarted","Data":"de6f36f06dbf82c91ba641a34e18c8dab09d82cbb4465f7b11b64ce8b3ff2c41"} Feb 28 13:40:45 crc kubenswrapper[4897]: I0228 13:40:45.783217 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="527f565b-a9d5-457d-9319-f6bdf58a0afa" containerName="nova-api-log" containerID="cri-o://46cc59699b427e281eee38b4ccec650a4fea3fcb4eb36b39bdc1da06d5e2d78b" gracePeriod=30 Feb 28 13:40:45 crc kubenswrapper[4897]: I0228 13:40:45.783266 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="527f565b-a9d5-457d-9319-f6bdf58a0afa" containerName="nova-api-api" containerID="cri-o://b1445d801029c649b351ec166cc086cdf4c22ca361a1b67962b7dfb4b6d956c1" gracePeriod=30 Feb 28 13:40:45 crc kubenswrapper[4897]: I0228 13:40:45.829063 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-596dcdd889-4frbq" podStartSLOduration=3.829032358 podStartE2EDuration="3.829032358s" podCreationTimestamp="2026-02-28 13:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:40:45.818526756 +0000 UTC m=+1460.060847453" watchObservedRunningTime="2026-02-28 13:40:45.829032358 +0000 UTC m=+1460.071353045" Feb 28 13:40:46 crc kubenswrapper[4897]: I0228 13:40:46.436489 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 28 13:40:46 crc kubenswrapper[4897]: I0228 13:40:46.804562 4897 generic.go:334] "Generic (PLEG): 
container finished" podID="527f565b-a9d5-457d-9319-f6bdf58a0afa" containerID="b1445d801029c649b351ec166cc086cdf4c22ca361a1b67962b7dfb4b6d956c1" exitCode=0 Feb 28 13:40:46 crc kubenswrapper[4897]: I0228 13:40:46.804591 4897 generic.go:334] "Generic (PLEG): container finished" podID="527f565b-a9d5-457d-9319-f6bdf58a0afa" containerID="46cc59699b427e281eee38b4ccec650a4fea3fcb4eb36b39bdc1da06d5e2d78b" exitCode=143 Feb 28 13:40:46 crc kubenswrapper[4897]: I0228 13:40:46.805111 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"527f565b-a9d5-457d-9319-f6bdf58a0afa","Type":"ContainerDied","Data":"b1445d801029c649b351ec166cc086cdf4c22ca361a1b67962b7dfb4b6d956c1"} Feb 28 13:40:46 crc kubenswrapper[4897]: I0228 13:40:46.805171 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"527f565b-a9d5-457d-9319-f6bdf58a0afa","Type":"ContainerDied","Data":"46cc59699b427e281eee38b4ccec650a4fea3fcb4eb36b39bdc1da06d5e2d78b"} Feb 28 13:40:46 crc kubenswrapper[4897]: I0228 13:40:46.805288 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-596dcdd889-4frbq" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.122843 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.177134 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/527f565b-a9d5-457d-9319-f6bdf58a0afa-config-data\") pod \"527f565b-a9d5-457d-9319-f6bdf58a0afa\" (UID: \"527f565b-a9d5-457d-9319-f6bdf58a0afa\") " Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.177285 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcdcm\" (UniqueName: \"kubernetes.io/projected/527f565b-a9d5-457d-9319-f6bdf58a0afa-kube-api-access-qcdcm\") pod \"527f565b-a9d5-457d-9319-f6bdf58a0afa\" (UID: \"527f565b-a9d5-457d-9319-f6bdf58a0afa\") " Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.177455 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/527f565b-a9d5-457d-9319-f6bdf58a0afa-logs\") pod \"527f565b-a9d5-457d-9319-f6bdf58a0afa\" (UID: \"527f565b-a9d5-457d-9319-f6bdf58a0afa\") " Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.177517 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/527f565b-a9d5-457d-9319-f6bdf58a0afa-combined-ca-bundle\") pod \"527f565b-a9d5-457d-9319-f6bdf58a0afa\" (UID: \"527f565b-a9d5-457d-9319-f6bdf58a0afa\") " Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.178289 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/527f565b-a9d5-457d-9319-f6bdf58a0afa-logs" (OuterVolumeSpecName: "logs") pod "527f565b-a9d5-457d-9319-f6bdf58a0afa" (UID: "527f565b-a9d5-457d-9319-f6bdf58a0afa"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.183411 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/527f565b-a9d5-457d-9319-f6bdf58a0afa-kube-api-access-qcdcm" (OuterVolumeSpecName: "kube-api-access-qcdcm") pod "527f565b-a9d5-457d-9319-f6bdf58a0afa" (UID: "527f565b-a9d5-457d-9319-f6bdf58a0afa"). InnerVolumeSpecName "kube-api-access-qcdcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.212451 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/527f565b-a9d5-457d-9319-f6bdf58a0afa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "527f565b-a9d5-457d-9319-f6bdf58a0afa" (UID: "527f565b-a9d5-457d-9319-f6bdf58a0afa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.233636 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/527f565b-a9d5-457d-9319-f6bdf58a0afa-config-data" (OuterVolumeSpecName: "config-data") pod "527f565b-a9d5-457d-9319-f6bdf58a0afa" (UID: "527f565b-a9d5-457d-9319-f6bdf58a0afa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.279583 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/527f565b-a9d5-457d-9319-f6bdf58a0afa-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.279616 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcdcm\" (UniqueName: \"kubernetes.io/projected/527f565b-a9d5-457d-9319-f6bdf58a0afa-kube-api-access-qcdcm\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.279627 4897 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/527f565b-a9d5-457d-9319-f6bdf58a0afa-logs\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.279638 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/527f565b-a9d5-457d-9319-f6bdf58a0afa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:47 crc kubenswrapper[4897]: E0228 13:40:47.465505 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.784412 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.851276 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"527f565b-a9d5-457d-9319-f6bdf58a0afa","Type":"ContainerDied","Data":"1638858c752f8ef3e2dde7a558f9088cb93173b89bd494169db438ed6fb3f1ec"} Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.851361 4897 scope.go:117] "RemoveContainer" containerID="b1445d801029c649b351ec166cc086cdf4c22ca361a1b67962b7dfb4b6d956c1" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.851370 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.859142 4897 generic.go:334] "Generic (PLEG): container finished" podID="176f6ed0-e15f-4ee9-afd3-be29ff84d7dc" containerID="353b8cb45983b0f194a2ae89674e4e883a55391ff465fd277caa3ed9a6c1f511" exitCode=0 Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.859369 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc","Type":"ContainerDied","Data":"353b8cb45983b0f194a2ae89674e4e883a55391ff465fd277caa3ed9a6c1f511"} Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.859491 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc","Type":"ContainerDied","Data":"3908b7e0be9188ffd1cc2567eb66f477eccbbf4aa6e386052ff27f8da5b1faa5"} Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.859398 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.891497 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qr7w\" (UniqueName: \"kubernetes.io/projected/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc-kube-api-access-9qr7w\") pod \"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc\" (UID: \"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc\") " Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.891759 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc-log-httpd\") pod \"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc\" (UID: \"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc\") " Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.891847 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc-scripts\") pod \"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc\" (UID: \"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc\") " Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.892427 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "176f6ed0-e15f-4ee9-afd3-be29ff84d7dc" (UID: "176f6ed0-e15f-4ee9-afd3-be29ff84d7dc"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.894691 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc-combined-ca-bundle\") pod \"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc\" (UID: \"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc\") " Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.894889 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc-sg-core-conf-yaml\") pod \"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc\" (UID: \"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc\") " Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.894936 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc-config-data\") pod \"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc\" (UID: \"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc\") " Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.895068 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc-run-httpd\") pod \"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc\" (UID: \"176f6ed0-e15f-4ee9-afd3-be29ff84d7dc\") " Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.896518 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "176f6ed0-e15f-4ee9-afd3-be29ff84d7dc" (UID: "176f6ed0-e15f-4ee9-afd3-be29ff84d7dc"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.896845 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc-scripts" (OuterVolumeSpecName: "scripts") pod "176f6ed0-e15f-4ee9-afd3-be29ff84d7dc" (UID: "176f6ed0-e15f-4ee9-afd3-be29ff84d7dc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.897089 4897 scope.go:117] "RemoveContainer" containerID="46cc59699b427e281eee38b4ccec650a4fea3fcb4eb36b39bdc1da06d5e2d78b" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.897180 4897 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.897205 4897 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.897218 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.897182 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc-kube-api-access-9qr7w" (OuterVolumeSpecName: "kube-api-access-9qr7w") pod "176f6ed0-e15f-4ee9-afd3-be29ff84d7dc" (UID: "176f6ed0-e15f-4ee9-afd3-be29ff84d7dc"). InnerVolumeSpecName "kube-api-access-9qr7w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.910766 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.924531 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.935363 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 28 13:40:47 crc kubenswrapper[4897]: E0228 13:40:47.935872 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="176f6ed0-e15f-4ee9-afd3-be29ff84d7dc" containerName="ceilometer-notification-agent" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.935893 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="176f6ed0-e15f-4ee9-afd3-be29ff84d7dc" containerName="ceilometer-notification-agent" Feb 28 13:40:47 crc kubenswrapper[4897]: E0228 13:40:47.935909 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="176f6ed0-e15f-4ee9-afd3-be29ff84d7dc" containerName="ceilometer-central-agent" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.935916 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="176f6ed0-e15f-4ee9-afd3-be29ff84d7dc" containerName="ceilometer-central-agent" Feb 28 13:40:47 crc kubenswrapper[4897]: E0228 13:40:47.935937 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="527f565b-a9d5-457d-9319-f6bdf58a0afa" containerName="nova-api-log" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.935944 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="527f565b-a9d5-457d-9319-f6bdf58a0afa" containerName="nova-api-log" Feb 28 13:40:47 crc kubenswrapper[4897]: E0228 13:40:47.935964 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="176f6ed0-e15f-4ee9-afd3-be29ff84d7dc" containerName="sg-core" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.935970 4897 
state_mem.go:107] "Deleted CPUSet assignment" podUID="176f6ed0-e15f-4ee9-afd3-be29ff84d7dc" containerName="sg-core" Feb 28 13:40:47 crc kubenswrapper[4897]: E0228 13:40:47.935978 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="527f565b-a9d5-457d-9319-f6bdf58a0afa" containerName="nova-api-api" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.935984 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="527f565b-a9d5-457d-9319-f6bdf58a0afa" containerName="nova-api-api" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.936161 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="527f565b-a9d5-457d-9319-f6bdf58a0afa" containerName="nova-api-log" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.936179 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="176f6ed0-e15f-4ee9-afd3-be29ff84d7dc" containerName="sg-core" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.936188 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="527f565b-a9d5-457d-9319-f6bdf58a0afa" containerName="nova-api-api" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.936199 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="176f6ed0-e15f-4ee9-afd3-be29ff84d7dc" containerName="ceilometer-central-agent" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.936210 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="176f6ed0-e15f-4ee9-afd3-be29ff84d7dc" containerName="ceilometer-notification-agent" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.937284 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.940871 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.941062 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.941170 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.943949 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.959520 4897 scope.go:117] "RemoveContainer" containerID="bcff9e3c54d1f6c11e4c711628ba58a457b1c66406e024344d716db4f6de6ae3" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.968628 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "176f6ed0-e15f-4ee9-afd3-be29ff84d7dc" (UID: "176f6ed0-e15f-4ee9-afd3-be29ff84d7dc"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.979171 4897 scope.go:117] "RemoveContainer" containerID="353b8cb45983b0f194a2ae89674e4e883a55391ff465fd277caa3ed9a6c1f511" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.979192 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc-config-data" (OuterVolumeSpecName: "config-data") pod "176f6ed0-e15f-4ee9-afd3-be29ff84d7dc" (UID: "176f6ed0-e15f-4ee9-afd3-be29ff84d7dc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.987342 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "176f6ed0-e15f-4ee9-afd3-be29ff84d7dc" (UID: "176f6ed0-e15f-4ee9-afd3-be29ff84d7dc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.997729 4897 scope.go:117] "RemoveContainer" containerID="de25981277eb77894f93cb1ec9005d83919647fcf8a572bd0023559cacb16caf" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.998612 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779-config-data\") pod \"nova-api-0\" (UID: \"d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779\") " pod="openstack/nova-api-0" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.998663 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779-logs\") pod \"nova-api-0\" (UID: \"d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779\") " pod="openstack/nova-api-0" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.998693 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c926g\" (UniqueName: \"kubernetes.io/projected/d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779-kube-api-access-c926g\") pod \"nova-api-0\" (UID: \"d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779\") " pod="openstack/nova-api-0" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.998732 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779\") " pod="openstack/nova-api-0" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.998748 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779\") " pod="openstack/nova-api-0" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.998763 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779-public-tls-certs\") pod \"nova-api-0\" (UID: \"d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779\") " pod="openstack/nova-api-0" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.998845 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qr7w\" (UniqueName: \"kubernetes.io/projected/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc-kube-api-access-9qr7w\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.998860 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.998872 4897 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:47 crc kubenswrapper[4897]: I0228 13:40:47.998884 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc-config-data\") on node \"crc\" 
DevicePath \"\"" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.089694 4897 scope.go:117] "RemoveContainer" containerID="bcff9e3c54d1f6c11e4c711628ba58a457b1c66406e024344d716db4f6de6ae3" Feb 28 13:40:48 crc kubenswrapper[4897]: E0228 13:40:48.090085 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcff9e3c54d1f6c11e4c711628ba58a457b1c66406e024344d716db4f6de6ae3\": container with ID starting with bcff9e3c54d1f6c11e4c711628ba58a457b1c66406e024344d716db4f6de6ae3 not found: ID does not exist" containerID="bcff9e3c54d1f6c11e4c711628ba58a457b1c66406e024344d716db4f6de6ae3" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.090120 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcff9e3c54d1f6c11e4c711628ba58a457b1c66406e024344d716db4f6de6ae3"} err="failed to get container status \"bcff9e3c54d1f6c11e4c711628ba58a457b1c66406e024344d716db4f6de6ae3\": rpc error: code = NotFound desc = could not find container \"bcff9e3c54d1f6c11e4c711628ba58a457b1c66406e024344d716db4f6de6ae3\": container with ID starting with bcff9e3c54d1f6c11e4c711628ba58a457b1c66406e024344d716db4f6de6ae3 not found: ID does not exist" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.090141 4897 scope.go:117] "RemoveContainer" containerID="353b8cb45983b0f194a2ae89674e4e883a55391ff465fd277caa3ed9a6c1f511" Feb 28 13:40:48 crc kubenswrapper[4897]: E0228 13:40:48.090412 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"353b8cb45983b0f194a2ae89674e4e883a55391ff465fd277caa3ed9a6c1f511\": container with ID starting with 353b8cb45983b0f194a2ae89674e4e883a55391ff465fd277caa3ed9a6c1f511 not found: ID does not exist" containerID="353b8cb45983b0f194a2ae89674e4e883a55391ff465fd277caa3ed9a6c1f511" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.090450 4897 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"353b8cb45983b0f194a2ae89674e4e883a55391ff465fd277caa3ed9a6c1f511"} err="failed to get container status \"353b8cb45983b0f194a2ae89674e4e883a55391ff465fd277caa3ed9a6c1f511\": rpc error: code = NotFound desc = could not find container \"353b8cb45983b0f194a2ae89674e4e883a55391ff465fd277caa3ed9a6c1f511\": container with ID starting with 353b8cb45983b0f194a2ae89674e4e883a55391ff465fd277caa3ed9a6c1f511 not found: ID does not exist" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.090529 4897 scope.go:117] "RemoveContainer" containerID="de25981277eb77894f93cb1ec9005d83919647fcf8a572bd0023559cacb16caf" Feb 28 13:40:48 crc kubenswrapper[4897]: E0228 13:40:48.090836 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de25981277eb77894f93cb1ec9005d83919647fcf8a572bd0023559cacb16caf\": container with ID starting with de25981277eb77894f93cb1ec9005d83919647fcf8a572bd0023559cacb16caf not found: ID does not exist" containerID="de25981277eb77894f93cb1ec9005d83919647fcf8a572bd0023559cacb16caf" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.090885 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de25981277eb77894f93cb1ec9005d83919647fcf8a572bd0023559cacb16caf"} err="failed to get container status \"de25981277eb77894f93cb1ec9005d83919647fcf8a572bd0023559cacb16caf\": rpc error: code = NotFound desc = could not find container \"de25981277eb77894f93cb1ec9005d83919647fcf8a572bd0023559cacb16caf\": container with ID starting with de25981277eb77894f93cb1ec9005d83919647fcf8a572bd0023559cacb16caf not found: ID does not exist" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.100702 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779-config-data\") pod \"nova-api-0\" 
(UID: \"d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779\") " pod="openstack/nova-api-0" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.100760 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779-logs\") pod \"nova-api-0\" (UID: \"d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779\") " pod="openstack/nova-api-0" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.100796 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c926g\" (UniqueName: \"kubernetes.io/projected/d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779-kube-api-access-c926g\") pod \"nova-api-0\" (UID: \"d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779\") " pod="openstack/nova-api-0" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.100828 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779\") " pod="openstack/nova-api-0" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.100846 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779\") " pod="openstack/nova-api-0" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.100861 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779-public-tls-certs\") pod \"nova-api-0\" (UID: \"d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779\") " pod="openstack/nova-api-0" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.103004 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779-logs\") pod \"nova-api-0\" (UID: \"d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779\") " pod="openstack/nova-api-0" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.104820 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779-public-tls-certs\") pod \"nova-api-0\" (UID: \"d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779\") " pod="openstack/nova-api-0" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.106854 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779\") " pod="openstack/nova-api-0" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.107196 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779-config-data\") pod \"nova-api-0\" (UID: \"d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779\") " pod="openstack/nova-api-0" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.117163 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779\") " pod="openstack/nova-api-0" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.129781 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c926g\" (UniqueName: \"kubernetes.io/projected/d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779-kube-api-access-c926g\") pod \"nova-api-0\" (UID: \"d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779\") " pod="openstack/nova-api-0" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.271363 4897 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.283268 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.302551 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.313845 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.316761 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.318674 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.320346 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.327496 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.407424 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/746916d9-ca42-480b-9aa7-7e1fe9803900-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"746916d9-ca42-480b-9aa7-7e1fe9803900\") " pod="openstack/ceilometer-0" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.407521 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/746916d9-ca42-480b-9aa7-7e1fe9803900-config-data\") pod \"ceilometer-0\" (UID: \"746916d9-ca42-480b-9aa7-7e1fe9803900\") " pod="openstack/ceilometer-0" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.407595 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/746916d9-ca42-480b-9aa7-7e1fe9803900-run-httpd\") pod \"ceilometer-0\" (UID: \"746916d9-ca42-480b-9aa7-7e1fe9803900\") " pod="openstack/ceilometer-0" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.407645 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hjth\" (UniqueName: \"kubernetes.io/projected/746916d9-ca42-480b-9aa7-7e1fe9803900-kube-api-access-7hjth\") pod \"ceilometer-0\" (UID: \"746916d9-ca42-480b-9aa7-7e1fe9803900\") " pod="openstack/ceilometer-0" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.407698 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/746916d9-ca42-480b-9aa7-7e1fe9803900-scripts\") pod \"ceilometer-0\" (UID: \"746916d9-ca42-480b-9aa7-7e1fe9803900\") " pod="openstack/ceilometer-0" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.407744 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/746916d9-ca42-480b-9aa7-7e1fe9803900-log-httpd\") pod \"ceilometer-0\" (UID: \"746916d9-ca42-480b-9aa7-7e1fe9803900\") " pod="openstack/ceilometer-0" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.407768 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/746916d9-ca42-480b-9aa7-7e1fe9803900-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"746916d9-ca42-480b-9aa7-7e1fe9803900\") " pod="openstack/ceilometer-0" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.471029 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="176f6ed0-e15f-4ee9-afd3-be29ff84d7dc" 
path="/var/lib/kubelet/pods/176f6ed0-e15f-4ee9-afd3-be29ff84d7dc/volumes" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.472082 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="527f565b-a9d5-457d-9319-f6bdf58a0afa" path="/var/lib/kubelet/pods/527f565b-a9d5-457d-9319-f6bdf58a0afa/volumes" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.509656 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/746916d9-ca42-480b-9aa7-7e1fe9803900-run-httpd\") pod \"ceilometer-0\" (UID: \"746916d9-ca42-480b-9aa7-7e1fe9803900\") " pod="openstack/ceilometer-0" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.509724 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hjth\" (UniqueName: \"kubernetes.io/projected/746916d9-ca42-480b-9aa7-7e1fe9803900-kube-api-access-7hjth\") pod \"ceilometer-0\" (UID: \"746916d9-ca42-480b-9aa7-7e1fe9803900\") " pod="openstack/ceilometer-0" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.509766 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/746916d9-ca42-480b-9aa7-7e1fe9803900-scripts\") pod \"ceilometer-0\" (UID: \"746916d9-ca42-480b-9aa7-7e1fe9803900\") " pod="openstack/ceilometer-0" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.509801 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/746916d9-ca42-480b-9aa7-7e1fe9803900-log-httpd\") pod \"ceilometer-0\" (UID: \"746916d9-ca42-480b-9aa7-7e1fe9803900\") " pod="openstack/ceilometer-0" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.509817 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/746916d9-ca42-480b-9aa7-7e1fe9803900-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"746916d9-ca42-480b-9aa7-7e1fe9803900\") " pod="openstack/ceilometer-0" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.510291 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/746916d9-ca42-480b-9aa7-7e1fe9803900-run-httpd\") pod \"ceilometer-0\" (UID: \"746916d9-ca42-480b-9aa7-7e1fe9803900\") " pod="openstack/ceilometer-0" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.510336 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/746916d9-ca42-480b-9aa7-7e1fe9803900-log-httpd\") pod \"ceilometer-0\" (UID: \"746916d9-ca42-480b-9aa7-7e1fe9803900\") " pod="openstack/ceilometer-0" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.510821 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/746916d9-ca42-480b-9aa7-7e1fe9803900-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"746916d9-ca42-480b-9aa7-7e1fe9803900\") " pod="openstack/ceilometer-0" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.510908 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/746916d9-ca42-480b-9aa7-7e1fe9803900-config-data\") pod \"ceilometer-0\" (UID: \"746916d9-ca42-480b-9aa7-7e1fe9803900\") " pod="openstack/ceilometer-0" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.521246 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/746916d9-ca42-480b-9aa7-7e1fe9803900-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"746916d9-ca42-480b-9aa7-7e1fe9803900\") " pod="openstack/ceilometer-0" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.521929 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/746916d9-ca42-480b-9aa7-7e1fe9803900-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"746916d9-ca42-480b-9aa7-7e1fe9803900\") " pod="openstack/ceilometer-0" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.522426 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/746916d9-ca42-480b-9aa7-7e1fe9803900-scripts\") pod \"ceilometer-0\" (UID: \"746916d9-ca42-480b-9aa7-7e1fe9803900\") " pod="openstack/ceilometer-0" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.527191 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/746916d9-ca42-480b-9aa7-7e1fe9803900-config-data\") pod \"ceilometer-0\" (UID: \"746916d9-ca42-480b-9aa7-7e1fe9803900\") " pod="openstack/ceilometer-0" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.534263 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hjth\" (UniqueName: \"kubernetes.io/projected/746916d9-ca42-480b-9aa7-7e1fe9803900-kube-api-access-7hjth\") pod \"ceilometer-0\" (UID: \"746916d9-ca42-480b-9aa7-7e1fe9803900\") " pod="openstack/ceilometer-0" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.761035 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.821640 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 28 13:40:48 crc kubenswrapper[4897]: W0228 13:40:48.827237 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd00e0fcd_cd38_4c8e_a7d8_ef557c4f9779.slice/crio-e3f0177d48ee07cea19b8c6abc36c51cf36729a410e12ef5a13366316ac7da26 WatchSource:0}: Error finding container e3f0177d48ee07cea19b8c6abc36c51cf36729a410e12ef5a13366316ac7da26: Status 404 returned error can't find the container with id e3f0177d48ee07cea19b8c6abc36c51cf36729a410e12ef5a13366316ac7da26 Feb 28 13:40:48 crc kubenswrapper[4897]: I0228 13:40:48.889280 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779","Type":"ContainerStarted","Data":"e3f0177d48ee07cea19b8c6abc36c51cf36729a410e12ef5a13366316ac7da26"} Feb 28 13:40:49 crc kubenswrapper[4897]: I0228 13:40:49.251748 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 13:40:49 crc kubenswrapper[4897]: I0228 13:40:49.908553 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779","Type":"ContainerStarted","Data":"1d6f90f36d900513b04812eb920e2ac3de9a83865b32e49703ce6e999d5698eb"} Feb 28 13:40:49 crc kubenswrapper[4897]: I0228 13:40:49.908830 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779","Type":"ContainerStarted","Data":"8f5aa678cdbdf686b949525fe8bcaf55172d1ca056619c07804e2b072c033244"} Feb 28 13:40:49 crc kubenswrapper[4897]: I0228 13:40:49.914256 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"746916d9-ca42-480b-9aa7-7e1fe9803900","Type":"ContainerStarted","Data":"c5736d48a4d753ac43a8b6285f73debaa020c583f9298da242515985f99b97e5"} Feb 28 13:40:49 crc kubenswrapper[4897]: I0228 13:40:49.914290 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"746916d9-ca42-480b-9aa7-7e1fe9803900","Type":"ContainerStarted","Data":"5aa7fdfaf878368037da6b5738e044a1a5f0f16620740977740f6d3e7aebc823"} Feb 28 13:40:49 crc kubenswrapper[4897]: I0228 13:40:49.914299 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"746916d9-ca42-480b-9aa7-7e1fe9803900","Type":"ContainerStarted","Data":"08843746f6de77de835512e0a0ff0caea3fa279a1e86745f2f2d71c39346fc01"} Feb 28 13:40:49 crc kubenswrapper[4897]: I0228 13:40:49.932898 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.932879006 podStartE2EDuration="2.932879006s" podCreationTimestamp="2026-02-28 13:40:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:40:49.925099302 +0000 UTC m=+1464.167419959" watchObservedRunningTime="2026-02-28 13:40:49.932879006 +0000 UTC m=+1464.175199663" Feb 28 13:40:50 crc kubenswrapper[4897]: I0228 13:40:50.927985 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"746916d9-ca42-480b-9aa7-7e1fe9803900","Type":"ContainerStarted","Data":"2aed513f7ea441e8f48b2416f9d0e69f1110a2d3599ac6e592e7fe096c58d3f0"} Feb 28 13:40:51 crc kubenswrapper[4897]: I0228 13:40:51.437007 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 28 13:40:51 crc kubenswrapper[4897]: I0228 13:40:51.465570 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 28 13:40:51 crc kubenswrapper[4897]: 
I0228 13:40:51.961400 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 28 13:40:52 crc kubenswrapper[4897]: I0228 13:40:52.123084 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-g6qqk"] Feb 28 13:40:52 crc kubenswrapper[4897]: I0228 13:40:52.124819 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-g6qqk" Feb 28 13:40:52 crc kubenswrapper[4897]: I0228 13:40:52.128631 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 28 13:40:52 crc kubenswrapper[4897]: I0228 13:40:52.129111 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 28 13:40:52 crc kubenswrapper[4897]: I0228 13:40:52.139029 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-g6qqk"] Feb 28 13:40:52 crc kubenswrapper[4897]: I0228 13:40:52.230563 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ccafbfb-14c3-4f61-8fb4-adf29f725d61-config-data\") pod \"nova-cell1-cell-mapping-g6qqk\" (UID: \"0ccafbfb-14c3-4f61-8fb4-adf29f725d61\") " pod="openstack/nova-cell1-cell-mapping-g6qqk" Feb 28 13:40:52 crc kubenswrapper[4897]: I0228 13:40:52.230910 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ccafbfb-14c3-4f61-8fb4-adf29f725d61-scripts\") pod \"nova-cell1-cell-mapping-g6qqk\" (UID: \"0ccafbfb-14c3-4f61-8fb4-adf29f725d61\") " pod="openstack/nova-cell1-cell-mapping-g6qqk" Feb 28 13:40:52 crc kubenswrapper[4897]: I0228 13:40:52.231060 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scdpv\" (UniqueName: 
\"kubernetes.io/projected/0ccafbfb-14c3-4f61-8fb4-adf29f725d61-kube-api-access-scdpv\") pod \"nova-cell1-cell-mapping-g6qqk\" (UID: \"0ccafbfb-14c3-4f61-8fb4-adf29f725d61\") " pod="openstack/nova-cell1-cell-mapping-g6qqk" Feb 28 13:40:52 crc kubenswrapper[4897]: I0228 13:40:52.231404 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ccafbfb-14c3-4f61-8fb4-adf29f725d61-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-g6qqk\" (UID: \"0ccafbfb-14c3-4f61-8fb4-adf29f725d61\") " pod="openstack/nova-cell1-cell-mapping-g6qqk" Feb 28 13:40:52 crc kubenswrapper[4897]: I0228 13:40:52.333006 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ccafbfb-14c3-4f61-8fb4-adf29f725d61-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-g6qqk\" (UID: \"0ccafbfb-14c3-4f61-8fb4-adf29f725d61\") " pod="openstack/nova-cell1-cell-mapping-g6qqk" Feb 28 13:40:52 crc kubenswrapper[4897]: I0228 13:40:52.333277 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ccafbfb-14c3-4f61-8fb4-adf29f725d61-config-data\") pod \"nova-cell1-cell-mapping-g6qqk\" (UID: \"0ccafbfb-14c3-4f61-8fb4-adf29f725d61\") " pod="openstack/nova-cell1-cell-mapping-g6qqk" Feb 28 13:40:52 crc kubenswrapper[4897]: I0228 13:40:52.333397 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ccafbfb-14c3-4f61-8fb4-adf29f725d61-scripts\") pod \"nova-cell1-cell-mapping-g6qqk\" (UID: \"0ccafbfb-14c3-4f61-8fb4-adf29f725d61\") " pod="openstack/nova-cell1-cell-mapping-g6qqk" Feb 28 13:40:52 crc kubenswrapper[4897]: I0228 13:40:52.333480 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scdpv\" (UniqueName: 
\"kubernetes.io/projected/0ccafbfb-14c3-4f61-8fb4-adf29f725d61-kube-api-access-scdpv\") pod \"nova-cell1-cell-mapping-g6qqk\" (UID: \"0ccafbfb-14c3-4f61-8fb4-adf29f725d61\") " pod="openstack/nova-cell1-cell-mapping-g6qqk" Feb 28 13:40:52 crc kubenswrapper[4897]: I0228 13:40:52.340091 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ccafbfb-14c3-4f61-8fb4-adf29f725d61-scripts\") pod \"nova-cell1-cell-mapping-g6qqk\" (UID: \"0ccafbfb-14c3-4f61-8fb4-adf29f725d61\") " pod="openstack/nova-cell1-cell-mapping-g6qqk" Feb 28 13:40:52 crc kubenswrapper[4897]: I0228 13:40:52.340505 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ccafbfb-14c3-4f61-8fb4-adf29f725d61-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-g6qqk\" (UID: \"0ccafbfb-14c3-4f61-8fb4-adf29f725d61\") " pod="openstack/nova-cell1-cell-mapping-g6qqk" Feb 28 13:40:52 crc kubenswrapper[4897]: I0228 13:40:52.350424 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ccafbfb-14c3-4f61-8fb4-adf29f725d61-config-data\") pod \"nova-cell1-cell-mapping-g6qqk\" (UID: \"0ccafbfb-14c3-4f61-8fb4-adf29f725d61\") " pod="openstack/nova-cell1-cell-mapping-g6qqk" Feb 28 13:40:52 crc kubenswrapper[4897]: I0228 13:40:52.353710 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scdpv\" (UniqueName: \"kubernetes.io/projected/0ccafbfb-14c3-4f61-8fb4-adf29f725d61-kube-api-access-scdpv\") pod \"nova-cell1-cell-mapping-g6qqk\" (UID: \"0ccafbfb-14c3-4f61-8fb4-adf29f725d61\") " pod="openstack/nova-cell1-cell-mapping-g6qqk" Feb 28 13:40:52 crc kubenswrapper[4897]: E0228 13:40:52.384249 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)" image="registry.redhat.io/ubi9/httpd-24:latest" Feb 28 13:40:52 crc kubenswrapper[4897]: E0228 13:40:52.384948 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7hjth,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(746916d9-ca42-480b-9aa7-7e1fe9803900): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 13:40:52 crc kubenswrapper[4897]: E0228 13:40:52.386128 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)\"" pod="openstack/ceilometer-0" 
podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" Feb 28 13:40:52 crc kubenswrapper[4897]: I0228 13:40:52.448553 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-g6qqk" Feb 28 13:40:52 crc kubenswrapper[4897]: E0228 13:40:52.463226 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:40:52 crc kubenswrapper[4897]: E0228 13:40:52.945916 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" Feb 28 13:40:52 crc kubenswrapper[4897]: I0228 13:40:52.969537 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-g6qqk"] Feb 28 13:40:53 crc kubenswrapper[4897]: I0228 13:40:53.360418 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-596dcdd889-4frbq" Feb 28 13:40:53 crc kubenswrapper[4897]: I0228 13:40:53.435660 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cbd9f89f7-sx96f"] Feb 28 13:40:53 crc kubenswrapper[4897]: I0228 13:40:53.435940 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5cbd9f89f7-sx96f" podUID="e061011e-e58b-458e-aba8-8e0ace759117" containerName="dnsmasq-dns" containerID="cri-o://c3632e4a3c7ef8eeab10572c630804218648e3d70abb15feafefdbeecc990345" gracePeriod=10 Feb 28 13:40:53 crc kubenswrapper[4897]: I0228 13:40:53.952434 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-cell-mapping-g6qqk" event={"ID":"0ccafbfb-14c3-4f61-8fb4-adf29f725d61","Type":"ContainerStarted","Data":"832f64825502bec11dc1b6cff6d5ee2817b062b1ed1a6c29b27578868a79bca7"} Feb 28 13:40:53 crc kubenswrapper[4897]: I0228 13:40:53.952947 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-g6qqk" event={"ID":"0ccafbfb-14c3-4f61-8fb4-adf29f725d61","Type":"ContainerStarted","Data":"9af4dfae22dffe6ce6d0f562d40449323bc649f146caac8daf98ec4cb2baf92d"} Feb 28 13:40:53 crc kubenswrapper[4897]: I0228 13:40:53.954044 4897 generic.go:334] "Generic (PLEG): container finished" podID="e061011e-e58b-458e-aba8-8e0ace759117" containerID="c3632e4a3c7ef8eeab10572c630804218648e3d70abb15feafefdbeecc990345" exitCode=0 Feb 28 13:40:53 crc kubenswrapper[4897]: I0228 13:40:53.954132 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cbd9f89f7-sx96f" event={"ID":"e061011e-e58b-458e-aba8-8e0ace759117","Type":"ContainerDied","Data":"c3632e4a3c7ef8eeab10572c630804218648e3d70abb15feafefdbeecc990345"} Feb 28 13:40:53 crc kubenswrapper[4897]: I0228 13:40:53.954181 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cbd9f89f7-sx96f" event={"ID":"e061011e-e58b-458e-aba8-8e0ace759117","Type":"ContainerDied","Data":"15ddc7ad9e5ff398dc46f297097503c571a504c286b229ab1286224115c31d32"} Feb 28 13:40:53 crc kubenswrapper[4897]: I0228 13:40:53.954195 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15ddc7ad9e5ff398dc46f297097503c571a504c286b229ab1286224115c31d32" Feb 28 13:40:53 crc kubenswrapper[4897]: I0228 13:40:53.957211 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cbd9f89f7-sx96f" Feb 28 13:40:53 crc kubenswrapper[4897]: I0228 13:40:53.975721 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-g6qqk" podStartSLOduration=1.975702348 podStartE2EDuration="1.975702348s" podCreationTimestamp="2026-02-28 13:40:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:40:53.971852197 +0000 UTC m=+1468.214172874" watchObservedRunningTime="2026-02-28 13:40:53.975702348 +0000 UTC m=+1468.218023005" Feb 28 13:40:54 crc kubenswrapper[4897]: I0228 13:40:54.073429 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsv68\" (UniqueName: \"kubernetes.io/projected/e061011e-e58b-458e-aba8-8e0ace759117-kube-api-access-fsv68\") pod \"e061011e-e58b-458e-aba8-8e0ace759117\" (UID: \"e061011e-e58b-458e-aba8-8e0ace759117\") " Feb 28 13:40:54 crc kubenswrapper[4897]: I0228 13:40:54.073473 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e061011e-e58b-458e-aba8-8e0ace759117-ovsdbserver-sb\") pod \"e061011e-e58b-458e-aba8-8e0ace759117\" (UID: \"e061011e-e58b-458e-aba8-8e0ace759117\") " Feb 28 13:40:54 crc kubenswrapper[4897]: I0228 13:40:54.073633 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e061011e-e58b-458e-aba8-8e0ace759117-dns-svc\") pod \"e061011e-e58b-458e-aba8-8e0ace759117\" (UID: \"e061011e-e58b-458e-aba8-8e0ace759117\") " Feb 28 13:40:54 crc kubenswrapper[4897]: I0228 13:40:54.073674 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e061011e-e58b-458e-aba8-8e0ace759117-dns-swift-storage-0\") pod 
\"e061011e-e58b-458e-aba8-8e0ace759117\" (UID: \"e061011e-e58b-458e-aba8-8e0ace759117\") " Feb 28 13:40:54 crc kubenswrapper[4897]: I0228 13:40:54.073762 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e061011e-e58b-458e-aba8-8e0ace759117-ovsdbserver-nb\") pod \"e061011e-e58b-458e-aba8-8e0ace759117\" (UID: \"e061011e-e58b-458e-aba8-8e0ace759117\") " Feb 28 13:40:54 crc kubenswrapper[4897]: I0228 13:40:54.073815 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e061011e-e58b-458e-aba8-8e0ace759117-config\") pod \"e061011e-e58b-458e-aba8-8e0ace759117\" (UID: \"e061011e-e58b-458e-aba8-8e0ace759117\") " Feb 28 13:40:54 crc kubenswrapper[4897]: I0228 13:40:54.083540 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e061011e-e58b-458e-aba8-8e0ace759117-kube-api-access-fsv68" (OuterVolumeSpecName: "kube-api-access-fsv68") pod "e061011e-e58b-458e-aba8-8e0ace759117" (UID: "e061011e-e58b-458e-aba8-8e0ace759117"). InnerVolumeSpecName "kube-api-access-fsv68". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:40:54 crc kubenswrapper[4897]: I0228 13:40:54.132190 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e061011e-e58b-458e-aba8-8e0ace759117-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e061011e-e58b-458e-aba8-8e0ace759117" (UID: "e061011e-e58b-458e-aba8-8e0ace759117"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:40:54 crc kubenswrapper[4897]: I0228 13:40:54.136177 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e061011e-e58b-458e-aba8-8e0ace759117-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e061011e-e58b-458e-aba8-8e0ace759117" (UID: "e061011e-e58b-458e-aba8-8e0ace759117"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:40:54 crc kubenswrapper[4897]: I0228 13:40:54.137431 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e061011e-e58b-458e-aba8-8e0ace759117-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e061011e-e58b-458e-aba8-8e0ace759117" (UID: "e061011e-e58b-458e-aba8-8e0ace759117"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:40:54 crc kubenswrapper[4897]: I0228 13:40:54.140715 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e061011e-e58b-458e-aba8-8e0ace759117-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e061011e-e58b-458e-aba8-8e0ace759117" (UID: "e061011e-e58b-458e-aba8-8e0ace759117"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:40:54 crc kubenswrapper[4897]: I0228 13:40:54.144402 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e061011e-e58b-458e-aba8-8e0ace759117-config" (OuterVolumeSpecName: "config") pod "e061011e-e58b-458e-aba8-8e0ace759117" (UID: "e061011e-e58b-458e-aba8-8e0ace759117"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:40:54 crc kubenswrapper[4897]: I0228 13:40:54.176822 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e061011e-e58b-458e-aba8-8e0ace759117-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:54 crc kubenswrapper[4897]: I0228 13:40:54.176857 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsv68\" (UniqueName: \"kubernetes.io/projected/e061011e-e58b-458e-aba8-8e0ace759117-kube-api-access-fsv68\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:54 crc kubenswrapper[4897]: I0228 13:40:54.176869 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e061011e-e58b-458e-aba8-8e0ace759117-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:54 crc kubenswrapper[4897]: I0228 13:40:54.176878 4897 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e061011e-e58b-458e-aba8-8e0ace759117-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:54 crc kubenswrapper[4897]: I0228 13:40:54.176887 4897 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e061011e-e58b-458e-aba8-8e0ace759117-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:54 crc kubenswrapper[4897]: I0228 13:40:54.176895 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e061011e-e58b-458e-aba8-8e0ace759117-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 13:40:54 crc kubenswrapper[4897]: E0228 13:40:54.458418 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538100-v4j6s" 
podUID="b6642318-7bfd-49f2-86e3-0fe4a7ec2709" Feb 28 13:40:54 crc kubenswrapper[4897]: I0228 13:40:54.965036 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cbd9f89f7-sx96f" Feb 28 13:40:54 crc kubenswrapper[4897]: I0228 13:40:54.996273 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cbd9f89f7-sx96f"] Feb 28 13:40:55 crc kubenswrapper[4897]: I0228 13:40:55.010907 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5cbd9f89f7-sx96f"] Feb 28 13:40:56 crc kubenswrapper[4897]: I0228 13:40:56.471970 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e061011e-e58b-458e-aba8-8e0ace759117" path="/var/lib/kubelet/pods/e061011e-e58b-458e-aba8-8e0ace759117/volumes" Feb 28 13:40:58 crc kubenswrapper[4897]: I0228 13:40:58.272171 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 28 13:40:58 crc kubenswrapper[4897]: I0228 13:40:58.272525 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 28 13:40:59 crc kubenswrapper[4897]: I0228 13:40:59.020458 4897 generic.go:334] "Generic (PLEG): container finished" podID="0ccafbfb-14c3-4f61-8fb4-adf29f725d61" containerID="832f64825502bec11dc1b6cff6d5ee2817b062b1ed1a6c29b27578868a79bca7" exitCode=0 Feb 28 13:40:59 crc kubenswrapper[4897]: I0228 13:40:59.020543 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-g6qqk" event={"ID":"0ccafbfb-14c3-4f61-8fb4-adf29f725d61","Type":"ContainerDied","Data":"832f64825502bec11dc1b6cff6d5ee2817b062b1ed1a6c29b27578868a79bca7"} Feb 28 13:40:59 crc kubenswrapper[4897]: I0228 13:40:59.284471 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779" containerName="nova-api-api" probeResult="failure" output="Get 
\"https://10.217.0.222:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 28 13:40:59 crc kubenswrapper[4897]: I0228 13:40:59.284493 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.222:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 28 13:41:00 crc kubenswrapper[4897]: I0228 13:41:00.455461 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-g6qqk" Feb 28 13:41:00 crc kubenswrapper[4897]: E0228 13:41:00.461996 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" Feb 28 13:41:00 crc kubenswrapper[4897]: I0228 13:41:00.526344 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scdpv\" (UniqueName: \"kubernetes.io/projected/0ccafbfb-14c3-4f61-8fb4-adf29f725d61-kube-api-access-scdpv\") pod \"0ccafbfb-14c3-4f61-8fb4-adf29f725d61\" (UID: \"0ccafbfb-14c3-4f61-8fb4-adf29f725d61\") " Feb 28 13:41:00 crc kubenswrapper[4897]: I0228 13:41:00.526405 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ccafbfb-14c3-4f61-8fb4-adf29f725d61-scripts\") pod \"0ccafbfb-14c3-4f61-8fb4-adf29f725d61\" (UID: \"0ccafbfb-14c3-4f61-8fb4-adf29f725d61\") " Feb 28 13:41:00 crc kubenswrapper[4897]: I0228 13:41:00.526485 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/0ccafbfb-14c3-4f61-8fb4-adf29f725d61-combined-ca-bundle\") pod \"0ccafbfb-14c3-4f61-8fb4-adf29f725d61\" (UID: \"0ccafbfb-14c3-4f61-8fb4-adf29f725d61\") " Feb 28 13:41:00 crc kubenswrapper[4897]: I0228 13:41:00.526580 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ccafbfb-14c3-4f61-8fb4-adf29f725d61-config-data\") pod \"0ccafbfb-14c3-4f61-8fb4-adf29f725d61\" (UID: \"0ccafbfb-14c3-4f61-8fb4-adf29f725d61\") " Feb 28 13:41:00 crc kubenswrapper[4897]: I0228 13:41:00.546458 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ccafbfb-14c3-4f61-8fb4-adf29f725d61-kube-api-access-scdpv" (OuterVolumeSpecName: "kube-api-access-scdpv") pod "0ccafbfb-14c3-4f61-8fb4-adf29f725d61" (UID: "0ccafbfb-14c3-4f61-8fb4-adf29f725d61"). InnerVolumeSpecName "kube-api-access-scdpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:41:00 crc kubenswrapper[4897]: I0228 13:41:00.549766 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ccafbfb-14c3-4f61-8fb4-adf29f725d61-scripts" (OuterVolumeSpecName: "scripts") pod "0ccafbfb-14c3-4f61-8fb4-adf29f725d61" (UID: "0ccafbfb-14c3-4f61-8fb4-adf29f725d61"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:41:00 crc kubenswrapper[4897]: I0228 13:41:00.567497 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ccafbfb-14c3-4f61-8fb4-adf29f725d61-config-data" (OuterVolumeSpecName: "config-data") pod "0ccafbfb-14c3-4f61-8fb4-adf29f725d61" (UID: "0ccafbfb-14c3-4f61-8fb4-adf29f725d61"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:41:00 crc kubenswrapper[4897]: I0228 13:41:00.570475 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ccafbfb-14c3-4f61-8fb4-adf29f725d61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ccafbfb-14c3-4f61-8fb4-adf29f725d61" (UID: "0ccafbfb-14c3-4f61-8fb4-adf29f725d61"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:41:00 crc kubenswrapper[4897]: I0228 13:41:00.628426 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ccafbfb-14c3-4f61-8fb4-adf29f725d61-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:41:00 crc kubenswrapper[4897]: I0228 13:41:00.628472 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scdpv\" (UniqueName: \"kubernetes.io/projected/0ccafbfb-14c3-4f61-8fb4-adf29f725d61-kube-api-access-scdpv\") on node \"crc\" DevicePath \"\"" Feb 28 13:41:00 crc kubenswrapper[4897]: I0228 13:41:00.628486 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ccafbfb-14c3-4f61-8fb4-adf29f725d61-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:41:00 crc kubenswrapper[4897]: I0228 13:41:00.628498 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ccafbfb-14c3-4f61-8fb4-adf29f725d61-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:41:01 crc kubenswrapper[4897]: I0228 13:41:01.049876 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-g6qqk" event={"ID":"0ccafbfb-14c3-4f61-8fb4-adf29f725d61","Type":"ContainerDied","Data":"9af4dfae22dffe6ce6d0f562d40449323bc649f146caac8daf98ec4cb2baf92d"} Feb 28 13:41:01 crc kubenswrapper[4897]: I0228 13:41:01.050192 4897 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="9af4dfae22dffe6ce6d0f562d40449323bc649f146caac8daf98ec4cb2baf92d" Feb 28 13:41:01 crc kubenswrapper[4897]: I0228 13:41:01.050008 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-g6qqk" Feb 28 13:41:01 crc kubenswrapper[4897]: I0228 13:41:01.231645 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 13:41:01 crc kubenswrapper[4897]: I0228 13:41:01.231917 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="e3be6c4a-2460-4245-94f8-36fcc969da66" containerName="nova-scheduler-scheduler" containerID="cri-o://d7088f13b2a564ace0610f91be6aaddd1282ab863dbc9bbc6875bb745292a1c1" gracePeriod=30 Feb 28 13:41:01 crc kubenswrapper[4897]: I0228 13:41:01.249395 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 28 13:41:01 crc kubenswrapper[4897]: I0228 13:41:01.249724 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779" containerName="nova-api-log" containerID="cri-o://8f5aa678cdbdf686b949525fe8bcaf55172d1ca056619c07804e2b072c033244" gracePeriod=30 Feb 28 13:41:01 crc kubenswrapper[4897]: I0228 13:41:01.249808 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779" containerName="nova-api-api" containerID="cri-o://1d6f90f36d900513b04812eb920e2ac3de9a83865b32e49703ce6e999d5698eb" gracePeriod=30 Feb 28 13:41:01 crc kubenswrapper[4897]: I0228 13:41:01.262849 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 13:41:01 crc kubenswrapper[4897]: I0228 13:41:01.263128 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d" 
containerName="nova-metadata-log" containerID="cri-o://74e3e08d8551c0bd74df58ba49879df9133d75f08e9418ff24ab226464774593" gracePeriod=30 Feb 28 13:41:01 crc kubenswrapper[4897]: I0228 13:41:01.263252 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d" containerName="nova-metadata-metadata" containerID="cri-o://a87810745ad69b0517e70d804340480f9a3398ade3c4ada1c59bbe517cf6d4a4" gracePeriod=30 Feb 28 13:41:02 crc kubenswrapper[4897]: I0228 13:41:02.064183 4897 generic.go:334] "Generic (PLEG): container finished" podID="d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779" containerID="8f5aa678cdbdf686b949525fe8bcaf55172d1ca056619c07804e2b072c033244" exitCode=143 Feb 28 13:41:02 crc kubenswrapper[4897]: I0228 13:41:02.064277 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779","Type":"ContainerDied","Data":"8f5aa678cdbdf686b949525fe8bcaf55172d1ca056619c07804e2b072c033244"} Feb 28 13:41:02 crc kubenswrapper[4897]: I0228 13:41:02.066215 4897 generic.go:334] "Generic (PLEG): container finished" podID="95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d" containerID="74e3e08d8551c0bd74df58ba49879df9133d75f08e9418ff24ab226464774593" exitCode=143 Feb 28 13:41:02 crc kubenswrapper[4897]: I0228 13:41:02.066258 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d","Type":"ContainerDied","Data":"74e3e08d8551c0bd74df58ba49879df9133d75f08e9418ff24ab226464774593"} Feb 28 13:41:02 crc kubenswrapper[4897]: I0228 13:41:02.689778 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 13:41:02 crc kubenswrapper[4897]: I0228 13:41:02.697151 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 28 13:41:02 crc kubenswrapper[4897]: I0228 13:41:02.767146 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d-combined-ca-bundle\") pod \"95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d\" (UID: \"95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d\") " Feb 28 13:41:02 crc kubenswrapper[4897]: I0228 13:41:02.767335 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3be6c4a-2460-4245-94f8-36fcc969da66-config-data\") pod \"e3be6c4a-2460-4245-94f8-36fcc969da66\" (UID: \"e3be6c4a-2460-4245-94f8-36fcc969da66\") " Feb 28 13:41:02 crc kubenswrapper[4897]: I0228 13:41:02.767375 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3be6c4a-2460-4245-94f8-36fcc969da66-combined-ca-bundle\") pod \"e3be6c4a-2460-4245-94f8-36fcc969da66\" (UID: \"e3be6c4a-2460-4245-94f8-36fcc969da66\") " Feb 28 13:41:02 crc kubenswrapper[4897]: I0228 13:41:02.767447 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d-config-data\") pod \"95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d\" (UID: \"95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d\") " Feb 28 13:41:02 crc kubenswrapper[4897]: I0228 13:41:02.767489 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zzmg\" (UniqueName: \"kubernetes.io/projected/95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d-kube-api-access-5zzmg\") pod \"95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d\" (UID: \"95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d\") " Feb 28 13:41:02 crc kubenswrapper[4897]: I0228 13:41:02.767540 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d-nova-metadata-tls-certs\") pod \"95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d\" (UID: \"95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d\") " Feb 28 13:41:02 crc kubenswrapper[4897]: I0228 13:41:02.767559 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdhhz\" (UniqueName: \"kubernetes.io/projected/e3be6c4a-2460-4245-94f8-36fcc969da66-kube-api-access-bdhhz\") pod \"e3be6c4a-2460-4245-94f8-36fcc969da66\" (UID: \"e3be6c4a-2460-4245-94f8-36fcc969da66\") " Feb 28 13:41:02 crc kubenswrapper[4897]: I0228 13:41:02.767595 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d-logs\") pod \"95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d\" (UID: \"95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d\") " Feb 28 13:41:02 crc kubenswrapper[4897]: I0228 13:41:02.768303 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d-logs" (OuterVolumeSpecName: "logs") pod "95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d" (UID: "95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:41:02 crc kubenswrapper[4897]: I0228 13:41:02.772266 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3be6c4a-2460-4245-94f8-36fcc969da66-kube-api-access-bdhhz" (OuterVolumeSpecName: "kube-api-access-bdhhz") pod "e3be6c4a-2460-4245-94f8-36fcc969da66" (UID: "e3be6c4a-2460-4245-94f8-36fcc969da66"). InnerVolumeSpecName "kube-api-access-bdhhz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:41:02 crc kubenswrapper[4897]: I0228 13:41:02.778949 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d-kube-api-access-5zzmg" (OuterVolumeSpecName: "kube-api-access-5zzmg") pod "95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d" (UID: "95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d"). InnerVolumeSpecName "kube-api-access-5zzmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:41:02 crc kubenswrapper[4897]: I0228 13:41:02.798096 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3be6c4a-2460-4245-94f8-36fcc969da66-config-data" (OuterVolumeSpecName: "config-data") pod "e3be6c4a-2460-4245-94f8-36fcc969da66" (UID: "e3be6c4a-2460-4245-94f8-36fcc969da66"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:41:02 crc kubenswrapper[4897]: I0228 13:41:02.799753 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3be6c4a-2460-4245-94f8-36fcc969da66-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3be6c4a-2460-4245-94f8-36fcc969da66" (UID: "e3be6c4a-2460-4245-94f8-36fcc969da66"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:41:02 crc kubenswrapper[4897]: I0228 13:41:02.807249 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d" (UID: "95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:41:02 crc kubenswrapper[4897]: I0228 13:41:02.811046 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d-config-data" (OuterVolumeSpecName: "config-data") pod "95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d" (UID: "95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:41:02 crc kubenswrapper[4897]: I0228 13:41:02.846178 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d" (UID: "95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:41:02 crc kubenswrapper[4897]: I0228 13:41:02.854409 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 28 13:41:02 crc kubenswrapper[4897]: I0228 13:41:02.869674 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3be6c4a-2460-4245-94f8-36fcc969da66-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:41:02 crc kubenswrapper[4897]: I0228 13:41:02.869888 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3be6c4a-2460-4245-94f8-36fcc969da66-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:41:02 crc kubenswrapper[4897]: I0228 13:41:02.869904 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:41:02 crc kubenswrapper[4897]: I0228 13:41:02.869914 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zzmg\" (UniqueName: \"kubernetes.io/projected/95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d-kube-api-access-5zzmg\") on node \"crc\" DevicePath \"\"" Feb 28 13:41:02 crc kubenswrapper[4897]: I0228 13:41:02.869922 4897 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 13:41:02 crc kubenswrapper[4897]: I0228 13:41:02.869931 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdhhz\" (UniqueName: \"kubernetes.io/projected/e3be6c4a-2460-4245-94f8-36fcc969da66-kube-api-access-bdhhz\") on node \"crc\" DevicePath \"\"" Feb 28 13:41:02 crc kubenswrapper[4897]: I0228 13:41:02.870155 4897 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d-logs\") on node \"crc\" DevicePath \"\"" Feb 28 13:41:02 crc kubenswrapper[4897]: I0228 
13:41:02.870169 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:41:02 crc kubenswrapper[4897]: I0228 13:41:02.971399 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779-internal-tls-certs\") pod \"d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779\" (UID: \"d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779\") " Feb 28 13:41:02 crc kubenswrapper[4897]: I0228 13:41:02.971473 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779-config-data\") pod \"d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779\" (UID: \"d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779\") " Feb 28 13:41:02 crc kubenswrapper[4897]: I0228 13:41:02.971643 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779-combined-ca-bundle\") pod \"d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779\" (UID: \"d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779\") " Feb 28 13:41:02 crc kubenswrapper[4897]: I0228 13:41:02.971683 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c926g\" (UniqueName: \"kubernetes.io/projected/d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779-kube-api-access-c926g\") pod \"d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779\" (UID: \"d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779\") " Feb 28 13:41:02 crc kubenswrapper[4897]: I0228 13:41:02.971717 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779-logs\") pod \"d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779\" (UID: \"d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779\") " Feb 28 
13:41:02 crc kubenswrapper[4897]: I0228 13:41:02.971764 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779-public-tls-certs\") pod \"d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779\" (UID: \"d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779\") " Feb 28 13:41:02 crc kubenswrapper[4897]: I0228 13:41:02.972991 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779-logs" (OuterVolumeSpecName: "logs") pod "d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779" (UID: "d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:41:02 crc kubenswrapper[4897]: I0228 13:41:02.976230 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779-kube-api-access-c926g" (OuterVolumeSpecName: "kube-api-access-c926g") pod "d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779" (UID: "d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779"). InnerVolumeSpecName "kube-api-access-c926g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.002696 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779" (UID: "d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.004561 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779-config-data" (OuterVolumeSpecName: "config-data") pod "d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779" (UID: "d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.023583 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779" (UID: "d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.028631 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779" (UID: "d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.074511 4897 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.074563 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779-config-data\") on node \"crc\" DevicePath \"\""
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.074584 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.074603 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c926g\" (UniqueName: \"kubernetes.io/projected/d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779-kube-api-access-c926g\") on node \"crc\" DevicePath \"\""
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.074623 4897 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779-logs\") on node \"crc\" DevicePath \"\""
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.074639 4897 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.077518 4897 generic.go:334] "Generic (PLEG): container finished" podID="95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d" containerID="a87810745ad69b0517e70d804340480f9a3398ade3c4ada1c59bbe517cf6d4a4" exitCode=0
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.077592 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.077601 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d","Type":"ContainerDied","Data":"a87810745ad69b0517e70d804340480f9a3398ade3c4ada1c59bbe517cf6d4a4"}
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.077641 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d","Type":"ContainerDied","Data":"c822eb8307ea44333c6d83bd265045fb183a865bf568925f9767f4b495d35a58"}
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.077664 4897 scope.go:117] "RemoveContainer" containerID="a87810745ad69b0517e70d804340480f9a3398ade3c4ada1c59bbe517cf6d4a4"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.080148 4897 generic.go:334] "Generic (PLEG): container finished" podID="d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779" containerID="1d6f90f36d900513b04812eb920e2ac3de9a83865b32e49703ce6e999d5698eb" exitCode=0
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.080292 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.080329 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779","Type":"ContainerDied","Data":"1d6f90f36d900513b04812eb920e2ac3de9a83865b32e49703ce6e999d5698eb"}
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.080366 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779","Type":"ContainerDied","Data":"e3f0177d48ee07cea19b8c6abc36c51cf36729a410e12ef5a13366316ac7da26"}
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.083740 4897 generic.go:334] "Generic (PLEG): container finished" podID="e3be6c4a-2460-4245-94f8-36fcc969da66" containerID="d7088f13b2a564ace0610f91be6aaddd1282ab863dbc9bbc6875bb745292a1c1" exitCode=0
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.083780 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e3be6c4a-2460-4245-94f8-36fcc969da66","Type":"ContainerDied","Data":"d7088f13b2a564ace0610f91be6aaddd1282ab863dbc9bbc6875bb745292a1c1"}
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.083803 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e3be6c4a-2460-4245-94f8-36fcc969da66","Type":"ContainerDied","Data":"dac4dd60dd0a7e52676dcd78f29ea12174161d9a3ee259493f167b748374d135"}
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.083847 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.106371 4897 scope.go:117] "RemoveContainer" containerID="74e3e08d8551c0bd74df58ba49879df9133d75f08e9418ff24ab226464774593"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.125643 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.143536 4897 scope.go:117] "RemoveContainer" containerID="a87810745ad69b0517e70d804340480f9a3398ade3c4ada1c59bbe517cf6d4a4"
Feb 28 13:41:03 crc kubenswrapper[4897]: E0228 13:41:03.146958 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a87810745ad69b0517e70d804340480f9a3398ade3c4ada1c59bbe517cf6d4a4\": container with ID starting with a87810745ad69b0517e70d804340480f9a3398ade3c4ada1c59bbe517cf6d4a4 not found: ID does not exist" containerID="a87810745ad69b0517e70d804340480f9a3398ade3c4ada1c59bbe517cf6d4a4"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.146997 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a87810745ad69b0517e70d804340480f9a3398ade3c4ada1c59bbe517cf6d4a4"} err="failed to get container status \"a87810745ad69b0517e70d804340480f9a3398ade3c4ada1c59bbe517cf6d4a4\": rpc error: code = NotFound desc = could not find container \"a87810745ad69b0517e70d804340480f9a3398ade3c4ada1c59bbe517cf6d4a4\": container with ID starting with a87810745ad69b0517e70d804340480f9a3398ade3c4ada1c59bbe517cf6d4a4 not found: ID does not exist"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.147023 4897 scope.go:117] "RemoveContainer" containerID="74e3e08d8551c0bd74df58ba49879df9133d75f08e9418ff24ab226464774593"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.149805 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 28 13:41:03 crc kubenswrapper[4897]: E0228 13:41:03.152631 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74e3e08d8551c0bd74df58ba49879df9133d75f08e9418ff24ab226464774593\": container with ID starting with 74e3e08d8551c0bd74df58ba49879df9133d75f08e9418ff24ab226464774593 not found: ID does not exist" containerID="74e3e08d8551c0bd74df58ba49879df9133d75f08e9418ff24ab226464774593"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.152669 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74e3e08d8551c0bd74df58ba49879df9133d75f08e9418ff24ab226464774593"} err="failed to get container status \"74e3e08d8551c0bd74df58ba49879df9133d75f08e9418ff24ab226464774593\": rpc error: code = NotFound desc = could not find container \"74e3e08d8551c0bd74df58ba49879df9133d75f08e9418ff24ab226464774593\": container with ID starting with 74e3e08d8551c0bd74df58ba49879df9133d75f08e9418ff24ab226464774593 not found: ID does not exist"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.152694 4897 scope.go:117] "RemoveContainer" containerID="1d6f90f36d900513b04812eb920e2ac3de9a83865b32e49703ce6e999d5698eb"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.166433 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.187479 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.218518 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.223134 4897 scope.go:117] "RemoveContainer" containerID="8f5aa678cdbdf686b949525fe8bcaf55172d1ca056619c07804e2b072c033244"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.255782 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 28 13:41:03 crc kubenswrapper[4897]: E0228 13:41:03.256744 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3be6c4a-2460-4245-94f8-36fcc969da66" containerName="nova-scheduler-scheduler"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.256770 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3be6c4a-2460-4245-94f8-36fcc969da66" containerName="nova-scheduler-scheduler"
Feb 28 13:41:03 crc kubenswrapper[4897]: E0228 13:41:03.256798 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d" containerName="nova-metadata-metadata"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.256807 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d" containerName="nova-metadata-metadata"
Feb 28 13:41:03 crc kubenswrapper[4897]: E0228 13:41:03.256829 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ccafbfb-14c3-4f61-8fb4-adf29f725d61" containerName="nova-manage"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.256837 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ccafbfb-14c3-4f61-8fb4-adf29f725d61" containerName="nova-manage"
Feb 28 13:41:03 crc kubenswrapper[4897]: E0228 13:41:03.256853 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e061011e-e58b-458e-aba8-8e0ace759117" containerName="init"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.256861 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="e061011e-e58b-458e-aba8-8e0ace759117" containerName="init"
Feb 28 13:41:03 crc kubenswrapper[4897]: E0228 13:41:03.256944 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d" containerName="nova-metadata-log"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.256955 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d" containerName="nova-metadata-log"
Feb 28 13:41:03 crc kubenswrapper[4897]: E0228 13:41:03.256993 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779" containerName="nova-api-log"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.257003 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779" containerName="nova-api-log"
Feb 28 13:41:03 crc kubenswrapper[4897]: E0228 13:41:03.257022 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e061011e-e58b-458e-aba8-8e0ace759117" containerName="dnsmasq-dns"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.257031 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="e061011e-e58b-458e-aba8-8e0ace759117" containerName="dnsmasq-dns"
Feb 28 13:41:03 crc kubenswrapper[4897]: E0228 13:41:03.257069 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779" containerName="nova-api-api"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.257080 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779" containerName="nova-api-api"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.257691 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d" containerName="nova-metadata-log"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.257736 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779" containerName="nova-api-api"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.257770 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="e061011e-e58b-458e-aba8-8e0ace759117" containerName="dnsmasq-dns"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.257788 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ccafbfb-14c3-4f61-8fb4-adf29f725d61" containerName="nova-manage"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.257822 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3be6c4a-2460-4245-94f8-36fcc969da66" containerName="nova-scheduler-scheduler"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.257832 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d" containerName="nova-metadata-metadata"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.257840 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779" containerName="nova-api-log"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.259987 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.263879 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.267599 4897 scope.go:117] "RemoveContainer" containerID="1d6f90f36d900513b04812eb920e2ac3de9a83865b32e49703ce6e999d5698eb"
Feb 28 13:41:03 crc kubenswrapper[4897]: E0228 13:41:03.268687 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d6f90f36d900513b04812eb920e2ac3de9a83865b32e49703ce6e999d5698eb\": container with ID starting with 1d6f90f36d900513b04812eb920e2ac3de9a83865b32e49703ce6e999d5698eb not found: ID does not exist" containerID="1d6f90f36d900513b04812eb920e2ac3de9a83865b32e49703ce6e999d5698eb"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.268726 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d6f90f36d900513b04812eb920e2ac3de9a83865b32e49703ce6e999d5698eb"} err="failed to get container status \"1d6f90f36d900513b04812eb920e2ac3de9a83865b32e49703ce6e999d5698eb\": rpc error: code = NotFound desc = could not find container \"1d6f90f36d900513b04812eb920e2ac3de9a83865b32e49703ce6e999d5698eb\": container with ID starting with 1d6f90f36d900513b04812eb920e2ac3de9a83865b32e49703ce6e999d5698eb not found: ID does not exist"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.268751 4897 scope.go:117] "RemoveContainer" containerID="8f5aa678cdbdf686b949525fe8bcaf55172d1ca056619c07804e2b072c033244"
Feb 28 13:41:03 crc kubenswrapper[4897]: E0228 13:41:03.269088 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f5aa678cdbdf686b949525fe8bcaf55172d1ca056619c07804e2b072c033244\": container with ID starting with 8f5aa678cdbdf686b949525fe8bcaf55172d1ca056619c07804e2b072c033244 not found: ID does not exist" containerID="8f5aa678cdbdf686b949525fe8bcaf55172d1ca056619c07804e2b072c033244"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.269202 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f5aa678cdbdf686b949525fe8bcaf55172d1ca056619c07804e2b072c033244"} err="failed to get container status \"8f5aa678cdbdf686b949525fe8bcaf55172d1ca056619c07804e2b072c033244\": rpc error: code = NotFound desc = could not find container \"8f5aa678cdbdf686b949525fe8bcaf55172d1ca056619c07804e2b072c033244\": container with ID starting with 8f5aa678cdbdf686b949525fe8bcaf55172d1ca056619c07804e2b072c033244 not found: ID does not exist"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.269342 4897 scope.go:117] "RemoveContainer" containerID="d7088f13b2a564ace0610f91be6aaddd1282ab863dbc9bbc6875bb745292a1c1"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.278586 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.287296 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.289760 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.292948 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.293050 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.292958 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.297946 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.302748 4897 scope.go:117] "RemoveContainer" containerID="d7088f13b2a564ace0610f91be6aaddd1282ab863dbc9bbc6875bb745292a1c1"
Feb 28 13:41:03 crc kubenswrapper[4897]: E0228 13:41:03.306832 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7088f13b2a564ace0610f91be6aaddd1282ab863dbc9bbc6875bb745292a1c1\": container with ID starting with d7088f13b2a564ace0610f91be6aaddd1282ab863dbc9bbc6875bb745292a1c1 not found: ID does not exist" containerID="d7088f13b2a564ace0610f91be6aaddd1282ab863dbc9bbc6875bb745292a1c1"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.306970 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7088f13b2a564ace0610f91be6aaddd1282ab863dbc9bbc6875bb745292a1c1"} err="failed to get container status \"d7088f13b2a564ace0610f91be6aaddd1282ab863dbc9bbc6875bb745292a1c1\": rpc error: code = NotFound desc = could not find container \"d7088f13b2a564ace0610f91be6aaddd1282ab863dbc9bbc6875bb745292a1c1\": container with ID starting with d7088f13b2a564ace0610f91be6aaddd1282ab863dbc9bbc6875bb745292a1c1 not found: ID does not exist"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.308941 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.319525 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.322171 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.324259 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.324884 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.340251 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.379477 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fede0b0b-b487-4e63-9622-4863d3575d89-public-tls-certs\") pod \"nova-api-0\" (UID: \"fede0b0b-b487-4e63-9622-4863d3575d89\") " pod="openstack/nova-api-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.379521 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5c10f14-b08c-4267-8436-22d028c4db66-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f5c10f14-b08c-4267-8436-22d028c4db66\") " pod="openstack/nova-scheduler-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.379542 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/151132bb-bcf9-4d40-a72b-5f6b80c23fb1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"151132bb-bcf9-4d40-a72b-5f6b80c23fb1\") " pod="openstack/nova-metadata-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.379565 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5c10f14-b08c-4267-8436-22d028c4db66-config-data\") pod \"nova-scheduler-0\" (UID: \"f5c10f14-b08c-4267-8436-22d028c4db66\") " pod="openstack/nova-scheduler-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.379624 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fede0b0b-b487-4e63-9622-4863d3575d89-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fede0b0b-b487-4e63-9622-4863d3575d89\") " pod="openstack/nova-api-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.379685 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/151132bb-bcf9-4d40-a72b-5f6b80c23fb1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"151132bb-bcf9-4d40-a72b-5f6b80c23fb1\") " pod="openstack/nova-metadata-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.379713 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fede0b0b-b487-4e63-9622-4863d3575d89-config-data\") pod \"nova-api-0\" (UID: \"fede0b0b-b487-4e63-9622-4863d3575d89\") " pod="openstack/nova-api-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.379813 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/151132bb-bcf9-4d40-a72b-5f6b80c23fb1-logs\") pod \"nova-metadata-0\" (UID: \"151132bb-bcf9-4d40-a72b-5f6b80c23fb1\") " pod="openstack/nova-metadata-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.379945 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fede0b0b-b487-4e63-9622-4863d3575d89-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fede0b0b-b487-4e63-9622-4863d3575d89\") " pod="openstack/nova-api-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.380024 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fede0b0b-b487-4e63-9622-4863d3575d89-logs\") pod \"nova-api-0\" (UID: \"fede0b0b-b487-4e63-9622-4863d3575d89\") " pod="openstack/nova-api-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.380060 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb67p\" (UniqueName: \"kubernetes.io/projected/151132bb-bcf9-4d40-a72b-5f6b80c23fb1-kube-api-access-jb67p\") pod \"nova-metadata-0\" (UID: \"151132bb-bcf9-4d40-a72b-5f6b80c23fb1\") " pod="openstack/nova-metadata-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.380090 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5gvq\" (UniqueName: \"kubernetes.io/projected/f5c10f14-b08c-4267-8436-22d028c4db66-kube-api-access-x5gvq\") pod \"nova-scheduler-0\" (UID: \"f5c10f14-b08c-4267-8436-22d028c4db66\") " pod="openstack/nova-scheduler-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.380135 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/151132bb-bcf9-4d40-a72b-5f6b80c23fb1-config-data\") pod \"nova-metadata-0\" (UID: \"151132bb-bcf9-4d40-a72b-5f6b80c23fb1\") " pod="openstack/nova-metadata-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.380173 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkjc8\" (UniqueName: \"kubernetes.io/projected/fede0b0b-b487-4e63-9622-4863d3575d89-kube-api-access-pkjc8\") pod \"nova-api-0\" (UID: \"fede0b0b-b487-4e63-9622-4863d3575d89\") " pod="openstack/nova-api-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.481768 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fede0b0b-b487-4e63-9622-4863d3575d89-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fede0b0b-b487-4e63-9622-4863d3575d89\") " pod="openstack/nova-api-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.481844 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/151132bb-bcf9-4d40-a72b-5f6b80c23fb1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"151132bb-bcf9-4d40-a72b-5f6b80c23fb1\") " pod="openstack/nova-metadata-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.481892 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fede0b0b-b487-4e63-9622-4863d3575d89-config-data\") pod \"nova-api-0\" (UID: \"fede0b0b-b487-4e63-9622-4863d3575d89\") " pod="openstack/nova-api-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.481926 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/151132bb-bcf9-4d40-a72b-5f6b80c23fb1-logs\") pod \"nova-metadata-0\" (UID: \"151132bb-bcf9-4d40-a72b-5f6b80c23fb1\") " pod="openstack/nova-metadata-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.481966 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fede0b0b-b487-4e63-9622-4863d3575d89-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fede0b0b-b487-4e63-9622-4863d3575d89\") " pod="openstack/nova-api-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.481999 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fede0b0b-b487-4e63-9622-4863d3575d89-logs\") pod \"nova-api-0\" (UID: \"fede0b0b-b487-4e63-9622-4863d3575d89\") " pod="openstack/nova-api-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.482021 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb67p\" (UniqueName: \"kubernetes.io/projected/151132bb-bcf9-4d40-a72b-5f6b80c23fb1-kube-api-access-jb67p\") pod \"nova-metadata-0\" (UID: \"151132bb-bcf9-4d40-a72b-5f6b80c23fb1\") " pod="openstack/nova-metadata-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.482043 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5gvq\" (UniqueName: \"kubernetes.io/projected/f5c10f14-b08c-4267-8436-22d028c4db66-kube-api-access-x5gvq\") pod \"nova-scheduler-0\" (UID: \"f5c10f14-b08c-4267-8436-22d028c4db66\") " pod="openstack/nova-scheduler-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.482062 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/151132bb-bcf9-4d40-a72b-5f6b80c23fb1-config-data\") pod \"nova-metadata-0\" (UID: \"151132bb-bcf9-4d40-a72b-5f6b80c23fb1\") " pod="openstack/nova-metadata-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.482092 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkjc8\" (UniqueName: \"kubernetes.io/projected/fede0b0b-b487-4e63-9622-4863d3575d89-kube-api-access-pkjc8\") pod \"nova-api-0\" (UID: \"fede0b0b-b487-4e63-9622-4863d3575d89\") " pod="openstack/nova-api-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.482194 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fede0b0b-b487-4e63-9622-4863d3575d89-public-tls-certs\") pod \"nova-api-0\" (UID: \"fede0b0b-b487-4e63-9622-4863d3575d89\") " pod="openstack/nova-api-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.482214 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5c10f14-b08c-4267-8436-22d028c4db66-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f5c10f14-b08c-4267-8436-22d028c4db66\") " pod="openstack/nova-scheduler-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.482233 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/151132bb-bcf9-4d40-a72b-5f6b80c23fb1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"151132bb-bcf9-4d40-a72b-5f6b80c23fb1\") " pod="openstack/nova-metadata-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.482257 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5c10f14-b08c-4267-8436-22d028c4db66-config-data\") pod \"nova-scheduler-0\" (UID: \"f5c10f14-b08c-4267-8436-22d028c4db66\") " pod="openstack/nova-scheduler-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.483670 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/151132bb-bcf9-4d40-a72b-5f6b80c23fb1-logs\") pod \"nova-metadata-0\" (UID: \"151132bb-bcf9-4d40-a72b-5f6b80c23fb1\") " pod="openstack/nova-metadata-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.487914 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fede0b0b-b487-4e63-9622-4863d3575d89-logs\") pod \"nova-api-0\" (UID: \"fede0b0b-b487-4e63-9622-4863d3575d89\") " pod="openstack/nova-api-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.490151 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/151132bb-bcf9-4d40-a72b-5f6b80c23fb1-config-data\") pod \"nova-metadata-0\" (UID: \"151132bb-bcf9-4d40-a72b-5f6b80c23fb1\") " pod="openstack/nova-metadata-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.491767 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/151132bb-bcf9-4d40-a72b-5f6b80c23fb1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"151132bb-bcf9-4d40-a72b-5f6b80c23fb1\") " pod="openstack/nova-metadata-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.492438 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5c10f14-b08c-4267-8436-22d028c4db66-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f5c10f14-b08c-4267-8436-22d028c4db66\") " pod="openstack/nova-scheduler-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.493069 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5c10f14-b08c-4267-8436-22d028c4db66-config-data\") pod \"nova-scheduler-0\" (UID: \"f5c10f14-b08c-4267-8436-22d028c4db66\") " pod="openstack/nova-scheduler-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.493394 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fede0b0b-b487-4e63-9622-4863d3575d89-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fede0b0b-b487-4e63-9622-4863d3575d89\") " pod="openstack/nova-api-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.493406 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fede0b0b-b487-4e63-9622-4863d3575d89-config-data\") pod \"nova-api-0\" (UID: \"fede0b0b-b487-4e63-9622-4863d3575d89\") " pod="openstack/nova-api-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.493476 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fede0b0b-b487-4e63-9622-4863d3575d89-public-tls-certs\") pod \"nova-api-0\" (UID: \"fede0b0b-b487-4e63-9622-4863d3575d89\") " pod="openstack/nova-api-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.498031 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fede0b0b-b487-4e63-9622-4863d3575d89-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fede0b0b-b487-4e63-9622-4863d3575d89\") " pod="openstack/nova-api-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.501518 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/151132bb-bcf9-4d40-a72b-5f6b80c23fb1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"151132bb-bcf9-4d40-a72b-5f6b80c23fb1\") " pod="openstack/nova-metadata-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.502541 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb67p\" (UniqueName: \"kubernetes.io/projected/151132bb-bcf9-4d40-a72b-5f6b80c23fb1-kube-api-access-jb67p\") pod \"nova-metadata-0\" (UID: \"151132bb-bcf9-4d40-a72b-5f6b80c23fb1\") " pod="openstack/nova-metadata-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.506799 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkjc8\" (UniqueName: \"kubernetes.io/projected/fede0b0b-b487-4e63-9622-4863d3575d89-kube-api-access-pkjc8\") pod \"nova-api-0\" (UID: \"fede0b0b-b487-4e63-9622-4863d3575d89\") " pod="openstack/nova-api-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.510394 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5gvq\" (UniqueName: \"kubernetes.io/projected/f5c10f14-b08c-4267-8436-22d028c4db66-kube-api-access-x5gvq\") pod \"nova-scheduler-0\" (UID: \"f5c10f14-b08c-4267-8436-22d028c4db66\") " pod="openstack/nova-scheduler-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.582979 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.610793 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 28 13:41:03 crc kubenswrapper[4897]: I0228 13:41:03.642657 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 28 13:41:04 crc kubenswrapper[4897]: I0228 13:41:04.065301 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 28 13:41:04 crc kubenswrapper[4897]: I0228 13:41:04.077507 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 28 13:41:04 crc kubenswrapper[4897]: I0228 13:41:04.091782 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f5c10f14-b08c-4267-8436-22d028c4db66","Type":"ContainerStarted","Data":"5f713f3cc3d98760c0ecb3fcf92af8e7793c02370342f35d24a5ed2f2da5d0c4"}
Feb 28 13:41:04 crc kubenswrapper[4897]: I0228 13:41:04.094735 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fede0b0b-b487-4e63-9622-4863d3575d89","Type":"ContainerStarted","Data":"6812dc7e4d0509a17f1661c5f71cf3982119d752bf6855e5f0cd3cc095f6b0d6"}
Feb 28 13:41:04 crc kubenswrapper[4897]: I0228 13:41:04.227060 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 28 13:41:04 crc kubenswrapper[4897]: W0228 13:41:04.229493 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod151132bb_bcf9_4d40_a72b_5f6b80c23fb1.slice/crio-077f9bddc80b9ed6ca354aff3b405804112c0269e44ae2a388eeb91a78feda86 WatchSource:0}: Error finding container 077f9bddc80b9ed6ca354aff3b405804112c0269e44ae2a388eeb91a78feda86: Status 404 returned error can't find the container with id 077f9bddc80b9ed6ca354aff3b405804112c0269e44ae2a388eeb91a78feda86
Feb 28 13:41:04 crc kubenswrapper[4897]: E0228 13:41:04.459775 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c"
Feb 28 13:41:04 crc kubenswrapper[4897]: I0228 13:41:04.471106 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d" path="/var/lib/kubelet/pods/95a7b71f-427b-4f9d-97eb-af2ebd6f2c4d/volumes"
Feb 28 13:41:04 crc kubenswrapper[4897]: I0228 13:41:04.471811 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779" path="/var/lib/kubelet/pods/d00e0fcd-cd38-4c8e-a7d8-ef557c4f9779/volumes"
Feb 28 13:41:04 crc kubenswrapper[4897]: I0228 13:41:04.472370 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3be6c4a-2460-4245-94f8-36fcc969da66" path="/var/lib/kubelet/pods/e3be6c4a-2460-4245-94f8-36fcc969da66/volumes"
Feb 28 13:41:05 crc kubenswrapper[4897]: I0228 13:41:05.113601 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"151132bb-bcf9-4d40-a72b-5f6b80c23fb1","Type":"ContainerStarted","Data":"dcd4fee5c4e631b5aa8007520c730276d1cdf35285a7bac724467762e7097d4b"}
Feb 28 13:41:05 crc kubenswrapper[4897]: I0228 13:41:05.113965 4897 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"151132bb-bcf9-4d40-a72b-5f6b80c23fb1","Type":"ContainerStarted","Data":"bc27c9d0efa3c1b978d15ca19c73239980f2a2ced861dad80e788297d92a1894"} Feb 28 13:41:05 crc kubenswrapper[4897]: I0228 13:41:05.113983 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"151132bb-bcf9-4d40-a72b-5f6b80c23fb1","Type":"ContainerStarted","Data":"077f9bddc80b9ed6ca354aff3b405804112c0269e44ae2a388eeb91a78feda86"} Feb 28 13:41:05 crc kubenswrapper[4897]: I0228 13:41:05.121057 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fede0b0b-b487-4e63-9622-4863d3575d89","Type":"ContainerStarted","Data":"e6a6bf8d2d7b6decc09d387ac5688d397c1e2aabb31b176aab9b581370e602a3"} Feb 28 13:41:05 crc kubenswrapper[4897]: I0228 13:41:05.121097 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fede0b0b-b487-4e63-9622-4863d3575d89","Type":"ContainerStarted","Data":"e66b02f9f489d1f5cd57a86fa80991be80068f3cfff14dba1bb8f76e4742c140"} Feb 28 13:41:05 crc kubenswrapper[4897]: I0228 13:41:05.123130 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f5c10f14-b08c-4267-8436-22d028c4db66","Type":"ContainerStarted","Data":"be06276b2fdabcbb46fb87ffa9fa0f3169b4c885894fd1c60c8e62603955e89e"} Feb 28 13:41:05 crc kubenswrapper[4897]: I0228 13:41:05.147248 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.147234308 podStartE2EDuration="2.147234308s" podCreationTimestamp="2026-02-28 13:41:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:41:05.144380965 +0000 UTC m=+1479.386701622" watchObservedRunningTime="2026-02-28 13:41:05.147234308 +0000 UTC m=+1479.389554965" Feb 28 13:41:05 crc 
kubenswrapper[4897]: I0228 13:41:05.174601 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.174584394 podStartE2EDuration="2.174584394s" podCreationTimestamp="2026-02-28 13:41:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:41:05.169980142 +0000 UTC m=+1479.412300809" watchObservedRunningTime="2026-02-28 13:41:05.174584394 +0000 UTC m=+1479.416905051" Feb 28 13:41:05 crc kubenswrapper[4897]: I0228 13:41:05.192904 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.192886071 podStartE2EDuration="2.192886071s" podCreationTimestamp="2026-02-28 13:41:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:41:05.184916712 +0000 UTC m=+1479.427237379" watchObservedRunningTime="2026-02-28 13:41:05.192886071 +0000 UTC m=+1479.435206728" Feb 28 13:41:05 crc kubenswrapper[4897]: E0228 13:41:05.458409 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538100-v4j6s" podUID="b6642318-7bfd-49f2-86e3-0fe4a7ec2709" Feb 28 13:41:07 crc kubenswrapper[4897]: E0228 13:41:07.140692 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)" image="registry.redhat.io/ubi9/httpd-24:latest" Feb 28 13:41:07 crc kubenswrapper[4897]: E0228 13:41:07.141124 4897 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7hjth,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(746916d9-ca42-480b-9aa7-7e1fe9803900): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 13:41:07 crc kubenswrapper[4897]: E0228 13:41:07.142753 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)\"" pod="openstack/ceilometer-0" podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" Feb 28 13:41:08 crc kubenswrapper[4897]: I0228 13:41:08.583300 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 28 13:41:08 crc kubenswrapper[4897]: I0228 13:41:08.642969 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/nova-metadata-0" Feb 28 13:41:08 crc kubenswrapper[4897]: I0228 13:41:08.643051 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 28 13:41:13 crc kubenswrapper[4897]: I0228 13:41:13.583470 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 28 13:41:13 crc kubenswrapper[4897]: I0228 13:41:13.611386 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 28 13:41:13 crc kubenswrapper[4897]: I0228 13:41:13.611462 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 28 13:41:13 crc kubenswrapper[4897]: I0228 13:41:13.643202 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 28 13:41:13 crc kubenswrapper[4897]: I0228 13:41:13.643301 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 28 13:41:13 crc kubenswrapper[4897]: I0228 13:41:13.653148 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 28 13:41:14 crc kubenswrapper[4897]: I0228 13:41:14.275457 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 28 13:41:14 crc kubenswrapper[4897]: E0228 13:41:14.458903 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" Feb 28 13:41:14 crc kubenswrapper[4897]: I0228 13:41:14.644895 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" 
podUID="fede0b0b-b487-4e63-9622-4863d3575d89" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.226:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 28 13:41:14 crc kubenswrapper[4897]: I0228 13:41:14.645569 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fede0b0b-b487-4e63-9622-4863d3575d89" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.226:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 28 13:41:14 crc kubenswrapper[4897]: I0228 13:41:14.662515 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="151132bb-bcf9-4d40-a72b-5f6b80c23fb1" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.227:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 28 13:41:14 crc kubenswrapper[4897]: I0228 13:41:14.662583 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="151132bb-bcf9-4d40-a72b-5f6b80c23fb1" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.227:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 28 13:41:16 crc kubenswrapper[4897]: I0228 13:41:16.473636 4897 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 13:41:17 crc kubenswrapper[4897]: E0228 13:41:17.042815 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" 
Feb 28 13:41:17 crc kubenswrapper[4897]: E0228 13:41:17.043423 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7wpnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-29kqk_openshift-marketplace(dbe86f80-68e4-4170-8801-cea07c362d5c): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 13:41:17 crc kubenswrapper[4897]: E0228 13:41:17.044551 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:41:19 crc kubenswrapper[4897]: E0228 13:41:19.459997 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538100-v4j6s" podUID="b6642318-7bfd-49f2-86e3-0fe4a7ec2709" Feb 28 13:41:19 crc kubenswrapper[4897]: E0228 13:41:19.460730 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" Feb 28 13:41:23 crc kubenswrapper[4897]: I0228 13:41:23.629818 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 28 13:41:23 crc kubenswrapper[4897]: I0228 13:41:23.632349 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 28 13:41:23 crc kubenswrapper[4897]: I0228 13:41:23.633215 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-api-0" Feb 28 13:41:23 crc kubenswrapper[4897]: I0228 13:41:23.648161 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 28 13:41:23 crc kubenswrapper[4897]: I0228 13:41:23.650267 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 28 13:41:23 crc kubenswrapper[4897]: I0228 13:41:23.654181 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 28 13:41:23 crc kubenswrapper[4897]: I0228 13:41:23.662018 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 28 13:41:24 crc kubenswrapper[4897]: I0228 13:41:24.359383 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 28 13:41:24 crc kubenswrapper[4897]: I0228 13:41:24.374710 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 28 13:41:24 crc kubenswrapper[4897]: I0228 13:41:24.376553 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 28 13:41:28 crc kubenswrapper[4897]: I0228 13:41:28.200139 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bjhzq"] Feb 28 13:41:28 crc kubenswrapper[4897]: I0228 13:41:28.203852 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bjhzq" Feb 28 13:41:28 crc kubenswrapper[4897]: I0228 13:41:28.220070 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bjhzq"] Feb 28 13:41:28 crc kubenswrapper[4897]: I0228 13:41:28.359474 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14af1d5e-f67c-4675-afb4-4aff4b78237c-utilities\") pod \"redhat-marketplace-bjhzq\" (UID: \"14af1d5e-f67c-4675-afb4-4aff4b78237c\") " pod="openshift-marketplace/redhat-marketplace-bjhzq" Feb 28 13:41:28 crc kubenswrapper[4897]: I0228 13:41:28.359588 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14af1d5e-f67c-4675-afb4-4aff4b78237c-catalog-content\") pod \"redhat-marketplace-bjhzq\" (UID: \"14af1d5e-f67c-4675-afb4-4aff4b78237c\") " pod="openshift-marketplace/redhat-marketplace-bjhzq" Feb 28 13:41:28 crc kubenswrapper[4897]: I0228 13:41:28.359655 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdxzw\" (UniqueName: \"kubernetes.io/projected/14af1d5e-f67c-4675-afb4-4aff4b78237c-kube-api-access-vdxzw\") pod \"redhat-marketplace-bjhzq\" (UID: \"14af1d5e-f67c-4675-afb4-4aff4b78237c\") " pod="openshift-marketplace/redhat-marketplace-bjhzq" Feb 28 13:41:28 crc kubenswrapper[4897]: E0228 13:41:28.458875 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" Feb 28 13:41:28 crc kubenswrapper[4897]: E0228 
13:41:28.458951 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:41:28 crc kubenswrapper[4897]: I0228 13:41:28.460931 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14af1d5e-f67c-4675-afb4-4aff4b78237c-utilities\") pod \"redhat-marketplace-bjhzq\" (UID: \"14af1d5e-f67c-4675-afb4-4aff4b78237c\") " pod="openshift-marketplace/redhat-marketplace-bjhzq" Feb 28 13:41:28 crc kubenswrapper[4897]: I0228 13:41:28.461019 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14af1d5e-f67c-4675-afb4-4aff4b78237c-catalog-content\") pod \"redhat-marketplace-bjhzq\" (UID: \"14af1d5e-f67c-4675-afb4-4aff4b78237c\") " pod="openshift-marketplace/redhat-marketplace-bjhzq" Feb 28 13:41:28 crc kubenswrapper[4897]: I0228 13:41:28.461094 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdxzw\" (UniqueName: \"kubernetes.io/projected/14af1d5e-f67c-4675-afb4-4aff4b78237c-kube-api-access-vdxzw\") pod \"redhat-marketplace-bjhzq\" (UID: \"14af1d5e-f67c-4675-afb4-4aff4b78237c\") " pod="openshift-marketplace/redhat-marketplace-bjhzq" Feb 28 13:41:28 crc kubenswrapper[4897]: I0228 13:41:28.461544 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14af1d5e-f67c-4675-afb4-4aff4b78237c-catalog-content\") pod \"redhat-marketplace-bjhzq\" (UID: \"14af1d5e-f67c-4675-afb4-4aff4b78237c\") " pod="openshift-marketplace/redhat-marketplace-bjhzq" Feb 28 13:41:28 crc kubenswrapper[4897]: I0228 13:41:28.461652 4897 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14af1d5e-f67c-4675-afb4-4aff4b78237c-utilities\") pod \"redhat-marketplace-bjhzq\" (UID: \"14af1d5e-f67c-4675-afb4-4aff4b78237c\") " pod="openshift-marketplace/redhat-marketplace-bjhzq" Feb 28 13:41:28 crc kubenswrapper[4897]: I0228 13:41:28.484466 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdxzw\" (UniqueName: \"kubernetes.io/projected/14af1d5e-f67c-4675-afb4-4aff4b78237c-kube-api-access-vdxzw\") pod \"redhat-marketplace-bjhzq\" (UID: \"14af1d5e-f67c-4675-afb4-4aff4b78237c\") " pod="openshift-marketplace/redhat-marketplace-bjhzq" Feb 28 13:41:28 crc kubenswrapper[4897]: I0228 13:41:28.595663 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bjhzq" Feb 28 13:41:29 crc kubenswrapper[4897]: I0228 13:41:29.054621 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bjhzq"] Feb 28 13:41:29 crc kubenswrapper[4897]: W0228 13:41:29.063013 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14af1d5e_f67c_4675_afb4_4aff4b78237c.slice/crio-2c1b7e551e717f20d4943bd55eb4848b43081d758370c6a77422c644067eb4df WatchSource:0}: Error finding container 2c1b7e551e717f20d4943bd55eb4848b43081d758370c6a77422c644067eb4df: Status 404 returned error can't find the container with id 2c1b7e551e717f20d4943bd55eb4848b43081d758370c6a77422c644067eb4df Feb 28 13:41:29 crc kubenswrapper[4897]: I0228 13:41:29.423545 4897 generic.go:334] "Generic (PLEG): container finished" podID="14af1d5e-f67c-4675-afb4-4aff4b78237c" containerID="3d7d4dc4862685c3fa0eeb072a8af3720a3e83137f697b0afb9516cd9e8b6961" exitCode=0 Feb 28 13:41:29 crc kubenswrapper[4897]: I0228 13:41:29.423590 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-bjhzq" event={"ID":"14af1d5e-f67c-4675-afb4-4aff4b78237c","Type":"ContainerDied","Data":"3d7d4dc4862685c3fa0eeb072a8af3720a3e83137f697b0afb9516cd9e8b6961"} Feb 28 13:41:29 crc kubenswrapper[4897]: I0228 13:41:29.423617 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjhzq" event={"ID":"14af1d5e-f67c-4675-afb4-4aff4b78237c","Type":"ContainerStarted","Data":"2c1b7e551e717f20d4943bd55eb4848b43081d758370c6a77422c644067eb4df"} Feb 28 13:41:30 crc kubenswrapper[4897]: E0228 13:41:30.019469 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=4c1235c462902350224ff1dfc90bd9f47b4fe1b4d05c3d5fc542c5c8e38ee80a/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 28 13:41:30 crc kubenswrapper[4897]: E0228 13:41:30.019993 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vdxzw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-bjhzq_openshift-marketplace(14af1d5e-f67c-4675-afb4-4aff4b78237c): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=4c1235c462902350224ff1dfc90bd9f47b4fe1b4d05c3d5fc542c5c8e38ee80a/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 13:41:30 crc kubenswrapper[4897]: E0228 13:41:30.021255 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=4c1235c462902350224ff1dfc90bd9f47b4fe1b4d05c3d5fc542c5c8e38ee80a/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-bjhzq" podUID="14af1d5e-f67c-4675-afb4-4aff4b78237c" Feb 28 13:41:30 crc kubenswrapper[4897]: E0228 13:41:30.435718 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bjhzq" podUID="14af1d5e-f67c-4675-afb4-4aff4b78237c" Feb 28 13:41:32 crc kubenswrapper[4897]: E0228 13:41:32.069695 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)" image="registry.redhat.io/ubi9/httpd-24:latest" Feb 28 13:41:32 crc kubenswrapper[4897]: E0228 13:41:32.070247 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7hjth,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(746916d9-ca42-480b-9aa7-7e1fe9803900): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 13:41:32 crc kubenswrapper[4897]: E0228 13:41:32.071483 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)\"" pod="openstack/ceilometer-0" podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" Feb 28 13:41:32 crc kubenswrapper[4897]: E0228 13:41:32.351416 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 28 13:41:32 crc kubenswrapper[4897]: E0228 13:41:32.351579 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 13:41:32 crc kubenswrapper[4897]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 28 13:41:32 crc kubenswrapper[4897]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zpqzd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29538100-v4j6s_openshift-infra(b6642318-7bfd-49f2-86e3-0fe4a7ec2709): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 28 13:41:32 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 13:41:32 crc kubenswrapper[4897]: E0228 13:41:32.352845 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29538100-v4j6s" podUID="b6642318-7bfd-49f2-86e3-0fe4a7ec2709" Feb 28 13:41:41 crc kubenswrapper[4897]: E0228 13:41:41.461874 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:41:43 crc kubenswrapper[4897]: E0228 13:41:43.032981 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=4c1235c462902350224ff1dfc90bd9f47b4fe1b4d05c3d5fc542c5c8e38ee80a/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 28 13:41:43 crc kubenswrapper[4897]: E0228 13:41:43.033552 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vdxzw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-bjhzq_openshift-marketplace(14af1d5e-f67c-4675-afb4-4aff4b78237c): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=4c1235c462902350224ff1dfc90bd9f47b4fe1b4d05c3d5fc542c5c8e38ee80a/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 13:41:43 crc kubenswrapper[4897]: E0228 13:41:43.034969 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=4c1235c462902350224ff1dfc90bd9f47b4fe1b4d05c3d5fc542c5c8e38ee80a/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-bjhzq" podUID="14af1d5e-f67c-4675-afb4-4aff4b78237c" Feb 28 13:41:46 crc kubenswrapper[4897]: E0228 13:41:46.472389 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538100-v4j6s" podUID="b6642318-7bfd-49f2-86e3-0fe4a7ec2709" Feb 28 13:41:46 crc kubenswrapper[4897]: E0228 13:41:46.472740 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" Feb 28 13:41:53 crc kubenswrapper[4897]: E0228 13:41:53.458685 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:41:57 crc kubenswrapper[4897]: E0228 13:41:57.464460 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bjhzq" podUID="14af1d5e-f67c-4675-afb4-4aff4b78237c" Feb 28 13:42:00 crc kubenswrapper[4897]: I0228 13:42:00.183503 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538102-l46vx"] Feb 
28 13:42:00 crc kubenswrapper[4897]: I0228 13:42:00.187548 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538102-l46vx" Feb 28 13:42:00 crc kubenswrapper[4897]: I0228 13:42:00.206679 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538102-l46vx"] Feb 28 13:42:00 crc kubenswrapper[4897]: I0228 13:42:00.289401 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67lnl\" (UniqueName: \"kubernetes.io/projected/c0fa54bd-caa0-4a38-a45b-a5e6646e3843-kube-api-access-67lnl\") pod \"auto-csr-approver-29538102-l46vx\" (UID: \"c0fa54bd-caa0-4a38-a45b-a5e6646e3843\") " pod="openshift-infra/auto-csr-approver-29538102-l46vx" Feb 28 13:42:00 crc kubenswrapper[4897]: I0228 13:42:00.391796 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67lnl\" (UniqueName: \"kubernetes.io/projected/c0fa54bd-caa0-4a38-a45b-a5e6646e3843-kube-api-access-67lnl\") pod \"auto-csr-approver-29538102-l46vx\" (UID: \"c0fa54bd-caa0-4a38-a45b-a5e6646e3843\") " pod="openshift-infra/auto-csr-approver-29538102-l46vx" Feb 28 13:42:00 crc kubenswrapper[4897]: I0228 13:42:00.421142 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67lnl\" (UniqueName: \"kubernetes.io/projected/c0fa54bd-caa0-4a38-a45b-a5e6646e3843-kube-api-access-67lnl\") pod \"auto-csr-approver-29538102-l46vx\" (UID: \"c0fa54bd-caa0-4a38-a45b-a5e6646e3843\") " pod="openshift-infra/auto-csr-approver-29538102-l46vx" Feb 28 13:42:00 crc kubenswrapper[4897]: E0228 13:42:00.463660 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" Feb 28 13:42:00 crc 
kubenswrapper[4897]: I0228 13:42:00.518538 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538102-l46vx" Feb 28 13:42:01 crc kubenswrapper[4897]: W0228 13:42:01.064800 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0fa54bd_caa0_4a38_a45b_a5e6646e3843.slice/crio-32c34565f8ef1d2e83d3e5599da55b1a6f7b2bda4a278a44f03d93d905d9ccdb WatchSource:0}: Error finding container 32c34565f8ef1d2e83d3e5599da55b1a6f7b2bda4a278a44f03d93d905d9ccdb: Status 404 returned error can't find the container with id 32c34565f8ef1d2e83d3e5599da55b1a6f7b2bda4a278a44f03d93d905d9ccdb Feb 28 13:42:01 crc kubenswrapper[4897]: I0228 13:42:01.073887 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538102-l46vx"] Feb 28 13:42:01 crc kubenswrapper[4897]: E0228 13:42:01.459600 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538100-v4j6s" podUID="b6642318-7bfd-49f2-86e3-0fe4a7ec2709" Feb 28 13:42:01 crc kubenswrapper[4897]: I0228 13:42:01.815828 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538102-l46vx" event={"ID":"c0fa54bd-caa0-4a38-a45b-a5e6646e3843","Type":"ContainerStarted","Data":"32c34565f8ef1d2e83d3e5599da55b1a6f7b2bda4a278a44f03d93d905d9ccdb"} Feb 28 13:42:02 crc kubenswrapper[4897]: E0228 13:42:02.076754 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal 
Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 28 13:42:02 crc kubenswrapper[4897]: E0228 13:42:02.077003 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 13:42:02 crc kubenswrapper[4897]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 28 13:42:02 crc kubenswrapper[4897]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-67lnl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29538102-l46vx_openshift-infra(c0fa54bd-caa0-4a38-a45b-a5e6646e3843): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 28 13:42:02 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 13:42:02 crc kubenswrapper[4897]: E0228 13:42:02.078274 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29538102-l46vx" podUID="c0fa54bd-caa0-4a38-a45b-a5e6646e3843" Feb 28 13:42:02 crc kubenswrapper[4897]: E0228 13:42:02.831274 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538102-l46vx" podUID="c0fa54bd-caa0-4a38-a45b-a5e6646e3843" Feb 28 13:42:03 crc kubenswrapper[4897]: I0228 13:42:03.371128 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 13:42:03 crc kubenswrapper[4897]: I0228 13:42:03.371232 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 13:42:03 crc kubenswrapper[4897]: E0228 13:42:03.631893 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/thanos-rhel9@sha256=b77218de6528d52542abddf9fb3faececdf1a3e47987ac740ab00468d461a60b/signature-4: status 500 (Internal Server Error)" image="registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34" 
Feb 28 13:42:03 crc kubenswrapper[4897]: E0228 13:42:03.632103 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:thanos-sidecar,Image:registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34,Command:[],Args:[sidecar --prometheus.url=http://localhost:9090/ --grpc-address=:10901 --http-address=:10902 --log.level=info --prometheus.http-client-file=/etc/thanos/config/prometheus.http-client-file.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:10902,Protocol:TCP,HostIP:,},ContainerPort{Name:grpc,HostPort:0,ContainerPort:10901,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:thanos-prometheus-http-client-file,ReadOnly:false,MountPath:/etc/thanos/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fr856,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6): ErrImagePull: copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/thanos-rhel9@sha256=b77218de6528d52542abddf9fb3faececdf1a3e47987ac740ab00468d461a60b/signature-4: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 13:42:03 crc kubenswrapper[4897]: E0228 13:42:03.633808 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/thanos-rhel9@sha256=b77218de6528d52542abddf9fb3faececdf1a3e47987ac740ab00468d461a60b/signature-4: status 500 (Internal Server Error)\"" pod="openstack/prometheus-metric-storage-0" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" Feb 28 13:42:05 crc kubenswrapper[4897]: E0228 13:42:05.459013 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:42:10 crc kubenswrapper[4897]: E0228 13:42:10.979156 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=4c1235c462902350224ff1dfc90bd9f47b4fe1b4d05c3d5fc542c5c8e38ee80a/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 28 13:42:10 crc kubenswrapper[4897]: E0228 13:42:10.980029 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vdxzw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-bjhzq_openshift-marketplace(14af1d5e-f67c-4675-afb4-4aff4b78237c): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=4c1235c462902350224ff1dfc90bd9f47b4fe1b4d05c3d5fc542c5c8e38ee80a/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 13:42:10 crc 
kubenswrapper[4897]: E0228 13:42:10.981420 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=4c1235c462902350224ff1dfc90bd9f47b4fe1b4d05c3d5fc542c5c8e38ee80a/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-bjhzq" podUID="14af1d5e-f67c-4675-afb4-4aff4b78237c" Feb 28 13:42:14 crc kubenswrapper[4897]: E0228 13:42:14.274792 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)" image="registry.redhat.io/ubi9/httpd-24:latest" Feb 28 13:42:14 crc kubenswrapper[4897]: E0228 13:42:14.275986 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7hjth,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(746916d9-ca42-480b-9aa7-7e1fe9803900): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 13:42:14 crc kubenswrapper[4897]: E0228 13:42:14.277808 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)\"" pod="openstack/ceilometer-0" podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" Feb 28 13:42:14 crc kubenswrapper[4897]: E0228 13:42:14.459630 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" 
pod="openshift-infra/auto-csr-approver-29538100-v4j6s" podUID="b6642318-7bfd-49f2-86e3-0fe4a7ec2709" Feb 28 13:42:14 crc kubenswrapper[4897]: E0228 13:42:14.460421 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" Feb 28 13:42:14 crc kubenswrapper[4897]: E0228 13:42:14.721221 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 28 13:42:14 crc kubenswrapper[4897]: E0228 13:42:14.721457 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 13:42:14 crc kubenswrapper[4897]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 28 13:42:14 crc kubenswrapper[4897]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-67lnl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29538102-l46vx_openshift-infra(c0fa54bd-caa0-4a38-a45b-a5e6646e3843): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 28 13:42:14 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 13:42:14 crc kubenswrapper[4897]: E0228 13:42:14.722580 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29538102-l46vx" podUID="c0fa54bd-caa0-4a38-a45b-a5e6646e3843" Feb 28 13:42:16 crc kubenswrapper[4897]: E0228 13:42:16.475925 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:42:24 crc kubenswrapper[4897]: E0228 13:42:24.463131 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bjhzq" podUID="14af1d5e-f67c-4675-afb4-4aff4b78237c" Feb 28 13:42:25 crc kubenswrapper[4897]: E0228 13:42:25.458761 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538102-l46vx" podUID="c0fa54bd-caa0-4a38-a45b-a5e6646e3843" Feb 28 13:42:26 crc kubenswrapper[4897]: E0228 13:42:26.478705 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538100-v4j6s" podUID="b6642318-7bfd-49f2-86e3-0fe4a7ec2709" Feb 28 13:42:27 crc kubenswrapper[4897]: E0228 13:42:27.459673 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" Feb 28 13:42:29 crc kubenswrapper[4897]: E0228 13:42:29.458302 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" Feb 28 13:42:30 crc kubenswrapper[4897]: E0228 13:42:30.459544 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:42:33 crc kubenswrapper[4897]: I0228 13:42:33.371124 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 13:42:33 crc kubenswrapper[4897]: I0228 13:42:33.371685 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 13:42:36 crc kubenswrapper[4897]: E0228 13:42:36.468624 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bjhzq" podUID="14af1d5e-f67c-4675-afb4-4aff4b78237c" Feb 28 13:42:37 crc kubenswrapper[4897]: E0228 13:42:37.458193 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538100-v4j6s" 
podUID="b6642318-7bfd-49f2-86e3-0fe4a7ec2709" Feb 28 13:42:37 crc kubenswrapper[4897]: E0228 13:42:37.920471 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 28 13:42:37 crc kubenswrapper[4897]: E0228 13:42:37.920788 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 13:42:37 crc kubenswrapper[4897]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 28 13:42:37 crc kubenswrapper[4897]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-67lnl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29538102-l46vx_openshift-infra(c0fa54bd-caa0-4a38-a45b-a5e6646e3843): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 28 13:42:37 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 13:42:37 crc kubenswrapper[4897]: E0228 13:42:37.922955 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29538102-l46vx" podUID="c0fa54bd-caa0-4a38-a45b-a5e6646e3843" Feb 28 13:42:41 crc kubenswrapper[4897]: E0228 13:42:41.459555 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" Feb 28 13:42:42 crc kubenswrapper[4897]: I0228 13:42:42.011931 4897 scope.go:117] "RemoveContainer" containerID="e9876136638b45115c2b65012255b207b4c851ff5fc56e4393637fa75ed81366" Feb 28 13:42:42 crc kubenswrapper[4897]: I0228 13:42:42.051902 4897 scope.go:117] "RemoveContainer" containerID="9dee223eefb6b4d79a339b608d6a8789533eeff1e8627be46d2373a28f22b7b8" Feb 28 13:42:42 crc kubenswrapper[4897]: E0228 13:42:42.458549 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" Feb 28 13:42:42 crc 
kubenswrapper[4897]: E0228 13:42:42.459147 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:42:50 crc kubenswrapper[4897]: E0228 13:42:50.465365 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538102-l46vx" podUID="c0fa54bd-caa0-4a38-a45b-a5e6646e3843" Feb 28 13:42:52 crc kubenswrapper[4897]: E0228 13:42:52.125362 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=4c1235c462902350224ff1dfc90bd9f47b4fe1b4d05c3d5fc542c5c8e38ee80a/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 28 13:42:52 crc kubenswrapper[4897]: E0228 13:42:52.125578 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vdxzw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-bjhzq_openshift-marketplace(14af1d5e-f67c-4675-afb4-4aff4b78237c): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=4c1235c462902350224ff1dfc90bd9f47b4fe1b4d05c3d5fc542c5c8e38ee80a/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 13:42:52 crc kubenswrapper[4897]: E0228 13:42:52.126900 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=4c1235c462902350224ff1dfc90bd9f47b4fe1b4d05c3d5fc542c5c8e38ee80a/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-bjhzq" podUID="14af1d5e-f67c-4675-afb4-4aff4b78237c" Feb 28 13:42:53 crc kubenswrapper[4897]: E0228 13:42:53.405014 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 28 13:42:53 crc kubenswrapper[4897]: E0228 13:42:53.405519 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 13:42:53 crc kubenswrapper[4897]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 28 13:42:53 crc kubenswrapper[4897]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zpqzd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
auto-csr-approver-29538100-v4j6s_openshift-infra(b6642318-7bfd-49f2-86e3-0fe4a7ec2709): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 28 13:42:53 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 13:42:53 crc kubenswrapper[4897]: E0228 13:42:53.406761 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29538100-v4j6s" podUID="b6642318-7bfd-49f2-86e3-0fe4a7ec2709" Feb 28 13:42:55 crc kubenswrapper[4897]: E0228 13:42:55.460505 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" Feb 28 13:42:55 crc kubenswrapper[4897]: E0228 13:42:55.461386 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" Feb 28 13:42:56 crc kubenswrapper[4897]: E0228 13:42:56.480501 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with 
ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:43:02 crc kubenswrapper[4897]: E0228 13:43:02.459374 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538102-l46vx" podUID="c0fa54bd-caa0-4a38-a45b-a5e6646e3843" Feb 28 13:43:03 crc kubenswrapper[4897]: I0228 13:43:03.371427 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 13:43:03 crc kubenswrapper[4897]: I0228 13:43:03.371798 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 13:43:03 crc kubenswrapper[4897]: I0228 13:43:03.371871 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brq22" Feb 28 13:43:03 crc kubenswrapper[4897]: I0228 13:43:03.372990 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aac243b40f8cd522c66dee8379dbaaf158caf96690e017e4b821fa9af1817d1c"} pod="openshift-machine-config-operator/machine-config-daemon-brq22" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 13:43:03 crc 
kubenswrapper[4897]: I0228 13:43:03.373113 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" containerID="cri-o://aac243b40f8cd522c66dee8379dbaaf158caf96690e017e4b821fa9af1817d1c" gracePeriod=600 Feb 28 13:43:03 crc kubenswrapper[4897]: E0228 13:43:03.504004 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:43:03 crc kubenswrapper[4897]: I0228 13:43:03.636097 4897 generic.go:334] "Generic (PLEG): container finished" podID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerID="aac243b40f8cd522c66dee8379dbaaf158caf96690e017e4b821fa9af1817d1c" exitCode=0 Feb 28 13:43:03 crc kubenswrapper[4897]: I0228 13:43:03.636162 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerDied","Data":"aac243b40f8cd522c66dee8379dbaaf158caf96690e017e4b821fa9af1817d1c"} Feb 28 13:43:03 crc kubenswrapper[4897]: I0228 13:43:03.636221 4897 scope.go:117] "RemoveContainer" containerID="2251fe7bbe6b22484b56b41016e482aae198972b32b2a8de419f213131379efa" Feb 28 13:43:03 crc kubenswrapper[4897]: I0228 13:43:03.637377 4897 scope.go:117] "RemoveContainer" containerID="aac243b40f8cd522c66dee8379dbaaf158caf96690e017e4b821fa9af1817d1c" Feb 28 13:43:03 crc kubenswrapper[4897]: E0228 13:43:03.637952 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:43:04 crc kubenswrapper[4897]: E0228 13:43:04.458920 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bjhzq" podUID="14af1d5e-f67c-4675-afb4-4aff4b78237c" Feb 28 13:43:06 crc kubenswrapper[4897]: E0228 13:43:06.471932 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" Feb 28 13:43:07 crc kubenswrapper[4897]: E0228 13:43:07.458017 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" Feb 28 13:43:07 crc kubenswrapper[4897]: E0228 13:43:07.459656 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538100-v4j6s" podUID="b6642318-7bfd-49f2-86e3-0fe4a7ec2709" Feb 28 13:43:11 crc kubenswrapper[4897]: E0228 13:43:11.460033 4897 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:43:14 crc kubenswrapper[4897]: E0228 13:43:14.459886 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538102-l46vx" podUID="c0fa54bd-caa0-4a38-a45b-a5e6646e3843" Feb 28 13:43:16 crc kubenswrapper[4897]: I0228 13:43:16.478415 4897 scope.go:117] "RemoveContainer" containerID="aac243b40f8cd522c66dee8379dbaaf158caf96690e017e4b821fa9af1817d1c" Feb 28 13:43:16 crc kubenswrapper[4897]: E0228 13:43:16.480743 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:43:18 crc kubenswrapper[4897]: E0228 13:43:18.462562 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bjhzq" podUID="14af1d5e-f67c-4675-afb4-4aff4b78237c" Feb 28 13:43:19 crc kubenswrapper[4897]: E0228 13:43:19.461003 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" Feb 28 13:43:19 crc kubenswrapper[4897]: E0228 13:43:19.461715 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538100-v4j6s" podUID="b6642318-7bfd-49f2-86e3-0fe4a7ec2709" Feb 28 13:43:22 crc kubenswrapper[4897]: E0228 13:43:22.459235 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" Feb 28 13:43:25 crc kubenswrapper[4897]: I0228 13:43:25.295760 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rt5s7"] Feb 28 13:43:25 crc kubenswrapper[4897]: I0228 13:43:25.300784 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rt5s7" Feb 28 13:43:25 crc kubenswrapper[4897]: I0228 13:43:25.322294 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rt5s7"] Feb 28 13:43:25 crc kubenswrapper[4897]: I0228 13:43:25.356717 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7-utilities\") pod \"certified-operators-rt5s7\" (UID: \"b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7\") " pod="openshift-marketplace/certified-operators-rt5s7" Feb 28 13:43:25 crc kubenswrapper[4897]: I0228 13:43:25.357076 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7-catalog-content\") pod \"certified-operators-rt5s7\" (UID: \"b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7\") " pod="openshift-marketplace/certified-operators-rt5s7" Feb 28 13:43:25 crc kubenswrapper[4897]: I0228 13:43:25.357148 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gxvr\" (UniqueName: \"kubernetes.io/projected/b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7-kube-api-access-5gxvr\") pod \"certified-operators-rt5s7\" (UID: \"b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7\") " pod="openshift-marketplace/certified-operators-rt5s7" Feb 28 13:43:25 crc kubenswrapper[4897]: I0228 13:43:25.458179 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7-utilities\") pod \"certified-operators-rt5s7\" (UID: \"b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7\") " pod="openshift-marketplace/certified-operators-rt5s7" Feb 28 13:43:25 crc kubenswrapper[4897]: I0228 13:43:25.458256 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7-catalog-content\") pod \"certified-operators-rt5s7\" (UID: \"b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7\") " pod="openshift-marketplace/certified-operators-rt5s7" Feb 28 13:43:25 crc kubenswrapper[4897]: I0228 13:43:25.458349 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gxvr\" (UniqueName: \"kubernetes.io/projected/b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7-kube-api-access-5gxvr\") pod \"certified-operators-rt5s7\" (UID: \"b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7\") " pod="openshift-marketplace/certified-operators-rt5s7" Feb 28 13:43:25 crc kubenswrapper[4897]: I0228 13:43:25.458695 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7-utilities\") pod \"certified-operators-rt5s7\" (UID: \"b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7\") " pod="openshift-marketplace/certified-operators-rt5s7" Feb 28 13:43:25 crc kubenswrapper[4897]: I0228 13:43:25.458792 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7-catalog-content\") pod \"certified-operators-rt5s7\" (UID: \"b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7\") " pod="openshift-marketplace/certified-operators-rt5s7" Feb 28 13:43:25 crc kubenswrapper[4897]: I0228 13:43:25.487498 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gxvr\" (UniqueName: \"kubernetes.io/projected/b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7-kube-api-access-5gxvr\") pod \"certified-operators-rt5s7\" (UID: \"b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7\") " pod="openshift-marketplace/certified-operators-rt5s7" Feb 28 13:43:25 crc kubenswrapper[4897]: I0228 13:43:25.645727 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rt5s7" Feb 28 13:43:26 crc kubenswrapper[4897]: I0228 13:43:26.184194 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rt5s7"] Feb 28 13:43:26 crc kubenswrapper[4897]: E0228 13:43:26.479102 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:43:26 crc kubenswrapper[4897]: I0228 13:43:26.930885 4897 generic.go:334] "Generic (PLEG): container finished" podID="b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7" containerID="58729c3e0ee7538f73d5471499d0a26fd10586a91061aca767487cbc01346a57" exitCode=0 Feb 28 13:43:26 crc kubenswrapper[4897]: I0228 13:43:26.930959 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rt5s7" event={"ID":"b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7","Type":"ContainerDied","Data":"58729c3e0ee7538f73d5471499d0a26fd10586a91061aca767487cbc01346a57"} Feb 28 13:43:26 crc kubenswrapper[4897]: I0228 13:43:26.931024 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rt5s7" event={"ID":"b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7","Type":"ContainerStarted","Data":"b9701f66bb9eab7532e952e32190c1ac5bd9167c5510380f1d83c35696dd9eda"} Feb 28 13:43:27 crc kubenswrapper[4897]: E0228 13:43:27.457814 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=c649bff3a95aba662bcce2caae28c3708550f8f13ed49f0891408168e71420e7/signature-2: status 500 (Internal Server Error)" 
image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 28 13:43:27 crc kubenswrapper[4897]: E0228 13:43:27.458226 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5gxvr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-rt5s7_openshift-marketplace(b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=c649bff3a95aba662bcce2caae28c3708550f8f13ed49f0891408168e71420e7/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 13:43:27 crc kubenswrapper[4897]: E0228 13:43:27.459503 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=c649bff3a95aba662bcce2caae28c3708550f8f13ed49f0891408168e71420e7/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/certified-operators-rt5s7" podUID="b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7" Feb 28 13:43:27 crc kubenswrapper[4897]: E0228 13:43:27.942058 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rt5s7" podUID="b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7" Feb 28 13:43:29 crc kubenswrapper[4897]: E0228 13:43:29.292795 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 28 13:43:29 crc kubenswrapper[4897]: E0228 13:43:29.293233 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 13:43:29 crc kubenswrapper[4897]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not 
.status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 28 13:43:29 crc kubenswrapper[4897]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-67lnl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29538102-l46vx_openshift-infra(c0fa54bd-caa0-4a38-a45b-a5e6646e3843): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 28 13:43:29 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 13:43:29 crc kubenswrapper[4897]: E0228 13:43:29.294422 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29538102-l46vx" podUID="c0fa54bd-caa0-4a38-a45b-a5e6646e3843" Feb 28 13:43:30 crc kubenswrapper[4897]: I0228 13:43:30.457190 4897 scope.go:117] "RemoveContainer" 
containerID="aac243b40f8cd522c66dee8379dbaaf158caf96690e017e4b821fa9af1817d1c" Feb 28 13:43:30 crc kubenswrapper[4897]: E0228 13:43:30.457712 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:43:30 crc kubenswrapper[4897]: E0228 13:43:30.459607 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" Feb 28 13:43:32 crc kubenswrapper[4897]: E0228 13:43:32.470245 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538100-v4j6s" podUID="b6642318-7bfd-49f2-86e3-0fe4a7ec2709" Feb 28 13:43:33 crc kubenswrapper[4897]: E0228 13:43:33.460050 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bjhzq" podUID="14af1d5e-f67c-4675-afb4-4aff4b78237c" Feb 28 13:43:35 crc kubenswrapper[4897]: E0228 13:43:35.449677 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0ed3d2c_1f54_445d_8e0d_1908ac9e03c7.slice/crio-conmon-58729c3e0ee7538f73d5471499d0a26fd10586a91061aca767487cbc01346a57.scope\": RecentStats: unable to find data in memory cache]" Feb 28 13:43:37 crc kubenswrapper[4897]: E0228 13:43:37.458632 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:43:38 crc kubenswrapper[4897]: E0228 13:43:38.971629 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)" image="registry.redhat.io/ubi9/httpd-24:latest" Feb 28 13:43:38 crc kubenswrapper[4897]: E0228 13:43:38.972374 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7hjth,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(746916d9-ca42-480b-9aa7-7e1fe9803900): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 13:43:38 crc kubenswrapper[4897]: E0228 13:43:38.973711 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)\"" pod="openstack/ceilometer-0" podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" Feb 28 13:43:42 crc kubenswrapper[4897]: I0228 13:43:42.210834 4897 scope.go:117] "RemoveContainer" containerID="0cf5b56e8358f322115ad41c5d78dfa52a9b2968416efd9dec0619ec14632f49" Feb 28 13:43:42 crc kubenswrapper[4897]: I0228 13:43:42.240883 4897 scope.go:117] "RemoveContainer" 
containerID="56c1675f0c4a6d9defb4014225da8424a6ebe483c3a906ece2fac996a4dc08e7" Feb 28 13:43:43 crc kubenswrapper[4897]: E0228 13:43:43.458604 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" Feb 28 13:43:44 crc kubenswrapper[4897]: E0228 13:43:44.457491 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538102-l46vx" podUID="c0fa54bd-caa0-4a38-a45b-a5e6646e3843" Feb 28 13:43:45 crc kubenswrapper[4897]: I0228 13:43:45.458806 4897 scope.go:117] "RemoveContainer" containerID="aac243b40f8cd522c66dee8379dbaaf158caf96690e017e4b821fa9af1817d1c" Feb 28 13:43:45 crc kubenswrapper[4897]: E0228 13:43:45.459196 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538100-v4j6s" podUID="b6642318-7bfd-49f2-86e3-0fe4a7ec2709" Feb 28 13:43:45 crc kubenswrapper[4897]: E0228 13:43:45.459391 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:43:45 crc kubenswrapper[4897]: E0228 
13:43:45.460413 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bjhzq" podUID="14af1d5e-f67c-4675-afb4-4aff4b78237c" Feb 28 13:43:45 crc kubenswrapper[4897]: E0228 13:43:45.790923 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0ed3d2c_1f54_445d_8e0d_1908ac9e03c7.slice/crio-conmon-58729c3e0ee7538f73d5471499d0a26fd10586a91061aca767487cbc01346a57.scope\": RecentStats: unable to find data in memory cache]" Feb 28 13:43:49 crc kubenswrapper[4897]: E0228 13:43:49.459244 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:43:52 crc kubenswrapper[4897]: E0228 13:43:52.459262 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" Feb 28 13:43:54 crc kubenswrapper[4897]: I0228 13:43:54.268951 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rt5s7" event={"ID":"b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7","Type":"ContainerStarted","Data":"6256d035d5ee0d202c2e679bc53c4ed05001f1c05d39a233a8892bdd6e62149a"} Feb 28 13:43:55 crc kubenswrapper[4897]: I0228 13:43:55.281921 4897 generic.go:334] "Generic (PLEG): container finished" podID="b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7" 
containerID="6256d035d5ee0d202c2e679bc53c4ed05001f1c05d39a233a8892bdd6e62149a" exitCode=0 Feb 28 13:43:55 crc kubenswrapper[4897]: I0228 13:43:55.282010 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rt5s7" event={"ID":"b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7","Type":"ContainerDied","Data":"6256d035d5ee0d202c2e679bc53c4ed05001f1c05d39a233a8892bdd6e62149a"} Feb 28 13:43:55 crc kubenswrapper[4897]: E0228 13:43:55.458214 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538102-l46vx" podUID="c0fa54bd-caa0-4a38-a45b-a5e6646e3843" Feb 28 13:43:56 crc kubenswrapper[4897]: E0228 13:43:56.123242 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0ed3d2c_1f54_445d_8e0d_1908ac9e03c7.slice/crio-conmon-58729c3e0ee7538f73d5471499d0a26fd10586a91061aca767487cbc01346a57.scope\": RecentStats: unable to find data in memory cache]" Feb 28 13:43:56 crc kubenswrapper[4897]: I0228 13:43:56.294912 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rt5s7" event={"ID":"b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7","Type":"ContainerStarted","Data":"a3362b28aa9857a1561a084d12121b2e25b34f24c1b0fd4963cf8ef81c04aa79"} Feb 28 13:43:56 crc kubenswrapper[4897]: I0228 13:43:56.335906 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rt5s7" podStartSLOduration=2.405210713 podStartE2EDuration="31.335884219s" podCreationTimestamp="2026-02-28 13:43:25 +0000 UTC" firstStartedPulling="2026-02-28 13:43:26.933413342 +0000 UTC m=+1621.175734009" lastFinishedPulling="2026-02-28 13:43:55.864086848 +0000 UTC m=+1650.106407515" 
observedRunningTime="2026-02-28 13:43:56.321291359 +0000 UTC m=+1650.563612016" watchObservedRunningTime="2026-02-28 13:43:56.335884219 +0000 UTC m=+1650.578204886" Feb 28 13:43:58 crc kubenswrapper[4897]: E0228 13:43:58.457916 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" Feb 28 13:43:58 crc kubenswrapper[4897]: E0228 13:43:58.458044 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bjhzq" podUID="14af1d5e-f67c-4675-afb4-4aff4b78237c" Feb 28 13:44:00 crc kubenswrapper[4897]: I0228 13:44:00.145844 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538104-knc25"] Feb 28 13:44:00 crc kubenswrapper[4897]: I0228 13:44:00.147461 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538104-knc25" Feb 28 13:44:00 crc kubenswrapper[4897]: I0228 13:44:00.154975 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538104-knc25"] Feb 28 13:44:00 crc kubenswrapper[4897]: I0228 13:44:00.262001 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlrnk\" (UniqueName: \"kubernetes.io/projected/68beed07-352c-43c3-9750-f5e63fcab99a-kube-api-access-hlrnk\") pod \"auto-csr-approver-29538104-knc25\" (UID: \"68beed07-352c-43c3-9750-f5e63fcab99a\") " pod="openshift-infra/auto-csr-approver-29538104-knc25" Feb 28 13:44:00 crc kubenswrapper[4897]: I0228 13:44:00.363647 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlrnk\" (UniqueName: \"kubernetes.io/projected/68beed07-352c-43c3-9750-f5e63fcab99a-kube-api-access-hlrnk\") pod \"auto-csr-approver-29538104-knc25\" (UID: \"68beed07-352c-43c3-9750-f5e63fcab99a\") " pod="openshift-infra/auto-csr-approver-29538104-knc25" Feb 28 13:44:00 crc kubenswrapper[4897]: I0228 13:44:00.389627 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlrnk\" (UniqueName: \"kubernetes.io/projected/68beed07-352c-43c3-9750-f5e63fcab99a-kube-api-access-hlrnk\") pod \"auto-csr-approver-29538104-knc25\" (UID: \"68beed07-352c-43c3-9750-f5e63fcab99a\") " pod="openshift-infra/auto-csr-approver-29538104-knc25" Feb 28 13:44:00 crc kubenswrapper[4897]: I0228 13:44:00.457834 4897 scope.go:117] "RemoveContainer" containerID="aac243b40f8cd522c66dee8379dbaaf158caf96690e017e4b821fa9af1817d1c" Feb 28 13:44:00 crc kubenswrapper[4897]: E0228 13:44:00.458206 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:44:00 crc kubenswrapper[4897]: E0228 13:44:00.462124 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:44:00 crc kubenswrapper[4897]: E0228 13:44:00.462150 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538100-v4j6s" podUID="b6642318-7bfd-49f2-86e3-0fe4a7ec2709" Feb 28 13:44:00 crc kubenswrapper[4897]: I0228 13:44:00.500952 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538104-knc25" Feb 28 13:44:00 crc kubenswrapper[4897]: I0228 13:44:00.978909 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538104-knc25"] Feb 28 13:44:00 crc kubenswrapper[4897]: W0228 13:44:00.982482 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68beed07_352c_43c3_9750_f5e63fcab99a.slice/crio-4a16cf11c335907dccedc29c1b848cce8f49704c25ee4cbb45e97a9494441a31 WatchSource:0}: Error finding container 4a16cf11c335907dccedc29c1b848cce8f49704c25ee4cbb45e97a9494441a31: Status 404 returned error can't find the container with id 4a16cf11c335907dccedc29c1b848cce8f49704c25ee4cbb45e97a9494441a31 Feb 28 13:44:01 crc kubenswrapper[4897]: I0228 13:44:01.346814 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538104-knc25" event={"ID":"68beed07-352c-43c3-9750-f5e63fcab99a","Type":"ContainerStarted","Data":"4a16cf11c335907dccedc29c1b848cce8f49704c25ee4cbb45e97a9494441a31"} Feb 28 13:44:01 crc kubenswrapper[4897]: E0228 13:44:01.957535 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 28 13:44:01 crc kubenswrapper[4897]: E0228 13:44:01.958019 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 13:44:01 crc kubenswrapper[4897]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm 
certificate approve Feb 28 13:44:01 crc kubenswrapper[4897]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hlrnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29538104-knc25_openshift-infra(68beed07-352c-43c3-9750-f5e63fcab99a): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 28 13:44:01 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 13:44:01 crc kubenswrapper[4897]: E0228 13:44:01.959580 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29538104-knc25" podUID="68beed07-352c-43c3-9750-f5e63fcab99a" Feb 28 13:44:02 crc kubenswrapper[4897]: E0228 13:44:02.361484 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538104-knc25" podUID="68beed07-352c-43c3-9750-f5e63fcab99a" Feb 28 13:44:05 crc kubenswrapper[4897]: I0228 13:44:05.646696 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rt5s7" Feb 28 13:44:05 crc kubenswrapper[4897]: I0228 13:44:05.648730 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rt5s7" Feb 28 13:44:05 crc kubenswrapper[4897]: I0228 13:44:05.732956 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rt5s7" Feb 28 13:44:06 crc kubenswrapper[4897]: E0228 13:44:06.430808 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0ed3d2c_1f54_445d_8e0d_1908ac9e03c7.slice/crio-conmon-58729c3e0ee7538f73d5471499d0a26fd10586a91061aca767487cbc01346a57.scope\": RecentStats: unable to find data in memory cache]" Feb 28 13:44:06 crc kubenswrapper[4897]: E0228 13:44:06.459705 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538102-l46vx" podUID="c0fa54bd-caa0-4a38-a45b-a5e6646e3843" Feb 28 13:44:06 crc kubenswrapper[4897]: E0228 13:44:06.459788 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" Feb 28 13:44:06 crc kubenswrapper[4897]: I0228 13:44:06.499989 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-rt5s7" Feb 28 13:44:06 crc kubenswrapper[4897]: I0228 13:44:06.557465 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rt5s7"] Feb 28 13:44:08 crc kubenswrapper[4897]: I0228 13:44:08.434351 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rt5s7" podUID="b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7" containerName="registry-server" containerID="cri-o://a3362b28aa9857a1561a084d12121b2e25b34f24c1b0fd4963cf8ef81c04aa79" gracePeriod=2 Feb 28 13:44:09 crc kubenswrapper[4897]: I0228 13:44:09.016648 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rt5s7" Feb 28 13:44:09 crc kubenswrapper[4897]: I0228 13:44:09.159109 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7-catalog-content\") pod \"b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7\" (UID: \"b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7\") " Feb 28 13:44:09 crc kubenswrapper[4897]: I0228 13:44:09.159269 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7-utilities\") pod \"b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7\" (UID: \"b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7\") " Feb 28 13:44:09 crc kubenswrapper[4897]: I0228 13:44:09.159406 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gxvr\" (UniqueName: \"kubernetes.io/projected/b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7-kube-api-access-5gxvr\") pod \"b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7\" (UID: \"b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7\") " Feb 28 13:44:09 crc kubenswrapper[4897]: I0228 13:44:09.162054 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7-utilities" (OuterVolumeSpecName: "utilities") pod "b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7" (UID: "b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:44:09 crc kubenswrapper[4897]: I0228 13:44:09.171217 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7-kube-api-access-5gxvr" (OuterVolumeSpecName: "kube-api-access-5gxvr") pod "b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7" (UID: "b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7"). InnerVolumeSpecName "kube-api-access-5gxvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:44:09 crc kubenswrapper[4897]: I0228 13:44:09.223044 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7" (UID: "b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:44:09 crc kubenswrapper[4897]: I0228 13:44:09.262201 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 13:44:09 crc kubenswrapper[4897]: I0228 13:44:09.262239 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 13:44:09 crc kubenswrapper[4897]: I0228 13:44:09.262249 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gxvr\" (UniqueName: \"kubernetes.io/projected/b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7-kube-api-access-5gxvr\") on node \"crc\" DevicePath \"\"" Feb 28 13:44:09 crc kubenswrapper[4897]: I0228 13:44:09.445621 4897 generic.go:334] "Generic (PLEG): container finished" podID="b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7" containerID="a3362b28aa9857a1561a084d12121b2e25b34f24c1b0fd4963cf8ef81c04aa79" exitCode=0 Feb 28 13:44:09 crc kubenswrapper[4897]: I0228 13:44:09.445656 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rt5s7" event={"ID":"b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7","Type":"ContainerDied","Data":"a3362b28aa9857a1561a084d12121b2e25b34f24c1b0fd4963cf8ef81c04aa79"} Feb 28 13:44:09 crc kubenswrapper[4897]: I0228 13:44:09.445729 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rt5s7" Feb 28 13:44:09 crc kubenswrapper[4897]: I0228 13:44:09.445747 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rt5s7" event={"ID":"b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7","Type":"ContainerDied","Data":"b9701f66bb9eab7532e952e32190c1ac5bd9167c5510380f1d83c35696dd9eda"} Feb 28 13:44:09 crc kubenswrapper[4897]: I0228 13:44:09.445832 4897 scope.go:117] "RemoveContainer" containerID="a3362b28aa9857a1561a084d12121b2e25b34f24c1b0fd4963cf8ef81c04aa79" Feb 28 13:44:09 crc kubenswrapper[4897]: I0228 13:44:09.474422 4897 scope.go:117] "RemoveContainer" containerID="6256d035d5ee0d202c2e679bc53c4ed05001f1c05d39a233a8892bdd6e62149a" Feb 28 13:44:09 crc kubenswrapper[4897]: I0228 13:44:09.508304 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rt5s7"] Feb 28 13:44:09 crc kubenswrapper[4897]: I0228 13:44:09.526275 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rt5s7"] Feb 28 13:44:09 crc kubenswrapper[4897]: I0228 13:44:09.530944 4897 scope.go:117] "RemoveContainer" containerID="58729c3e0ee7538f73d5471499d0a26fd10586a91061aca767487cbc01346a57" Feb 28 13:44:09 crc kubenswrapper[4897]: I0228 13:44:09.574162 4897 scope.go:117] "RemoveContainer" containerID="a3362b28aa9857a1561a084d12121b2e25b34f24c1b0fd4963cf8ef81c04aa79" Feb 28 13:44:09 crc kubenswrapper[4897]: E0228 13:44:09.574636 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3362b28aa9857a1561a084d12121b2e25b34f24c1b0fd4963cf8ef81c04aa79\": container with ID starting with a3362b28aa9857a1561a084d12121b2e25b34f24c1b0fd4963cf8ef81c04aa79 not found: ID does not exist" containerID="a3362b28aa9857a1561a084d12121b2e25b34f24c1b0fd4963cf8ef81c04aa79" Feb 28 13:44:09 crc kubenswrapper[4897]: I0228 13:44:09.574675 4897 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3362b28aa9857a1561a084d12121b2e25b34f24c1b0fd4963cf8ef81c04aa79"} err="failed to get container status \"a3362b28aa9857a1561a084d12121b2e25b34f24c1b0fd4963cf8ef81c04aa79\": rpc error: code = NotFound desc = could not find container \"a3362b28aa9857a1561a084d12121b2e25b34f24c1b0fd4963cf8ef81c04aa79\": container with ID starting with a3362b28aa9857a1561a084d12121b2e25b34f24c1b0fd4963cf8ef81c04aa79 not found: ID does not exist" Feb 28 13:44:09 crc kubenswrapper[4897]: I0228 13:44:09.574701 4897 scope.go:117] "RemoveContainer" containerID="6256d035d5ee0d202c2e679bc53c4ed05001f1c05d39a233a8892bdd6e62149a" Feb 28 13:44:09 crc kubenswrapper[4897]: E0228 13:44:09.575078 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6256d035d5ee0d202c2e679bc53c4ed05001f1c05d39a233a8892bdd6e62149a\": container with ID starting with 6256d035d5ee0d202c2e679bc53c4ed05001f1c05d39a233a8892bdd6e62149a not found: ID does not exist" containerID="6256d035d5ee0d202c2e679bc53c4ed05001f1c05d39a233a8892bdd6e62149a" Feb 28 13:44:09 crc kubenswrapper[4897]: I0228 13:44:09.575112 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6256d035d5ee0d202c2e679bc53c4ed05001f1c05d39a233a8892bdd6e62149a"} err="failed to get container status \"6256d035d5ee0d202c2e679bc53c4ed05001f1c05d39a233a8892bdd6e62149a\": rpc error: code = NotFound desc = could not find container \"6256d035d5ee0d202c2e679bc53c4ed05001f1c05d39a233a8892bdd6e62149a\": container with ID starting with 6256d035d5ee0d202c2e679bc53c4ed05001f1c05d39a233a8892bdd6e62149a not found: ID does not exist" Feb 28 13:44:09 crc kubenswrapper[4897]: I0228 13:44:09.575133 4897 scope.go:117] "RemoveContainer" containerID="58729c3e0ee7538f73d5471499d0a26fd10586a91061aca767487cbc01346a57" Feb 28 13:44:09 crc kubenswrapper[4897]: E0228 
13:44:09.575409 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58729c3e0ee7538f73d5471499d0a26fd10586a91061aca767487cbc01346a57\": container with ID starting with 58729c3e0ee7538f73d5471499d0a26fd10586a91061aca767487cbc01346a57 not found: ID does not exist" containerID="58729c3e0ee7538f73d5471499d0a26fd10586a91061aca767487cbc01346a57" Feb 28 13:44:09 crc kubenswrapper[4897]: I0228 13:44:09.575432 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58729c3e0ee7538f73d5471499d0a26fd10586a91061aca767487cbc01346a57"} err="failed to get container status \"58729c3e0ee7538f73d5471499d0a26fd10586a91061aca767487cbc01346a57\": rpc error: code = NotFound desc = could not find container \"58729c3e0ee7538f73d5471499d0a26fd10586a91061aca767487cbc01346a57\": container with ID starting with 58729c3e0ee7538f73d5471499d0a26fd10586a91061aca767487cbc01346a57 not found: ID does not exist" Feb 28 13:44:10 crc kubenswrapper[4897]: I0228 13:44:10.525147 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7" path="/var/lib/kubelet/pods/b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7/volumes" Feb 28 13:44:11 crc kubenswrapper[4897]: E0228 13:44:11.459473 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538100-v4j6s" podUID="b6642318-7bfd-49f2-86e3-0fe4a7ec2709" Feb 28 13:44:12 crc kubenswrapper[4897]: E0228 13:44:12.460001 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" 
pod="openstack/prometheus-metric-storage-0" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" Feb 28 13:44:13 crc kubenswrapper[4897]: I0228 13:44:13.456028 4897 scope.go:117] "RemoveContainer" containerID="aac243b40f8cd522c66dee8379dbaaf158caf96690e017e4b821fa9af1817d1c" Feb 28 13:44:13 crc kubenswrapper[4897]: E0228 13:44:13.456260 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:44:14 crc kubenswrapper[4897]: E0228 13:44:14.132719 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=4c1235c462902350224ff1dfc90bd9f47b4fe1b4d05c3d5fc542c5c8e38ee80a/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 28 13:44:14 crc kubenswrapper[4897]: E0228 13:44:14.135189 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vdxzw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-bjhzq_openshift-marketplace(14af1d5e-f67c-4675-afb4-4aff4b78237c): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=4c1235c462902350224ff1dfc90bd9f47b4fe1b4d05c3d5fc542c5c8e38ee80a/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 13:44:14 crc kubenswrapper[4897]: E0228 13:44:14.136960 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=4c1235c462902350224ff1dfc90bd9f47b4fe1b4d05c3d5fc542c5c8e38ee80a/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-bjhzq" podUID="14af1d5e-f67c-4675-afb4-4aff4b78237c" Feb 28 13:44:14 crc kubenswrapper[4897]: E0228 13:44:14.637532 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 28 13:44:14 crc kubenswrapper[4897]: E0228 13:44:14.637730 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 13:44:14 crc kubenswrapper[4897]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 28 13:44:14 crc kubenswrapper[4897]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hlrnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
auto-csr-approver-29538104-knc25_openshift-infra(68beed07-352c-43c3-9750-f5e63fcab99a): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 28 13:44:14 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 13:44:14 crc kubenswrapper[4897]: E0228 13:44:14.648769 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29538104-knc25" podUID="68beed07-352c-43c3-9750-f5e63fcab99a" Feb 28 13:44:15 crc kubenswrapper[4897]: E0228 13:44:15.457573 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:44:16 crc kubenswrapper[4897]: E0228 13:44:16.747276 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0ed3d2c_1f54_445d_8e0d_1908ac9e03c7.slice/crio-conmon-58729c3e0ee7538f73d5471499d0a26fd10586a91061aca767487cbc01346a57.scope\": RecentStats: unable to find data in memory cache]" Feb 28 13:44:18 crc kubenswrapper[4897]: E0228 13:44:18.460065 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: 
\"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538102-l46vx" podUID="c0fa54bd-caa0-4a38-a45b-a5e6646e3843" Feb 28 13:44:19 crc kubenswrapper[4897]: E0228 13:44:19.458649 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" Feb 28 13:44:24 crc kubenswrapper[4897]: E0228 13:44:24.469935 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538100-v4j6s" podUID="b6642318-7bfd-49f2-86e3-0fe4a7ec2709" Feb 28 13:44:25 crc kubenswrapper[4897]: E0228 13:44:25.458886 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bjhzq" podUID="14af1d5e-f67c-4675-afb4-4aff4b78237c" Feb 28 13:44:27 crc kubenswrapper[4897]: I0228 13:44:27.459527 4897 scope.go:117] "RemoveContainer" containerID="aac243b40f8cd522c66dee8379dbaaf158caf96690e017e4b821fa9af1817d1c" Feb 28 13:44:27 crc kubenswrapper[4897]: E0228 13:44:27.460198 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" Feb 28 13:44:27 crc kubenswrapper[4897]: E0228 13:44:27.460789 4897 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:44:28 crc kubenswrapper[4897]: E0228 13:44:28.458455 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:44:29 crc kubenswrapper[4897]: E0228 13:44:29.459687 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538102-l46vx" podUID="c0fa54bd-caa0-4a38-a45b-a5e6646e3843" Feb 28 13:44:29 crc kubenswrapper[4897]: E0228 13:44:29.460714 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538104-knc25" podUID="68beed07-352c-43c3-9750-f5e63fcab99a" Feb 28 13:44:30 crc kubenswrapper[4897]: E0228 13:44:30.458697 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" Feb 28 13:44:36 crc kubenswrapper[4897]: E0228 13:44:36.471852 4897 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bjhzq" podUID="14af1d5e-f67c-4675-afb4-4aff4b78237c" Feb 28 13:44:36 crc kubenswrapper[4897]: E0228 13:44:36.472085 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538100-v4j6s" podUID="b6642318-7bfd-49f2-86e3-0fe4a7ec2709" Feb 28 13:44:39 crc kubenswrapper[4897]: E0228 13:44:39.459723 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" Feb 28 13:44:41 crc kubenswrapper[4897]: E0228 13:44:41.460395 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:44:42 crc kubenswrapper[4897]: I0228 13:44:42.354220 4897 scope.go:117] "RemoveContainer" containerID="9e34f2e6495c1fe81abdf0d50a40373b2771813b4d6d5371f7966b9865bf9d36" Feb 28 13:44:42 crc kubenswrapper[4897]: I0228 13:44:42.383174 4897 scope.go:117] "RemoveContainer" containerID="8afe08a8012900b93dcc91888cfb0570e04eb144f0c8e5affa8382e765241f75" Feb 28 13:44:42 crc kubenswrapper[4897]: I0228 13:44:42.408763 4897 scope.go:117] "RemoveContainer" 
containerID="b8c42906a6f7073722e82fc7f395ccaac5c92a998f9d6922a6d37271f20323d1" Feb 28 13:44:42 crc kubenswrapper[4897]: I0228 13:44:42.436222 4897 scope.go:117] "RemoveContainer" containerID="eef4761385fed15a9a6e49dccbd224ac9626ad14ebd4ecdc6a0f42ee0b6d8e58" Feb 28 13:44:42 crc kubenswrapper[4897]: I0228 13:44:42.457677 4897 scope.go:117] "RemoveContainer" containerID="aac243b40f8cd522c66dee8379dbaaf158caf96690e017e4b821fa9af1817d1c" Feb 28 13:44:42 crc kubenswrapper[4897]: E0228 13:44:42.457982 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:44:42 crc kubenswrapper[4897]: E0228 13:44:42.465680 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538102-l46vx" podUID="c0fa54bd-caa0-4a38-a45b-a5e6646e3843" Feb 28 13:44:42 crc kubenswrapper[4897]: I0228 13:44:42.911512 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538104-knc25" event={"ID":"68beed07-352c-43c3-9750-f5e63fcab99a","Type":"ContainerStarted","Data":"9184d269b372b09e0171b692c1bf6fcf54eaec01988d7d16e77be0f91d908227"} Feb 28 13:44:42 crc kubenswrapper[4897]: I0228 13:44:42.932999 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29538104-knc25" podStartSLOduration=1.453811897 podStartE2EDuration="42.932978796s" podCreationTimestamp="2026-02-28 13:44:00 +0000 UTC" firstStartedPulling="2026-02-28 13:44:00.985790564 +0000 UTC 
m=+1655.228111231" lastFinishedPulling="2026-02-28 13:44:42.464957463 +0000 UTC m=+1696.707278130" observedRunningTime="2026-02-28 13:44:42.9254586 +0000 UTC m=+1697.167779267" watchObservedRunningTime="2026-02-28 13:44:42.932978796 +0000 UTC m=+1697.175299453" Feb 28 13:44:43 crc kubenswrapper[4897]: I0228 13:44:43.924196 4897 generic.go:334] "Generic (PLEG): container finished" podID="68beed07-352c-43c3-9750-f5e63fcab99a" containerID="9184d269b372b09e0171b692c1bf6fcf54eaec01988d7d16e77be0f91d908227" exitCode=0 Feb 28 13:44:43 crc kubenswrapper[4897]: I0228 13:44:43.924285 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538104-knc25" event={"ID":"68beed07-352c-43c3-9750-f5e63fcab99a","Type":"ContainerDied","Data":"9184d269b372b09e0171b692c1bf6fcf54eaec01988d7d16e77be0f91d908227"} Feb 28 13:44:45 crc kubenswrapper[4897]: I0228 13:44:45.412817 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538104-knc25" Feb 28 13:44:45 crc kubenswrapper[4897]: I0228 13:44:45.475025 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlrnk\" (UniqueName: \"kubernetes.io/projected/68beed07-352c-43c3-9750-f5e63fcab99a-kube-api-access-hlrnk\") pod \"68beed07-352c-43c3-9750-f5e63fcab99a\" (UID: \"68beed07-352c-43c3-9750-f5e63fcab99a\") " Feb 28 13:44:45 crc kubenswrapper[4897]: I0228 13:44:45.484022 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68beed07-352c-43c3-9750-f5e63fcab99a-kube-api-access-hlrnk" (OuterVolumeSpecName: "kube-api-access-hlrnk") pod "68beed07-352c-43c3-9750-f5e63fcab99a" (UID: "68beed07-352c-43c3-9750-f5e63fcab99a"). InnerVolumeSpecName "kube-api-access-hlrnk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:44:45 crc kubenswrapper[4897]: I0228 13:44:45.578171 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlrnk\" (UniqueName: \"kubernetes.io/projected/68beed07-352c-43c3-9750-f5e63fcab99a-kube-api-access-hlrnk\") on node \"crc\" DevicePath \"\"" Feb 28 13:44:45 crc kubenswrapper[4897]: I0228 13:44:45.951289 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538104-knc25" event={"ID":"68beed07-352c-43c3-9750-f5e63fcab99a","Type":"ContainerDied","Data":"4a16cf11c335907dccedc29c1b848cce8f49704c25ee4cbb45e97a9494441a31"} Feb 28 13:44:45 crc kubenswrapper[4897]: I0228 13:44:45.951819 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a16cf11c335907dccedc29c1b848cce8f49704c25ee4cbb45e97a9494441a31" Feb 28 13:44:45 crc kubenswrapper[4897]: I0228 13:44:45.951767 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538104-knc25" Feb 28 13:44:46 crc kubenswrapper[4897]: I0228 13:44:46.003533 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538094-cz9s2"] Feb 28 13:44:46 crc kubenswrapper[4897]: I0228 13:44:46.011024 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538094-cz9s2"] Feb 28 13:44:46 crc kubenswrapper[4897]: E0228 13:44:46.476804 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" Feb 28 13:44:46 crc kubenswrapper[4897]: I0228 13:44:46.477637 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db157ff1-ece6-4751-8cb5-89e894c98fae" 
path="/var/lib/kubelet/pods/db157ff1-ece6-4751-8cb5-89e894c98fae/volumes" Feb 28 13:44:47 crc kubenswrapper[4897]: E0228 13:44:47.458671 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bjhzq" podUID="14af1d5e-f67c-4675-afb4-4aff4b78237c" Feb 28 13:44:47 crc kubenswrapper[4897]: E0228 13:44:47.459025 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538100-v4j6s" podUID="b6642318-7bfd-49f2-86e3-0fe4a7ec2709" Feb 28 13:44:54 crc kubenswrapper[4897]: E0228 13:44:54.459578 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:44:56 crc kubenswrapper[4897]: I0228 13:44:56.463445 4897 scope.go:117] "RemoveContainer" containerID="aac243b40f8cd522c66dee8379dbaaf158caf96690e017e4b821fa9af1817d1c" Feb 28 13:44:56 crc kubenswrapper[4897]: E0228 13:44:56.464067 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:44:58 crc kubenswrapper[4897]: E0228 13:44:58.459654 4897 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" Feb 28 13:44:58 crc kubenswrapper[4897]: E0228 13:44:58.459712 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538100-v4j6s" podUID="b6642318-7bfd-49f2-86e3-0fe4a7ec2709" Feb 28 13:45:00 crc kubenswrapper[4897]: I0228 13:45:00.167980 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29538105-hrdzc"] Feb 28 13:45:00 crc kubenswrapper[4897]: E0228 13:45:00.168461 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7" containerName="extract-content" Feb 28 13:45:00 crc kubenswrapper[4897]: I0228 13:45:00.168477 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7" containerName="extract-content" Feb 28 13:45:00 crc kubenswrapper[4897]: E0228 13:45:00.168491 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68beed07-352c-43c3-9750-f5e63fcab99a" containerName="oc" Feb 28 13:45:00 crc kubenswrapper[4897]: I0228 13:45:00.168497 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="68beed07-352c-43c3-9750-f5e63fcab99a" containerName="oc" Feb 28 13:45:00 crc kubenswrapper[4897]: E0228 13:45:00.168509 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7" containerName="registry-server" Feb 28 13:45:00 crc kubenswrapper[4897]: I0228 13:45:00.168515 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7" containerName="registry-server" Feb 28 13:45:00 crc 
kubenswrapper[4897]: E0228 13:45:00.168533 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7" containerName="extract-utilities" Feb 28 13:45:00 crc kubenswrapper[4897]: I0228 13:45:00.168540 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7" containerName="extract-utilities" Feb 28 13:45:00 crc kubenswrapper[4897]: I0228 13:45:00.168735 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0ed3d2c-1f54-445d-8e0d-1908ac9e03c7" containerName="registry-server" Feb 28 13:45:00 crc kubenswrapper[4897]: I0228 13:45:00.168784 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="68beed07-352c-43c3-9750-f5e63fcab99a" containerName="oc" Feb 28 13:45:00 crc kubenswrapper[4897]: I0228 13:45:00.169572 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29538105-hrdzc" Feb 28 13:45:00 crc kubenswrapper[4897]: I0228 13:45:00.172215 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 28 13:45:00 crc kubenswrapper[4897]: I0228 13:45:00.172369 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 28 13:45:00 crc kubenswrapper[4897]: I0228 13:45:00.197243 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29538105-hrdzc"] Feb 28 13:45:00 crc kubenswrapper[4897]: I0228 13:45:00.331773 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8caf2334-e6eb-4ddd-a189-8fc52e0d07b7-secret-volume\") pod \"collect-profiles-29538105-hrdzc\" (UID: \"8caf2334-e6eb-4ddd-a189-8fc52e0d07b7\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29538105-hrdzc" Feb 28 13:45:00 crc kubenswrapper[4897]: I0228 13:45:00.331943 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8caf2334-e6eb-4ddd-a189-8fc52e0d07b7-config-volume\") pod \"collect-profiles-29538105-hrdzc\" (UID: \"8caf2334-e6eb-4ddd-a189-8fc52e0d07b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538105-hrdzc" Feb 28 13:45:00 crc kubenswrapper[4897]: I0228 13:45:00.332687 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzbk4\" (UniqueName: \"kubernetes.io/projected/8caf2334-e6eb-4ddd-a189-8fc52e0d07b7-kube-api-access-zzbk4\") pod \"collect-profiles-29538105-hrdzc\" (UID: \"8caf2334-e6eb-4ddd-a189-8fc52e0d07b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538105-hrdzc" Feb 28 13:45:00 crc kubenswrapper[4897]: I0228 13:45:00.434845 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzbk4\" (UniqueName: \"kubernetes.io/projected/8caf2334-e6eb-4ddd-a189-8fc52e0d07b7-kube-api-access-zzbk4\") pod \"collect-profiles-29538105-hrdzc\" (UID: \"8caf2334-e6eb-4ddd-a189-8fc52e0d07b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538105-hrdzc" Feb 28 13:45:00 crc kubenswrapper[4897]: I0228 13:45:00.434892 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8caf2334-e6eb-4ddd-a189-8fc52e0d07b7-secret-volume\") pod \"collect-profiles-29538105-hrdzc\" (UID: \"8caf2334-e6eb-4ddd-a189-8fc52e0d07b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538105-hrdzc" Feb 28 13:45:00 crc kubenswrapper[4897]: I0228 13:45:00.434939 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/8caf2334-e6eb-4ddd-a189-8fc52e0d07b7-config-volume\") pod \"collect-profiles-29538105-hrdzc\" (UID: \"8caf2334-e6eb-4ddd-a189-8fc52e0d07b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538105-hrdzc" Feb 28 13:45:00 crc kubenswrapper[4897]: I0228 13:45:00.436518 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8caf2334-e6eb-4ddd-a189-8fc52e0d07b7-config-volume\") pod \"collect-profiles-29538105-hrdzc\" (UID: \"8caf2334-e6eb-4ddd-a189-8fc52e0d07b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538105-hrdzc" Feb 28 13:45:00 crc kubenswrapper[4897]: I0228 13:45:00.451871 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8caf2334-e6eb-4ddd-a189-8fc52e0d07b7-secret-volume\") pod \"collect-profiles-29538105-hrdzc\" (UID: \"8caf2334-e6eb-4ddd-a189-8fc52e0d07b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538105-hrdzc" Feb 28 13:45:00 crc kubenswrapper[4897]: I0228 13:45:00.456544 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzbk4\" (UniqueName: \"kubernetes.io/projected/8caf2334-e6eb-4ddd-a189-8fc52e0d07b7-kube-api-access-zzbk4\") pod \"collect-profiles-29538105-hrdzc\" (UID: \"8caf2334-e6eb-4ddd-a189-8fc52e0d07b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538105-hrdzc" Feb 28 13:45:00 crc kubenswrapper[4897]: I0228 13:45:00.495196 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29538105-hrdzc" Feb 28 13:45:00 crc kubenswrapper[4897]: I0228 13:45:00.953502 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29538105-hrdzc"] Feb 28 13:45:01 crc kubenswrapper[4897]: I0228 13:45:01.151980 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29538105-hrdzc" event={"ID":"8caf2334-e6eb-4ddd-a189-8fc52e0d07b7","Type":"ContainerStarted","Data":"4a7060b99f3cb9b1a6ae63e82ab09722a2b6f8a1e28eb41457efa2b39dcdedc9"} Feb 28 13:45:02 crc kubenswrapper[4897]: I0228 13:45:02.164995 4897 generic.go:334] "Generic (PLEG): container finished" podID="8caf2334-e6eb-4ddd-a189-8fc52e0d07b7" containerID="f90e4d828e4817ba4cb45d75eb902c51503dd8eb932a81cad24635662a4fd9c6" exitCode=0 Feb 28 13:45:02 crc kubenswrapper[4897]: I0228 13:45:02.165061 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29538105-hrdzc" event={"ID":"8caf2334-e6eb-4ddd-a189-8fc52e0d07b7","Type":"ContainerDied","Data":"f90e4d828e4817ba4cb45d75eb902c51503dd8eb932a81cad24635662a4fd9c6"} Feb 28 13:45:02 crc kubenswrapper[4897]: E0228 13:45:02.458275 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bjhzq" podUID="14af1d5e-f67c-4675-afb4-4aff4b78237c" Feb 28 13:45:03 crc kubenswrapper[4897]: I0228 13:45:03.183068 4897 generic.go:334] "Generic (PLEG): container finished" podID="c0fa54bd-caa0-4a38-a45b-a5e6646e3843" containerID="bfff181b7f363e376980d3482eba267679c8835d63343d7617ad5185eb52f007" exitCode=0 Feb 28 13:45:03 crc kubenswrapper[4897]: I0228 13:45:03.183188 4897 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-infra/auto-csr-approver-29538102-l46vx" event={"ID":"c0fa54bd-caa0-4a38-a45b-a5e6646e3843","Type":"ContainerDied","Data":"bfff181b7f363e376980d3482eba267679c8835d63343d7617ad5185eb52f007"} Feb 28 13:45:03 crc kubenswrapper[4897]: I0228 13:45:03.631134 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29538105-hrdzc" Feb 28 13:45:03 crc kubenswrapper[4897]: I0228 13:45:03.717436 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzbk4\" (UniqueName: \"kubernetes.io/projected/8caf2334-e6eb-4ddd-a189-8fc52e0d07b7-kube-api-access-zzbk4\") pod \"8caf2334-e6eb-4ddd-a189-8fc52e0d07b7\" (UID: \"8caf2334-e6eb-4ddd-a189-8fc52e0d07b7\") " Feb 28 13:45:03 crc kubenswrapper[4897]: I0228 13:45:03.717619 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8caf2334-e6eb-4ddd-a189-8fc52e0d07b7-config-volume\") pod \"8caf2334-e6eb-4ddd-a189-8fc52e0d07b7\" (UID: \"8caf2334-e6eb-4ddd-a189-8fc52e0d07b7\") " Feb 28 13:45:03 crc kubenswrapper[4897]: I0228 13:45:03.717747 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8caf2334-e6eb-4ddd-a189-8fc52e0d07b7-secret-volume\") pod \"8caf2334-e6eb-4ddd-a189-8fc52e0d07b7\" (UID: \"8caf2334-e6eb-4ddd-a189-8fc52e0d07b7\") " Feb 28 13:45:03 crc kubenswrapper[4897]: I0228 13:45:03.718683 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8caf2334-e6eb-4ddd-a189-8fc52e0d07b7-config-volume" (OuterVolumeSpecName: "config-volume") pod "8caf2334-e6eb-4ddd-a189-8fc52e0d07b7" (UID: "8caf2334-e6eb-4ddd-a189-8fc52e0d07b7"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:45:03 crc kubenswrapper[4897]: I0228 13:45:03.726758 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8caf2334-e6eb-4ddd-a189-8fc52e0d07b7-kube-api-access-zzbk4" (OuterVolumeSpecName: "kube-api-access-zzbk4") pod "8caf2334-e6eb-4ddd-a189-8fc52e0d07b7" (UID: "8caf2334-e6eb-4ddd-a189-8fc52e0d07b7"). InnerVolumeSpecName "kube-api-access-zzbk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:45:03 crc kubenswrapper[4897]: I0228 13:45:03.726935 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8caf2334-e6eb-4ddd-a189-8fc52e0d07b7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8caf2334-e6eb-4ddd-a189-8fc52e0d07b7" (UID: "8caf2334-e6eb-4ddd-a189-8fc52e0d07b7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:45:03 crc kubenswrapper[4897]: I0228 13:45:03.820506 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzbk4\" (UniqueName: \"kubernetes.io/projected/8caf2334-e6eb-4ddd-a189-8fc52e0d07b7-kube-api-access-zzbk4\") on node \"crc\" DevicePath \"\"" Feb 28 13:45:03 crc kubenswrapper[4897]: I0228 13:45:03.820564 4897 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8caf2334-e6eb-4ddd-a189-8fc52e0d07b7-config-volume\") on node \"crc\" DevicePath \"\"" Feb 28 13:45:03 crc kubenswrapper[4897]: I0228 13:45:03.820587 4897 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8caf2334-e6eb-4ddd-a189-8fc52e0d07b7-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 28 13:45:04 crc kubenswrapper[4897]: I0228 13:45:04.199974 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29538105-hrdzc" 
event={"ID":"8caf2334-e6eb-4ddd-a189-8fc52e0d07b7","Type":"ContainerDied","Data":"4a7060b99f3cb9b1a6ae63e82ab09722a2b6f8a1e28eb41457efa2b39dcdedc9"} Feb 28 13:45:04 crc kubenswrapper[4897]: I0228 13:45:04.200036 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29538105-hrdzc" Feb 28 13:45:04 crc kubenswrapper[4897]: I0228 13:45:04.200043 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a7060b99f3cb9b1a6ae63e82ab09722a2b6f8a1e28eb41457efa2b39dcdedc9" Feb 28 13:45:04 crc kubenswrapper[4897]: I0228 13:45:04.640466 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538102-l46vx" Feb 28 13:45:04 crc kubenswrapper[4897]: I0228 13:45:04.739674 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67lnl\" (UniqueName: \"kubernetes.io/projected/c0fa54bd-caa0-4a38-a45b-a5e6646e3843-kube-api-access-67lnl\") pod \"c0fa54bd-caa0-4a38-a45b-a5e6646e3843\" (UID: \"c0fa54bd-caa0-4a38-a45b-a5e6646e3843\") " Feb 28 13:45:04 crc kubenswrapper[4897]: I0228 13:45:04.747452 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0fa54bd-caa0-4a38-a45b-a5e6646e3843-kube-api-access-67lnl" (OuterVolumeSpecName: "kube-api-access-67lnl") pod "c0fa54bd-caa0-4a38-a45b-a5e6646e3843" (UID: "c0fa54bd-caa0-4a38-a45b-a5e6646e3843"). InnerVolumeSpecName "kube-api-access-67lnl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:45:04 crc kubenswrapper[4897]: I0228 13:45:04.842359 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67lnl\" (UniqueName: \"kubernetes.io/projected/c0fa54bd-caa0-4a38-a45b-a5e6646e3843-kube-api-access-67lnl\") on node \"crc\" DevicePath \"\"" Feb 28 13:45:05 crc kubenswrapper[4897]: I0228 13:45:05.216816 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538102-l46vx" event={"ID":"c0fa54bd-caa0-4a38-a45b-a5e6646e3843","Type":"ContainerDied","Data":"32c34565f8ef1d2e83d3e5599da55b1a6f7b2bda4a278a44f03d93d905d9ccdb"} Feb 28 13:45:05 crc kubenswrapper[4897]: I0228 13:45:05.216874 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32c34565f8ef1d2e83d3e5599da55b1a6f7b2bda4a278a44f03d93d905d9ccdb" Feb 28 13:45:05 crc kubenswrapper[4897]: I0228 13:45:05.216918 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538102-l46vx" Feb 28 13:45:05 crc kubenswrapper[4897]: I0228 13:45:05.759527 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538096-ws9qt"] Feb 28 13:45:05 crc kubenswrapper[4897]: I0228 13:45:05.778112 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538096-ws9qt"] Feb 28 13:45:06 crc kubenswrapper[4897]: I0228 13:45:06.475002 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e94c0b2-21a6-496c-8188-dfcaf0d66b2b" path="/var/lib/kubelet/pods/6e94c0b2-21a6-496c-8188-dfcaf0d66b2b/volumes" Feb 28 13:45:07 crc kubenswrapper[4897]: I0228 13:45:07.456675 4897 scope.go:117] "RemoveContainer" containerID="aac243b40f8cd522c66dee8379dbaaf158caf96690e017e4b821fa9af1817d1c" Feb 28 13:45:07 crc kubenswrapper[4897]: E0228 13:45:07.457485 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:45:07 crc kubenswrapper[4897]: E0228 13:45:07.459466 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:45:12 crc kubenswrapper[4897]: E0228 13:45:12.462790 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" Feb 28 13:45:12 crc kubenswrapper[4897]: E0228 13:45:12.462801 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538100-v4j6s" podUID="b6642318-7bfd-49f2-86e3-0fe4a7ec2709" Feb 28 13:45:13 crc kubenswrapper[4897]: E0228 13:45:13.458866 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bjhzq" podUID="14af1d5e-f67c-4675-afb4-4aff4b78237c" Feb 28 13:45:20 crc kubenswrapper[4897]: E0228 13:45:20.460522 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:45:21 crc kubenswrapper[4897]: I0228 13:45:21.456796 4897 scope.go:117] "RemoveContainer" containerID="aac243b40f8cd522c66dee8379dbaaf158caf96690e017e4b821fa9af1817d1c" Feb 28 13:45:21 crc kubenswrapper[4897]: E0228 13:45:21.457571 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:45:23 crc kubenswrapper[4897]: E0228 13:45:23.459245 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" Feb 28 13:45:26 crc kubenswrapper[4897]: E0228 13:45:26.471464 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538100-v4j6s" podUID="b6642318-7bfd-49f2-86e3-0fe4a7ec2709" Feb 28 13:45:28 crc kubenswrapper[4897]: E0228 13:45:28.458873 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bjhzq" 
podUID="14af1d5e-f67c-4675-afb4-4aff4b78237c" Feb 28 13:45:29 crc kubenswrapper[4897]: I0228 13:45:29.540540 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6","Type":"ContainerStarted","Data":"d3da4a47439bff179f18914d82f12cbf80e94acd2abdda29adfbe106b9a1bf02"} Feb 28 13:45:29 crc kubenswrapper[4897]: I0228 13:45:29.596732 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=14.028885562 podStartE2EDuration="9m27.596709732s" podCreationTimestamp="2026-02-28 13:36:02 +0000 UTC" firstStartedPulling="2026-02-28 13:36:15.024558553 +0000 UTC m=+1189.266879210" lastFinishedPulling="2026-02-28 13:45:28.592382713 +0000 UTC m=+1742.834703380" observedRunningTime="2026-02-28 13:45:29.583837902 +0000 UTC m=+1743.826158639" watchObservedRunningTime="2026-02-28 13:45:29.596709732 +0000 UTC m=+1743.839030399" Feb 28 13:45:30 crc kubenswrapper[4897]: I0228 13:45:30.877272 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 28 13:45:30 crc kubenswrapper[4897]: I0228 13:45:30.877700 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="2910518a-9b98-499b-a132-954899d270c0" containerName="openstackclient" containerID="cri-o://3e4a765b116862f4628ed18d1048b86186dfdf89b6120f1d6a290f01bc622a38" gracePeriod=2 Feb 28 13:45:30 crc kubenswrapper[4897]: I0228 13:45:30.891068 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 28 13:45:30 crc kubenswrapper[4897]: I0228 13:45:30.917060 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 28 13:45:30 crc kubenswrapper[4897]: E0228 13:45:30.919423 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0fa54bd-caa0-4a38-a45b-a5e6646e3843" containerName="oc" Feb 28 13:45:30 crc 
kubenswrapper[4897]: I0228 13:45:30.919524 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0fa54bd-caa0-4a38-a45b-a5e6646e3843" containerName="oc" Feb 28 13:45:30 crc kubenswrapper[4897]: E0228 13:45:30.919591 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2910518a-9b98-499b-a132-954899d270c0" containerName="openstackclient" Feb 28 13:45:30 crc kubenswrapper[4897]: I0228 13:45:30.919644 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="2910518a-9b98-499b-a132-954899d270c0" containerName="openstackclient" Feb 28 13:45:30 crc kubenswrapper[4897]: E0228 13:45:30.919711 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8caf2334-e6eb-4ddd-a189-8fc52e0d07b7" containerName="collect-profiles" Feb 28 13:45:30 crc kubenswrapper[4897]: I0228 13:45:30.919764 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="8caf2334-e6eb-4ddd-a189-8fc52e0d07b7" containerName="collect-profiles" Feb 28 13:45:30 crc kubenswrapper[4897]: I0228 13:45:30.920003 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="2910518a-9b98-499b-a132-954899d270c0" containerName="openstackclient" Feb 28 13:45:30 crc kubenswrapper[4897]: I0228 13:45:30.920064 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0fa54bd-caa0-4a38-a45b-a5e6646e3843" containerName="oc" Feb 28 13:45:30 crc kubenswrapper[4897]: I0228 13:45:30.920122 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="8caf2334-e6eb-4ddd-a189-8fc52e0d07b7" containerName="collect-profiles" Feb 28 13:45:30 crc kubenswrapper[4897]: I0228 13:45:30.920797 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 28 13:45:30 crc kubenswrapper[4897]: I0228 13:45:30.925172 4897 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="2910518a-9b98-499b-a132-954899d270c0" podUID="768007b3-82d1-4b63-b96f-4d8797b46acc" Feb 28 13:45:30 crc kubenswrapper[4897]: I0228 13:45:30.945382 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 28 13:45:31 crc kubenswrapper[4897]: I0228 13:45:31.054292 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 28 13:45:31 crc kubenswrapper[4897]: I0228 13:45:31.054874 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="f645316a-2073-4db9-8ff9-a0af2afc7104" containerName="watcher-decision-engine" containerID="cri-o://38869235cd57b69ff6ba0b041916342de0e6443d96b2341a79ac08dd2c453fcc" gracePeriod=30 Feb 28 13:45:31 crc kubenswrapper[4897]: I0228 13:45:31.066632 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 28 13:45:31 crc kubenswrapper[4897]: I0228 13:45:31.066921 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="759c2685-a508-4824-9e22-1c18ca2e75ca" containerName="watcher-api-log" containerID="cri-o://49a370aa792694cbe21feace035e04f4cddd95861466ed9ec8543fed4bd7bd61" gracePeriod=30 Feb 28 13:45:31 crc kubenswrapper[4897]: I0228 13:45:31.067001 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="759c2685-a508-4824-9e22-1c18ca2e75ca" containerName="watcher-api" containerID="cri-o://5e432dd33d9abefb43c0529e3079d0ff37436c9e5fd9e5e6268a9afd1be6a066" gracePeriod=30 Feb 28 13:45:31 crc kubenswrapper[4897]: I0228 13:45:31.069805 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/768007b3-82d1-4b63-b96f-4d8797b46acc-openstack-config-secret\") pod \"openstackclient\" (UID: \"768007b3-82d1-4b63-b96f-4d8797b46acc\") " pod="openstack/openstackclient" Feb 28 13:45:31 crc kubenswrapper[4897]: I0228 13:45:31.069927 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/768007b3-82d1-4b63-b96f-4d8797b46acc-combined-ca-bundle\") pod \"openstackclient\" (UID: \"768007b3-82d1-4b63-b96f-4d8797b46acc\") " pod="openstack/openstackclient" Feb 28 13:45:31 crc kubenswrapper[4897]: I0228 13:45:31.070057 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/768007b3-82d1-4b63-b96f-4d8797b46acc-openstack-config\") pod \"openstackclient\" (UID: \"768007b3-82d1-4b63-b96f-4d8797b46acc\") " pod="openstack/openstackclient" Feb 28 13:45:31 crc kubenswrapper[4897]: I0228 13:45:31.070098 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvcpp\" (UniqueName: \"kubernetes.io/projected/768007b3-82d1-4b63-b96f-4d8797b46acc-kube-api-access-lvcpp\") pod \"openstackclient\" (UID: \"768007b3-82d1-4b63-b96f-4d8797b46acc\") " pod="openstack/openstackclient" Feb 28 13:45:31 crc kubenswrapper[4897]: I0228 13:45:31.171896 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/768007b3-82d1-4b63-b96f-4d8797b46acc-combined-ca-bundle\") pod \"openstackclient\" (UID: \"768007b3-82d1-4b63-b96f-4d8797b46acc\") " pod="openstack/openstackclient" Feb 28 13:45:31 crc kubenswrapper[4897]: I0228 13:45:31.172071 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openstack-config\" (UniqueName: \"kubernetes.io/configmap/768007b3-82d1-4b63-b96f-4d8797b46acc-openstack-config\") pod \"openstackclient\" (UID: \"768007b3-82d1-4b63-b96f-4d8797b46acc\") " pod="openstack/openstackclient" Feb 28 13:45:31 crc kubenswrapper[4897]: I0228 13:45:31.172121 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvcpp\" (UniqueName: \"kubernetes.io/projected/768007b3-82d1-4b63-b96f-4d8797b46acc-kube-api-access-lvcpp\") pod \"openstackclient\" (UID: \"768007b3-82d1-4b63-b96f-4d8797b46acc\") " pod="openstack/openstackclient" Feb 28 13:45:31 crc kubenswrapper[4897]: I0228 13:45:31.172149 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/768007b3-82d1-4b63-b96f-4d8797b46acc-openstack-config-secret\") pod \"openstackclient\" (UID: \"768007b3-82d1-4b63-b96f-4d8797b46acc\") " pod="openstack/openstackclient" Feb 28 13:45:31 crc kubenswrapper[4897]: I0228 13:45:31.173118 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/768007b3-82d1-4b63-b96f-4d8797b46acc-openstack-config\") pod \"openstackclient\" (UID: \"768007b3-82d1-4b63-b96f-4d8797b46acc\") " pod="openstack/openstackclient" Feb 28 13:45:31 crc kubenswrapper[4897]: I0228 13:45:31.178259 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/768007b3-82d1-4b63-b96f-4d8797b46acc-openstack-config-secret\") pod \"openstackclient\" (UID: \"768007b3-82d1-4b63-b96f-4d8797b46acc\") " pod="openstack/openstackclient" Feb 28 13:45:31 crc kubenswrapper[4897]: I0228 13:45:31.184737 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/768007b3-82d1-4b63-b96f-4d8797b46acc-combined-ca-bundle\") pod \"openstackclient\" (UID: 
\"768007b3-82d1-4b63-b96f-4d8797b46acc\") " pod="openstack/openstackclient" Feb 28 13:45:31 crc kubenswrapper[4897]: I0228 13:45:31.198508 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvcpp\" (UniqueName: \"kubernetes.io/projected/768007b3-82d1-4b63-b96f-4d8797b46acc-kube-api-access-lvcpp\") pod \"openstackclient\" (UID: \"768007b3-82d1-4b63-b96f-4d8797b46acc\") " pod="openstack/openstackclient" Feb 28 13:45:31 crc kubenswrapper[4897]: I0228 13:45:31.256377 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 28 13:45:31 crc kubenswrapper[4897]: I0228 13:45:31.568223 4897 generic.go:334] "Generic (PLEG): container finished" podID="759c2685-a508-4824-9e22-1c18ca2e75ca" containerID="49a370aa792694cbe21feace035e04f4cddd95861466ed9ec8543fed4bd7bd61" exitCode=143 Feb 28 13:45:31 crc kubenswrapper[4897]: I0228 13:45:31.568266 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"759c2685-a508-4824-9e22-1c18ca2e75ca","Type":"ContainerDied","Data":"49a370aa792694cbe21feace035e04f4cddd95861466ed9ec8543fed4bd7bd61"} Feb 28 13:45:31 crc kubenswrapper[4897]: I0228 13:45:31.842126 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 28 13:45:31 crc kubenswrapper[4897]: W0228 13:45:31.846806 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod768007b3_82d1_4b63_b96f_4d8797b46acc.slice/crio-5babe0573633ee88f30350c609cd9b81533b08703ee4826f112fbc06cb5d1f86 WatchSource:0}: Error finding container 5babe0573633ee88f30350c609cd9b81533b08703ee4826f112fbc06cb5d1f86: Status 404 returned error can't find the container with id 5babe0573633ee88f30350c609cd9b81533b08703ee4826f112fbc06cb5d1f86 Feb 28 13:45:31 crc kubenswrapper[4897]: I0228 13:45:31.955232 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/prometheus-metric-storage-0"] Feb 28 13:45:31 crc kubenswrapper[4897]: I0228 13:45:31.955878 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" containerName="prometheus" containerID="cri-o://a84dbcf65ebf45a8e0a4cbb472d0d5147e3deb6bb67a494f9bf8476492e208d2" gracePeriod=600 Feb 28 13:45:31 crc kubenswrapper[4897]: I0228 13:45:31.955891 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" containerName="thanos-sidecar" containerID="cri-o://d3da4a47439bff179f18914d82f12cbf80e94acd2abdda29adfbe106b9a1bf02" gracePeriod=600 Feb 28 13:45:31 crc kubenswrapper[4897]: I0228 13:45:31.956033 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" containerName="config-reloader" containerID="cri-o://b6cdc38b6b85a1ccc08dddd754259c21e0a6f7f4b71c260e1be20477686e93e8" gracePeriod=600 Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.351884 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.503987 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/759c2685-a508-4824-9e22-1c18ca2e75ca-logs\") pod \"759c2685-a508-4824-9e22-1c18ca2e75ca\" (UID: \"759c2685-a508-4824-9e22-1c18ca2e75ca\") " Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.504082 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/759c2685-a508-4824-9e22-1c18ca2e75ca-config-data\") pod \"759c2685-a508-4824-9e22-1c18ca2e75ca\" (UID: \"759c2685-a508-4824-9e22-1c18ca2e75ca\") " Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.504108 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pchj6\" (UniqueName: \"kubernetes.io/projected/759c2685-a508-4824-9e22-1c18ca2e75ca-kube-api-access-pchj6\") pod \"759c2685-a508-4824-9e22-1c18ca2e75ca\" (UID: \"759c2685-a508-4824-9e22-1c18ca2e75ca\") " Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.504138 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/759c2685-a508-4824-9e22-1c18ca2e75ca-public-tls-certs\") pod \"759c2685-a508-4824-9e22-1c18ca2e75ca\" (UID: \"759c2685-a508-4824-9e22-1c18ca2e75ca\") " Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.504171 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/759c2685-a508-4824-9e22-1c18ca2e75ca-internal-tls-certs\") pod \"759c2685-a508-4824-9e22-1c18ca2e75ca\" (UID: \"759c2685-a508-4824-9e22-1c18ca2e75ca\") " Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.504298 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/759c2685-a508-4824-9e22-1c18ca2e75ca-combined-ca-bundle\") pod \"759c2685-a508-4824-9e22-1c18ca2e75ca\" (UID: \"759c2685-a508-4824-9e22-1c18ca2e75ca\") " Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.505071 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/759c2685-a508-4824-9e22-1c18ca2e75ca-logs" (OuterVolumeSpecName: "logs") pod "759c2685-a508-4824-9e22-1c18ca2e75ca" (UID: "759c2685-a508-4824-9e22-1c18ca2e75ca"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.508492 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/759c2685-a508-4824-9e22-1c18ca2e75ca-kube-api-access-pchj6" (OuterVolumeSpecName: "kube-api-access-pchj6") pod "759c2685-a508-4824-9e22-1c18ca2e75ca" (UID: "759c2685-a508-4824-9e22-1c18ca2e75ca"). InnerVolumeSpecName "kube-api-access-pchj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.548353 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/759c2685-a508-4824-9e22-1c18ca2e75ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "759c2685-a508-4824-9e22-1c18ca2e75ca" (UID: "759c2685-a508-4824-9e22-1c18ca2e75ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.557570 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/759c2685-a508-4824-9e22-1c18ca2e75ca-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "759c2685-a508-4824-9e22-1c18ca2e75ca" (UID: "759c2685-a508-4824-9e22-1c18ca2e75ca"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.562014 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/759c2685-a508-4824-9e22-1c18ca2e75ca-config-data" (OuterVolumeSpecName: "config-data") pod "759c2685-a508-4824-9e22-1c18ca2e75ca" (UID: "759c2685-a508-4824-9e22-1c18ca2e75ca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.574236 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/759c2685-a508-4824-9e22-1c18ca2e75ca-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "759c2685-a508-4824-9e22-1c18ca2e75ca" (UID: "759c2685-a508-4824-9e22-1c18ca2e75ca"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.583638 4897 generic.go:334] "Generic (PLEG): container finished" podID="759c2685-a508-4824-9e22-1c18ca2e75ca" containerID="5e432dd33d9abefb43c0529e3079d0ff37436c9e5fd9e5e6268a9afd1be6a066" exitCode=0 Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.583711 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"759c2685-a508-4824-9e22-1c18ca2e75ca","Type":"ContainerDied","Data":"5e432dd33d9abefb43c0529e3079d0ff37436c9e5fd9e5e6268a9afd1be6a066"} Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.583744 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"759c2685-a508-4824-9e22-1c18ca2e75ca","Type":"ContainerDied","Data":"a03f4deec48cbae51eb34537b59f32e818350e850a8858319d3f80f238b54680"} Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.583760 4897 scope.go:117] "RemoveContainer" containerID="5e432dd33d9abefb43c0529e3079d0ff37436c9e5fd9e5e6268a9afd1be6a066" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 
13:45:32.584266 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.586032 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"768007b3-82d1-4b63-b96f-4d8797b46acc","Type":"ContainerStarted","Data":"a8de17ccd063f19c66260b7782d39da0dadcb0c13bf83061f608f934cb297942"} Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.586096 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"768007b3-82d1-4b63-b96f-4d8797b46acc","Type":"ContainerStarted","Data":"5babe0573633ee88f30350c609cd9b81533b08703ee4826f112fbc06cb5d1f86"} Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.593789 4897 generic.go:334] "Generic (PLEG): container finished" podID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" containerID="d3da4a47439bff179f18914d82f12cbf80e94acd2abdda29adfbe106b9a1bf02" exitCode=0 Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.593822 4897 generic.go:334] "Generic (PLEG): container finished" podID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" containerID="a84dbcf65ebf45a8e0a4cbb472d0d5147e3deb6bb67a494f9bf8476492e208d2" exitCode=0 Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.593844 4897 generic.go:334] "Generic (PLEG): container finished" podID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" containerID="b6cdc38b6b85a1ccc08dddd754259c21e0a6f7f4b71c260e1be20477686e93e8" exitCode=0 Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.593843 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6","Type":"ContainerDied","Data":"d3da4a47439bff179f18914d82f12cbf80e94acd2abdda29adfbe106b9a1bf02"} Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.594025 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6","Type":"ContainerDied","Data":"a84dbcf65ebf45a8e0a4cbb472d0d5147e3deb6bb67a494f9bf8476492e208d2"} Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.594054 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6","Type":"ContainerDied","Data":"b6cdc38b6b85a1ccc08dddd754259c21e0a6f7f4b71c260e1be20477686e93e8"} Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.606316 4897 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/759c2685-a508-4824-9e22-1c18ca2e75ca-logs\") on node \"crc\" DevicePath \"\"" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.606346 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/759c2685-a508-4824-9e22-1c18ca2e75ca-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.606357 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pchj6\" (UniqueName: \"kubernetes.io/projected/759c2685-a508-4824-9e22-1c18ca2e75ca-kube-api-access-pchj6\") on node \"crc\" DevicePath \"\"" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.606366 4897 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/759c2685-a508-4824-9e22-1c18ca2e75ca-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.606378 4897 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/759c2685-a508-4824-9e22-1c18ca2e75ca-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.606385 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/759c2685-a508-4824-9e22-1c18ca2e75ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.613688 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.6136677329999998 podStartE2EDuration="2.613667733s" podCreationTimestamp="2026-02-28 13:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:45:32.605168659 +0000 UTC m=+1746.847489326" watchObservedRunningTime="2026-02-28 13:45:32.613667733 +0000 UTC m=+1746.855988390" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.663272 4897 scope.go:117] "RemoveContainer" containerID="49a370aa792694cbe21feace035e04f4cddd95861466ed9ec8543fed4bd7bd61" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.672558 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.685920 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.697445 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 28 13:45:32 crc kubenswrapper[4897]: E0228 13:45:32.698259 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="759c2685-a508-4824-9e22-1c18ca2e75ca" containerName="watcher-api" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.698302 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="759c2685-a508-4824-9e22-1c18ca2e75ca" containerName="watcher-api" Feb 28 13:45:32 crc kubenswrapper[4897]: E0228 13:45:32.698351 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="759c2685-a508-4824-9e22-1c18ca2e75ca" containerName="watcher-api-log" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.698360 4897 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="759c2685-a508-4824-9e22-1c18ca2e75ca" containerName="watcher-api-log" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.698741 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="759c2685-a508-4824-9e22-1c18ca2e75ca" containerName="watcher-api-log" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.698772 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="759c2685-a508-4824-9e22-1c18ca2e75ca" containerName="watcher-api" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.700232 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.702252 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.703246 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.704189 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.706010 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.715225 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.717072 4897 scope.go:117] "RemoveContainer" containerID="5e432dd33d9abefb43c0529e3079d0ff37436c9e5fd9e5e6268a9afd1be6a066" Feb 28 13:45:32 crc kubenswrapper[4897]: E0228 13:45:32.725709 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e432dd33d9abefb43c0529e3079d0ff37436c9e5fd9e5e6268a9afd1be6a066\": container with ID starting with 
5e432dd33d9abefb43c0529e3079d0ff37436c9e5fd9e5e6268a9afd1be6a066 not found: ID does not exist" containerID="5e432dd33d9abefb43c0529e3079d0ff37436c9e5fd9e5e6268a9afd1be6a066" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.725748 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e432dd33d9abefb43c0529e3079d0ff37436c9e5fd9e5e6268a9afd1be6a066"} err="failed to get container status \"5e432dd33d9abefb43c0529e3079d0ff37436c9e5fd9e5e6268a9afd1be6a066\": rpc error: code = NotFound desc = could not find container \"5e432dd33d9abefb43c0529e3079d0ff37436c9e5fd9e5e6268a9afd1be6a066\": container with ID starting with 5e432dd33d9abefb43c0529e3079d0ff37436c9e5fd9e5e6268a9afd1be6a066 not found: ID does not exist" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.725775 4897 scope.go:117] "RemoveContainer" containerID="49a370aa792694cbe21feace035e04f4cddd95861466ed9ec8543fed4bd7bd61" Feb 28 13:45:32 crc kubenswrapper[4897]: E0228 13:45:32.726193 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49a370aa792694cbe21feace035e04f4cddd95861466ed9ec8543fed4bd7bd61\": container with ID starting with 49a370aa792694cbe21feace035e04f4cddd95861466ed9ec8543fed4bd7bd61 not found: ID does not exist" containerID="49a370aa792694cbe21feace035e04f4cddd95861466ed9ec8543fed4bd7bd61" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.726230 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49a370aa792694cbe21feace035e04f4cddd95861466ed9ec8543fed4bd7bd61"} err="failed to get container status \"49a370aa792694cbe21feace035e04f4cddd95861466ed9ec8543fed4bd7bd61\": rpc error: code = NotFound desc = could not find container \"49a370aa792694cbe21feace035e04f4cddd95861466ed9ec8543fed4bd7bd61\": container with ID starting with 49a370aa792694cbe21feace035e04f4cddd95861466ed9ec8543fed4bd7bd61 not found: ID does not 
exist" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.815387 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56cj9\" (UniqueName: \"kubernetes.io/projected/f7a66d06-fda4-4801-8a7e-24acf64224ac-kube-api-access-56cj9\") pod \"watcher-api-0\" (UID: \"f7a66d06-fda4-4801-8a7e-24acf64224ac\") " pod="openstack/watcher-api-0" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.815814 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7a66d06-fda4-4801-8a7e-24acf64224ac-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"f7a66d06-fda4-4801-8a7e-24acf64224ac\") " pod="openstack/watcher-api-0" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.815895 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7a66d06-fda4-4801-8a7e-24acf64224ac-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"f7a66d06-fda4-4801-8a7e-24acf64224ac\") " pod="openstack/watcher-api-0" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.815944 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7a66d06-fda4-4801-8a7e-24acf64224ac-public-tls-certs\") pod \"watcher-api-0\" (UID: \"f7a66d06-fda4-4801-8a7e-24acf64224ac\") " pod="openstack/watcher-api-0" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.816143 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f7a66d06-fda4-4801-8a7e-24acf64224ac-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"f7a66d06-fda4-4801-8a7e-24acf64224ac\") " pod="openstack/watcher-api-0" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.816365 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7a66d06-fda4-4801-8a7e-24acf64224ac-logs\") pod \"watcher-api-0\" (UID: \"f7a66d06-fda4-4801-8a7e-24acf64224ac\") " pod="openstack/watcher-api-0" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.816594 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7a66d06-fda4-4801-8a7e-24acf64224ac-config-data\") pod \"watcher-api-0\" (UID: \"f7a66d06-fda4-4801-8a7e-24acf64224ac\") " pod="openstack/watcher-api-0" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.918339 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56cj9\" (UniqueName: \"kubernetes.io/projected/f7a66d06-fda4-4801-8a7e-24acf64224ac-kube-api-access-56cj9\") pod \"watcher-api-0\" (UID: \"f7a66d06-fda4-4801-8a7e-24acf64224ac\") " pod="openstack/watcher-api-0" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.918472 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7a66d06-fda4-4801-8a7e-24acf64224ac-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"f7a66d06-fda4-4801-8a7e-24acf64224ac\") " pod="openstack/watcher-api-0" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.918519 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7a66d06-fda4-4801-8a7e-24acf64224ac-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"f7a66d06-fda4-4801-8a7e-24acf64224ac\") " pod="openstack/watcher-api-0" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.918547 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f7a66d06-fda4-4801-8a7e-24acf64224ac-public-tls-certs\") pod \"watcher-api-0\" (UID: \"f7a66d06-fda4-4801-8a7e-24acf64224ac\") " pod="openstack/watcher-api-0" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.918586 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f7a66d06-fda4-4801-8a7e-24acf64224ac-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"f7a66d06-fda4-4801-8a7e-24acf64224ac\") " pod="openstack/watcher-api-0" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.918633 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7a66d06-fda4-4801-8a7e-24acf64224ac-logs\") pod \"watcher-api-0\" (UID: \"f7a66d06-fda4-4801-8a7e-24acf64224ac\") " pod="openstack/watcher-api-0" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.918718 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7a66d06-fda4-4801-8a7e-24acf64224ac-config-data\") pod \"watcher-api-0\" (UID: \"f7a66d06-fda4-4801-8a7e-24acf64224ac\") " pod="openstack/watcher-api-0" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.920163 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7a66d06-fda4-4801-8a7e-24acf64224ac-logs\") pod \"watcher-api-0\" (UID: \"f7a66d06-fda4-4801-8a7e-24acf64224ac\") " pod="openstack/watcher-api-0" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.925611 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7a66d06-fda4-4801-8a7e-24acf64224ac-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"f7a66d06-fda4-4801-8a7e-24acf64224ac\") " pod="openstack/watcher-api-0" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.935304 4897 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f7a66d06-fda4-4801-8a7e-24acf64224ac-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"f7a66d06-fda4-4801-8a7e-24acf64224ac\") " pod="openstack/watcher-api-0" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.935799 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7a66d06-fda4-4801-8a7e-24acf64224ac-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"f7a66d06-fda4-4801-8a7e-24acf64224ac\") " pod="openstack/watcher-api-0" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.942897 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56cj9\" (UniqueName: \"kubernetes.io/projected/f7a66d06-fda4-4801-8a7e-24acf64224ac-kube-api-access-56cj9\") pod \"watcher-api-0\" (UID: \"f7a66d06-fda4-4801-8a7e-24acf64224ac\") " pod="openstack/watcher-api-0" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.947461 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7a66d06-fda4-4801-8a7e-24acf64224ac-public-tls-certs\") pod \"watcher-api-0\" (UID: \"f7a66d06-fda4-4801-8a7e-24acf64224ac\") " pod="openstack/watcher-api-0" Feb 28 13:45:32 crc kubenswrapper[4897]: I0228 13:45:32.947852 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7a66d06-fda4-4801-8a7e-24acf64224ac-config-data\") pod \"watcher-api-0\" (UID: \"f7a66d06-fda4-4801-8a7e-24acf64224ac\") " pod="openstack/watcher-api-0" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.022043 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.160924 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.166550 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.327620 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-tls-assets\") pod \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\" (UID: \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\") " Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.327871 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2910518a-9b98-499b-a132-954899d270c0-openstack-config-secret\") pod \"2910518a-9b98-499b-a132-954899d270c0\" (UID: \"2910518a-9b98-499b-a132-954899d270c0\") " Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.327927 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-thanos-prometheus-http-client-file\") pod \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\" (UID: \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\") " Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.327965 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-prometheus-metric-storage-rulefiles-0\") pod \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\" (UID: \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\") " Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.328018 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/2910518a-9b98-499b-a132-954899d270c0-openstack-config\") pod \"2910518a-9b98-499b-a132-954899d270c0\" (UID: \"2910518a-9b98-499b-a132-954899d270c0\") " Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.328085 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-config-out\") pod \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\" (UID: \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\") " Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.328114 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zd84\" (UniqueName: \"kubernetes.io/projected/2910518a-9b98-499b-a132-954899d270c0-kube-api-access-7zd84\") pod \"2910518a-9b98-499b-a132-954899d270c0\" (UID: \"2910518a-9b98-499b-a132-954899d270c0\") " Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.328133 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-prometheus-metric-storage-rulefiles-2\") pod \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\" (UID: \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\") " Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.328989 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9\") pod \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\" (UID: \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\") " Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.329019 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-prometheus-metric-storage-rulefiles-1\") pod 
\"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\" (UID: \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\") " Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.329084 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2910518a-9b98-499b-a132-954899d270c0-combined-ca-bundle\") pod \"2910518a-9b98-499b-a132-954899d270c0\" (UID: \"2910518a-9b98-499b-a132-954899d270c0\") " Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.329187 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fr856\" (UniqueName: \"kubernetes.io/projected/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-kube-api-access-fr856\") pod \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\" (UID: \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\") " Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.329209 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-config\") pod \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\" (UID: \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\") " Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.329229 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-web-config\") pod \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\" (UID: \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\") " Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.332738 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" (UID: "7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6"). InnerVolumeSpecName "tls-assets". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.333467 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" (UID: "7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.335917 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" (UID: "7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.341406 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" (UID: "7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.342949 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-config" (OuterVolumeSpecName: "config") pod "7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" (UID: "7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.343496 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-kube-api-access-fr856" (OuterVolumeSpecName: "kube-api-access-fr856") pod "7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" (UID: "7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6"). InnerVolumeSpecName "kube-api-access-fr856". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.345572 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2910518a-9b98-499b-a132-954899d270c0-kube-api-access-7zd84" (OuterVolumeSpecName: "kube-api-access-7zd84") pod "2910518a-9b98-499b-a132-954899d270c0" (UID: "2910518a-9b98-499b-a132-954899d270c0"). InnerVolumeSpecName "kube-api-access-7zd84". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.349284 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" (UID: "7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.355087 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-config-out" (OuterVolumeSpecName: "config-out") pod "7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" (UID: "7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:45:33 crc kubenswrapper[4897]: E0228 13:45:33.365325 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9 podName:7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6 nodeName:}" failed. No retries permitted until 2026-02-28 13:45:33.865283123 +0000 UTC m=+1748.107603780 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "prometheus-metric-storage-db" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9") pod "7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" (UID: "7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Unknown desc = check target path: could not get consistent content of /proc/mounts after 3 attempts Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.370385 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-web-config" (OuterVolumeSpecName: "web-config") pod "7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" (UID: "7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.392661 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2910518a-9b98-499b-a132-954899d270c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2910518a-9b98-499b-a132-954899d270c0" (UID: "2910518a-9b98-499b-a132-954899d270c0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.393173 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2910518a-9b98-499b-a132-954899d270c0-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "2910518a-9b98-499b-a132-954899d270c0" (UID: "2910518a-9b98-499b-a132-954899d270c0"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.402191 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2910518a-9b98-499b-a132-954899d270c0-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "2910518a-9b98-499b-a132-954899d270c0" (UID: "2910518a-9b98-499b-a132-954899d270c0"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.434207 4897 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-config-out\") on node \"crc\" DevicePath \"\"" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.434235 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zd84\" (UniqueName: \"kubernetes.io/projected/2910518a-9b98-499b-a132-954899d270c0-kube-api-access-7zd84\") on node \"crc\" DevicePath \"\"" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.434246 4897 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.434255 4897 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.434273 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2910518a-9b98-499b-a132-954899d270c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.434281 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fr856\" (UniqueName: \"kubernetes.io/projected/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-kube-api-access-fr856\") on node \"crc\" DevicePath \"\"" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.434290 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.434298 4897 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-web-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.434318 4897 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.434326 4897 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2910518a-9b98-499b-a132-954899d270c0-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.434335 4897 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.434343 4897 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.434351 4897 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2910518a-9b98-499b-a132-954899d270c0-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:45:33 crc kubenswrapper[4897]: E0228 13:45:33.458115 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.510065 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.577043 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.622908 4897 generic.go:334] "Generic (PLEG): container finished" podID="f645316a-2073-4db9-8ff9-a0af2afc7104" containerID="38869235cd57b69ff6ba0b041916342de0e6443d96b2341a79ac08dd2c453fcc" exitCode=0 Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.622977 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"f645316a-2073-4db9-8ff9-a0af2afc7104","Type":"ContainerDied","Data":"38869235cd57b69ff6ba0b041916342de0e6443d96b2341a79ac08dd2c453fcc"} Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.623006 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"f645316a-2073-4db9-8ff9-a0af2afc7104","Type":"ContainerDied","Data":"f2715d0948bf55e56abfae968054d9b0a3d5f30af0b9cea5e87d0a6011fd7863"} Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.623023 4897 scope.go:117] "RemoveContainer" containerID="38869235cd57b69ff6ba0b041916342de0e6443d96b2341a79ac08dd2c453fcc" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.622980 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.630625 4897 generic.go:334] "Generic (PLEG): container finished" podID="2910518a-9b98-499b-a132-954899d270c0" containerID="3e4a765b116862f4628ed18d1048b86186dfdf89b6120f1d6a290f01bc622a38" exitCode=137 Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.630910 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.636884 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6","Type":"ContainerDied","Data":"17a241890ba78ab776ed7e52d9645bfb5c1ca9256d62946e6a83b55633398a72"} Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.636970 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.638415 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"f7a66d06-fda4-4801-8a7e-24acf64224ac","Type":"ContainerStarted","Data":"dd959008664dbd943be3f13507fe70c12946000143f9daf1d72dc4bf7aeecddd"} Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.654846 4897 scope.go:117] "RemoveContainer" containerID="38869235cd57b69ff6ba0b041916342de0e6443d96b2341a79ac08dd2c453fcc" Feb 28 13:45:33 crc kubenswrapper[4897]: E0228 13:45:33.657021 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38869235cd57b69ff6ba0b041916342de0e6443d96b2341a79ac08dd2c453fcc\": container with ID starting with 38869235cd57b69ff6ba0b041916342de0e6443d96b2341a79ac08dd2c453fcc not found: ID does not exist" containerID="38869235cd57b69ff6ba0b041916342de0e6443d96b2341a79ac08dd2c453fcc" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.657079 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38869235cd57b69ff6ba0b041916342de0e6443d96b2341a79ac08dd2c453fcc"} err="failed to get container status \"38869235cd57b69ff6ba0b041916342de0e6443d96b2341a79ac08dd2c453fcc\": rpc error: code = NotFound desc = could not find container \"38869235cd57b69ff6ba0b041916342de0e6443d96b2341a79ac08dd2c453fcc\": container with ID starting with 
38869235cd57b69ff6ba0b041916342de0e6443d96b2341a79ac08dd2c453fcc not found: ID does not exist" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.657113 4897 scope.go:117] "RemoveContainer" containerID="3e4a765b116862f4628ed18d1048b86186dfdf89b6120f1d6a290f01bc622a38" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.657292 4897 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="2910518a-9b98-499b-a132-954899d270c0" podUID="768007b3-82d1-4b63-b96f-4d8797b46acc" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.679657 4897 scope.go:117] "RemoveContainer" containerID="3e4a765b116862f4628ed18d1048b86186dfdf89b6120f1d6a290f01bc622a38" Feb 28 13:45:33 crc kubenswrapper[4897]: E0228 13:45:33.680129 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e4a765b116862f4628ed18d1048b86186dfdf89b6120f1d6a290f01bc622a38\": container with ID starting with 3e4a765b116862f4628ed18d1048b86186dfdf89b6120f1d6a290f01bc622a38 not found: ID does not exist" containerID="3e4a765b116862f4628ed18d1048b86186dfdf89b6120f1d6a290f01bc622a38" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.680175 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e4a765b116862f4628ed18d1048b86186dfdf89b6120f1d6a290f01bc622a38"} err="failed to get container status \"3e4a765b116862f4628ed18d1048b86186dfdf89b6120f1d6a290f01bc622a38\": rpc error: code = NotFound desc = could not find container \"3e4a765b116862f4628ed18d1048b86186dfdf89b6120f1d6a290f01bc622a38\": container with ID starting with 3e4a765b116862f4628ed18d1048b86186dfdf89b6120f1d6a290f01bc622a38 not found: ID does not exist" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.680204 4897 scope.go:117] "RemoveContainer" containerID="d3da4a47439bff179f18914d82f12cbf80e94acd2abdda29adfbe106b9a1bf02" Feb 28 13:45:33 crc 
kubenswrapper[4897]: I0228 13:45:33.701562 4897 scope.go:117] "RemoveContainer" containerID="a84dbcf65ebf45a8e0a4cbb472d0d5147e3deb6bb67a494f9bf8476492e208d2" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.719614 4897 scope.go:117] "RemoveContainer" containerID="b6cdc38b6b85a1ccc08dddd754259c21e0a6f7f4b71c260e1be20477686e93e8" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.737021 4897 scope.go:117] "RemoveContainer" containerID="c79d8a8e4035f7ad944a9279ea16c6a346d115234f441fbc2b9c154734a097d5" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.740669 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f645316a-2073-4db9-8ff9-a0af2afc7104-config-data\") pod \"f645316a-2073-4db9-8ff9-a0af2afc7104\" (UID: \"f645316a-2073-4db9-8ff9-a0af2afc7104\") " Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.740755 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f645316a-2073-4db9-8ff9-a0af2afc7104-logs\") pod \"f645316a-2073-4db9-8ff9-a0af2afc7104\" (UID: \"f645316a-2073-4db9-8ff9-a0af2afc7104\") " Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.740798 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f645316a-2073-4db9-8ff9-a0af2afc7104-combined-ca-bundle\") pod \"f645316a-2073-4db9-8ff9-a0af2afc7104\" (UID: \"f645316a-2073-4db9-8ff9-a0af2afc7104\") " Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.740965 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87t6m\" (UniqueName: \"kubernetes.io/projected/f645316a-2073-4db9-8ff9-a0af2afc7104-kube-api-access-87t6m\") pod \"f645316a-2073-4db9-8ff9-a0af2afc7104\" (UID: \"f645316a-2073-4db9-8ff9-a0af2afc7104\") " Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.741338 4897 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f645316a-2073-4db9-8ff9-a0af2afc7104-logs" (OuterVolumeSpecName: "logs") pod "f645316a-2073-4db9-8ff9-a0af2afc7104" (UID: "f645316a-2073-4db9-8ff9-a0af2afc7104"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.741720 4897 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f645316a-2073-4db9-8ff9-a0af2afc7104-logs\") on node \"crc\" DevicePath \"\"" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.746064 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f645316a-2073-4db9-8ff9-a0af2afc7104-kube-api-access-87t6m" (OuterVolumeSpecName: "kube-api-access-87t6m") pod "f645316a-2073-4db9-8ff9-a0af2afc7104" (UID: "f645316a-2073-4db9-8ff9-a0af2afc7104"). InnerVolumeSpecName "kube-api-access-87t6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.767070 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f645316a-2073-4db9-8ff9-a0af2afc7104-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f645316a-2073-4db9-8ff9-a0af2afc7104" (UID: "f645316a-2073-4db9-8ff9-a0af2afc7104"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.806306 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f645316a-2073-4db9-8ff9-a0af2afc7104-config-data" (OuterVolumeSpecName: "config-data") pod "f645316a-2073-4db9-8ff9-a0af2afc7104" (UID: "f645316a-2073-4db9-8ff9-a0af2afc7104"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.844243 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f645316a-2073-4db9-8ff9-a0af2afc7104-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.844292 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87t6m\" (UniqueName: \"kubernetes.io/projected/f645316a-2073-4db9-8ff9-a0af2afc7104-kube-api-access-87t6m\") on node \"crc\" DevicePath \"\"" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.844323 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f645316a-2073-4db9-8ff9-a0af2afc7104-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.946007 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9\") pod \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\" (UID: \"7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6\") " Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.962654 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.973183 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.979977 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" (UID: "7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6"). 
InnerVolumeSpecName "pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.992522 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 28 13:45:33 crc kubenswrapper[4897]: E0228 13:45:33.993048 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f645316a-2073-4db9-8ff9-a0af2afc7104" containerName="watcher-decision-engine" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.993071 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f645316a-2073-4db9-8ff9-a0af2afc7104" containerName="watcher-decision-engine" Feb 28 13:45:33 crc kubenswrapper[4897]: E0228 13:45:33.993103 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" containerName="init-config-reloader" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.993112 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" containerName="init-config-reloader" Feb 28 13:45:33 crc kubenswrapper[4897]: E0228 13:45:33.993128 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" containerName="thanos-sidecar" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.993135 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" containerName="thanos-sidecar" Feb 28 13:45:33 crc kubenswrapper[4897]: E0228 13:45:33.993158 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" containerName="config-reloader" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.993165 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" containerName="config-reloader" Feb 28 13:45:33 crc kubenswrapper[4897]: E0228 13:45:33.993176 4897 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" containerName="prometheus" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.993183 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" containerName="prometheus" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.993429 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" containerName="thanos-sidecar" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.993459 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f645316a-2073-4db9-8ff9-a0af2afc7104" containerName="watcher-decision-engine" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.993475 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" containerName="config-reloader" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.993485 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" containerName="prometheus" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.994373 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 28 13:45:33 crc kubenswrapper[4897]: I0228 13:45:33.996774 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.006281 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.048301 4897 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9\") on node \"crc\" " Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.082384 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.083641 4897 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.083810 4897 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9") on node "crc" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.094779 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.111849 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.115043 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.117797 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.118193 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.118429 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.118742 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.118919 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.119111 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.120407 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-6zn4s" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.141154 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.142141 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.151582 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thmrh\" (UniqueName: 
\"kubernetes.io/projected/f31b98f7-e894-4ba1-99d0-c9f4dfe066a9-kube-api-access-thmrh\") pod \"watcher-decision-engine-0\" (UID: \"f31b98f7-e894-4ba1-99d0-c9f4dfe066a9\") " pod="openstack/watcher-decision-engine-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.151892 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f31b98f7-e894-4ba1-99d0-c9f4dfe066a9-logs\") pod \"watcher-decision-engine-0\" (UID: \"f31b98f7-e894-4ba1-99d0-c9f4dfe066a9\") " pod="openstack/watcher-decision-engine-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.151982 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f31b98f7-e894-4ba1-99d0-c9f4dfe066a9-config-data\") pod \"watcher-decision-engine-0\" (UID: \"f31b98f7-e894-4ba1-99d0-c9f4dfe066a9\") " pod="openstack/watcher-decision-engine-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.152046 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f31b98f7-e894-4ba1-99d0-c9f4dfe066a9-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"f31b98f7-e894-4ba1-99d0-c9f4dfe066a9\") " pod="openstack/watcher-decision-engine-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.152108 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f31b98f7-e894-4ba1-99d0-c9f4dfe066a9-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"f31b98f7-e894-4ba1-99d0-c9f4dfe066a9\") " pod="openstack/watcher-decision-engine-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.152197 4897 reconciler_common.go:293] "Volume detached for volume \"pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9\") on node \"crc\" DevicePath \"\"" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.253642 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.254008 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.254059 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9\") pod \"prometheus-metric-storage-0\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.254089 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.254141 4897 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f31b98f7-e894-4ba1-99d0-c9f4dfe066a9-logs\") pod \"watcher-decision-engine-0\" (UID: \"f31b98f7-e894-4ba1-99d0-c9f4dfe066a9\") " pod="openstack/watcher-decision-engine-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.254197 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.254231 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-config\") pod \"prometheus-metric-storage-0\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.254257 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f31b98f7-e894-4ba1-99d0-c9f4dfe066a9-config-data\") pod \"watcher-decision-engine-0\" (UID: \"f31b98f7-e894-4ba1-99d0-c9f4dfe066a9\") " pod="openstack/watcher-decision-engine-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.254296 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.254343 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.254368 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f31b98f7-e894-4ba1-99d0-c9f4dfe066a9-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"f31b98f7-e894-4ba1-99d0-c9f4dfe066a9\") " pod="openstack/watcher-decision-engine-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.254417 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f31b98f7-e894-4ba1-99d0-c9f4dfe066a9-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"f31b98f7-e894-4ba1-99d0-c9f4dfe066a9\") " pod="openstack/watcher-decision-engine-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.254447 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.254471 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thmrh\" (UniqueName: \"kubernetes.io/projected/f31b98f7-e894-4ba1-99d0-c9f4dfe066a9-kube-api-access-thmrh\") pod \"watcher-decision-engine-0\" (UID: \"f31b98f7-e894-4ba1-99d0-c9f4dfe066a9\") " pod="openstack/watcher-decision-engine-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.254498 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgxrr\" (UniqueName: \"kubernetes.io/projected/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-kube-api-access-kgxrr\") pod \"prometheus-metric-storage-0\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.254527 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.254565 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.254610 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.255474 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f31b98f7-e894-4ba1-99d0-c9f4dfe066a9-logs\") pod \"watcher-decision-engine-0\" (UID: 
\"f31b98f7-e894-4ba1-99d0-c9f4dfe066a9\") " pod="openstack/watcher-decision-engine-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.261104 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f31b98f7-e894-4ba1-99d0-c9f4dfe066a9-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"f31b98f7-e894-4ba1-99d0-c9f4dfe066a9\") " pod="openstack/watcher-decision-engine-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.263944 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f31b98f7-e894-4ba1-99d0-c9f4dfe066a9-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"f31b98f7-e894-4ba1-99d0-c9f4dfe066a9\") " pod="openstack/watcher-decision-engine-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.263964 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f31b98f7-e894-4ba1-99d0-c9f4dfe066a9-config-data\") pod \"watcher-decision-engine-0\" (UID: \"f31b98f7-e894-4ba1-99d0-c9f4dfe066a9\") " pod="openstack/watcher-decision-engine-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.284933 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thmrh\" (UniqueName: \"kubernetes.io/projected/f31b98f7-e894-4ba1-99d0-c9f4dfe066a9-kube-api-access-thmrh\") pod \"watcher-decision-engine-0\" (UID: \"f31b98f7-e894-4ba1-99d0-c9f4dfe066a9\") " pod="openstack/watcher-decision-engine-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.343731 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.356267 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.356336 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.356375 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9\") pod \"prometheus-metric-storage-0\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.356394 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.356444 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.356465 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-config\") pod \"prometheus-metric-storage-0\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.356499 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.356517 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.356549 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.356568 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgxrr\" (UniqueName: 
\"kubernetes.io/projected/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-kube-api-access-kgxrr\") pod \"prometheus-metric-storage-0\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.356589 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.356615 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.356642 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.357115 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:34 crc 
kubenswrapper[4897]: I0228 13:45:34.358402 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.361678 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.361897 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.361939 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.362200 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:34 crc 
kubenswrapper[4897]: I0228 13:45:34.364120 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.367454 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.368290 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.368535 4897 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.368577 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9\") pod \"prometheus-metric-storage-0\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/de9fbdfeb629ec9e72fb17ffcc3a651e10bfb0662587d0069f50b747406f5447/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.370537 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-config\") pod \"prometheus-metric-storage-0\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.370591 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.380086 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgxrr\" (UniqueName: \"kubernetes.io/projected/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-kube-api-access-kgxrr\") pod \"prometheus-metric-storage-0\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.427602 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9\") pod \"prometheus-metric-storage-0\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.451820 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.458152 4897 scope.go:117] "RemoveContainer" containerID="aac243b40f8cd522c66dee8379dbaaf158caf96690e017e4b821fa9af1817d1c" Feb 28 13:45:34 crc kubenswrapper[4897]: E0228 13:45:34.458403 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.466679 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2910518a-9b98-499b-a132-954899d270c0" path="/var/lib/kubelet/pods/2910518a-9b98-499b-a132-954899d270c0/volumes" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.467351 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="759c2685-a508-4824-9e22-1c18ca2e75ca" path="/var/lib/kubelet/pods/759c2685-a508-4824-9e22-1c18ca2e75ca/volumes" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.467992 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6" path="/var/lib/kubelet/pods/7ed1e6c8-c823-4fd1-ab0d-5460b6024cd6/volumes" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.469157 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f645316a-2073-4db9-8ff9-a0af2afc7104" path="/var/lib/kubelet/pods/f645316a-2073-4db9-8ff9-a0af2afc7104/volumes" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.652537 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"f7a66d06-fda4-4801-8a7e-24acf64224ac","Type":"ContainerStarted","Data":"200c4bd68a6e48be47fa7b845a8448bafb5d41eb9af47cea4fde71d91e8365d3"} Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.652922 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"f7a66d06-fda4-4801-8a7e-24acf64224ac","Type":"ContainerStarted","Data":"d891ea6a843271e45a22f4f9603541ed038e8161e02ea9767a39616b74fcff82"} Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.653046 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.685233 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=2.685215791 podStartE2EDuration="2.685215791s" podCreationTimestamp="2026-02-28 13:45:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:45:34.675578264 +0000 UTC m=+1748.917898921" watchObservedRunningTime="2026-02-28 13:45:34.685215791 +0000 UTC m=+1748.927536448" Feb 28 13:45:34 crc kubenswrapper[4897]: W0228 13:45:34.806660 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf31b98f7_e894_4ba1_99d0_c9f4dfe066a9.slice/crio-69c5585ef0cdbf879b21521a4542c89e6c5082c8dbc4a2f3422f3ac9c7fd4582 WatchSource:0}: Error finding container 69c5585ef0cdbf879b21521a4542c89e6c5082c8dbc4a2f3422f3ac9c7fd4582: Status 404 returned error can't find the container with id 69c5585ef0cdbf879b21521a4542c89e6c5082c8dbc4a2f3422f3ac9c7fd4582 Feb 28 13:45:34 crc 
kubenswrapper[4897]: I0228 13:45:34.809150 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 28 13:45:34 crc kubenswrapper[4897]: I0228 13:45:34.920095 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 28 13:45:34 crc kubenswrapper[4897]: W0228 13:45:34.935179 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded5ef2f7_8287_429b_ba57_6ade31e8e43c.slice/crio-67367fc3a6f47595fa668891a2b8427022e55b26c109b9f3e639defeda2919f6 WatchSource:0}: Error finding container 67367fc3a6f47595fa668891a2b8427022e55b26c109b9f3e639defeda2919f6: Status 404 returned error can't find the container with id 67367fc3a6f47595fa668891a2b8427022e55b26c109b9f3e639defeda2919f6 Feb 28 13:45:35 crc kubenswrapper[4897]: I0228 13:45:35.669129 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ed5ef2f7-8287-429b-ba57-6ade31e8e43c","Type":"ContainerStarted","Data":"67367fc3a6f47595fa668891a2b8427022e55b26c109b9f3e639defeda2919f6"} Feb 28 13:45:35 crc kubenswrapper[4897]: I0228 13:45:35.674449 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"f31b98f7-e894-4ba1-99d0-c9f4dfe066a9","Type":"ContainerStarted","Data":"3fa6b327b807f7b4ade170f4fd320339da68d5c7ceb23d947d103203f7d65e35"} Feb 28 13:45:35 crc kubenswrapper[4897]: I0228 13:45:35.674515 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"f31b98f7-e894-4ba1-99d0-c9f4dfe066a9","Type":"ContainerStarted","Data":"69c5585ef0cdbf879b21521a4542c89e6c5082c8dbc4a2f3422f3ac9c7fd4582"} Feb 28 13:45:35 crc kubenswrapper[4897]: I0228 13:45:35.708637 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.7086127380000002 
podStartE2EDuration="2.708612738s" podCreationTimestamp="2026-02-28 13:45:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:45:35.697372694 +0000 UTC m=+1749.939693411" watchObservedRunningTime="2026-02-28 13:45:35.708612738 +0000 UTC m=+1749.950933395" Feb 28 13:45:36 crc kubenswrapper[4897]: I0228 13:45:36.877039 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 28 13:45:37 crc kubenswrapper[4897]: E0228 13:45:37.459760 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" Feb 28 13:45:38 crc kubenswrapper[4897]: I0228 13:45:38.023207 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 28 13:45:38 crc kubenswrapper[4897]: I0228 13:45:38.708573 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ed5ef2f7-8287-429b-ba57-6ade31e8e43c","Type":"ContainerStarted","Data":"724d8fc7909066e1ff776c2976f3d28052bbf11cfb5bf216d973351d020fc133"} Feb 28 13:45:42 crc kubenswrapper[4897]: I0228 13:45:42.620703 4897 scope.go:117] "RemoveContainer" containerID="d1f12039299e7cb97da8945c9666147f1e47e6cd32c2dedad659145cbb5b669a" Feb 28 13:45:42 crc kubenswrapper[4897]: I0228 13:45:42.691830 4897 scope.go:117] "RemoveContainer" containerID="5d834faf0a2a964bd1a733d6a451c5f5cf501ca1580df462e49e368f85e84643" Feb 28 13:45:42 crc kubenswrapper[4897]: I0228 13:45:42.729480 4897 scope.go:117] "RemoveContainer" containerID="f64c40bb8bb0bead2f13fc078c8aa3c8558c96702c83bab99d1dcf0f10f6f277" Feb 28 13:45:42 crc kubenswrapper[4897]: I0228 13:45:42.758261 4897 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-infra/auto-csr-approver-29538100-v4j6s" event={"ID":"b6642318-7bfd-49f2-86e3-0fe4a7ec2709","Type":"ContainerStarted","Data":"24d994ad890b0ea5394ef01045d5ac591c2f7a280794ec71bbf87a48590534a5"} Feb 28 13:45:42 crc kubenswrapper[4897]: I0228 13:45:42.786413 4897 scope.go:117] "RemoveContainer" containerID="7402ed8cd9f105a778afcec0a107c1f88b1868bc72a3fd1276403b2e93f5e10a" Feb 28 13:45:42 crc kubenswrapper[4897]: I0228 13:45:42.787565 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29538100-v4j6s" podStartSLOduration=1.526534775 podStartE2EDuration="5m42.78754713s" podCreationTimestamp="2026-02-28 13:40:00 +0000 UTC" firstStartedPulling="2026-02-28 13:40:01.046155517 +0000 UTC m=+1415.288476194" lastFinishedPulling="2026-02-28 13:45:42.307167882 +0000 UTC m=+1756.549488549" observedRunningTime="2026-02-28 13:45:42.782754982 +0000 UTC m=+1757.025075649" watchObservedRunningTime="2026-02-28 13:45:42.78754713 +0000 UTC m=+1757.029867787" Feb 28 13:45:42 crc kubenswrapper[4897]: I0228 13:45:42.807391 4897 scope.go:117] "RemoveContainer" containerID="02ca35ac78e6dbd22eeea8d41400267b89528481e692758ebd8312a4bfc76e9e" Feb 28 13:45:43 crc kubenswrapper[4897]: I0228 13:45:43.022815 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Feb 28 13:45:43 crc kubenswrapper[4897]: I0228 13:45:43.031511 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Feb 28 13:45:43 crc kubenswrapper[4897]: E0228 13:45:43.459398 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bjhzq" podUID="14af1d5e-f67c-4675-afb4-4aff4b78237c" Feb 28 13:45:43 crc kubenswrapper[4897]: I0228 13:45:43.782124 
4897 generic.go:334] "Generic (PLEG): container finished" podID="b6642318-7bfd-49f2-86e3-0fe4a7ec2709" containerID="24d994ad890b0ea5394ef01045d5ac591c2f7a280794ec71bbf87a48590534a5" exitCode=0 Feb 28 13:45:43 crc kubenswrapper[4897]: I0228 13:45:43.782187 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538100-v4j6s" event={"ID":"b6642318-7bfd-49f2-86e3-0fe4a7ec2709","Type":"ContainerDied","Data":"24d994ad890b0ea5394ef01045d5ac591c2f7a280794ec71bbf87a48590534a5"} Feb 28 13:45:43 crc kubenswrapper[4897]: I0228 13:45:43.796473 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 28 13:45:44 crc kubenswrapper[4897]: I0228 13:45:44.344633 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 28 13:45:44 crc kubenswrapper[4897]: I0228 13:45:44.373128 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 28 13:45:44 crc kubenswrapper[4897]: E0228 13:45:44.457674 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:45:44 crc kubenswrapper[4897]: I0228 13:45:44.794100 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 28 13:45:44 crc kubenswrapper[4897]: I0228 13:45:44.841788 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 28 13:45:45 crc kubenswrapper[4897]: I0228 13:45:45.153301 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538100-v4j6s" Feb 28 13:45:45 crc kubenswrapper[4897]: I0228 13:45:45.200356 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpqzd\" (UniqueName: \"kubernetes.io/projected/b6642318-7bfd-49f2-86e3-0fe4a7ec2709-kube-api-access-zpqzd\") pod \"b6642318-7bfd-49f2-86e3-0fe4a7ec2709\" (UID: \"b6642318-7bfd-49f2-86e3-0fe4a7ec2709\") " Feb 28 13:45:45 crc kubenswrapper[4897]: I0228 13:45:45.207589 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6642318-7bfd-49f2-86e3-0fe4a7ec2709-kube-api-access-zpqzd" (OuterVolumeSpecName: "kube-api-access-zpqzd") pod "b6642318-7bfd-49f2-86e3-0fe4a7ec2709" (UID: "b6642318-7bfd-49f2-86e3-0fe4a7ec2709"). InnerVolumeSpecName "kube-api-access-zpqzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:45:45 crc kubenswrapper[4897]: I0228 13:45:45.302898 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpqzd\" (UniqueName: \"kubernetes.io/projected/b6642318-7bfd-49f2-86e3-0fe4a7ec2709-kube-api-access-zpqzd\") on node \"crc\" DevicePath \"\"" Feb 28 13:45:45 crc kubenswrapper[4897]: I0228 13:45:45.457380 4897 scope.go:117] "RemoveContainer" containerID="aac243b40f8cd522c66dee8379dbaaf158caf96690e017e4b821fa9af1817d1c" Feb 28 13:45:45 crc kubenswrapper[4897]: E0228 13:45:45.457715 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:45:45 crc kubenswrapper[4897]: I0228 13:45:45.811654 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538100-v4j6s" Feb 28 13:45:45 crc kubenswrapper[4897]: I0228 13:45:45.811643 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538100-v4j6s" event={"ID":"b6642318-7bfd-49f2-86e3-0fe4a7ec2709","Type":"ContainerDied","Data":"e5a370cf2ed739f1193a2330cb570b34e39933c9313b910dffe71d078a1a324e"} Feb 28 13:45:45 crc kubenswrapper[4897]: I0228 13:45:45.811832 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5a370cf2ed739f1193a2330cb570b34e39933c9313b910dffe71d078a1a324e" Feb 28 13:45:45 crc kubenswrapper[4897]: I0228 13:45:45.881893 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538098-9rtv9"] Feb 28 13:45:45 crc kubenswrapper[4897]: I0228 13:45:45.896098 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538098-9rtv9"] Feb 28 13:45:46 crc kubenswrapper[4897]: I0228 13:45:46.470301 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbeef463-3901-42c4-81ed-d97e793fb8b5" path="/var/lib/kubelet/pods/bbeef463-3901-42c4-81ed-d97e793fb8b5/volumes" Feb 28 13:45:46 crc kubenswrapper[4897]: I0228 13:45:46.837146 4897 generic.go:334] "Generic (PLEG): container finished" podID="ed5ef2f7-8287-429b-ba57-6ade31e8e43c" containerID="724d8fc7909066e1ff776c2976f3d28052bbf11cfb5bf216d973351d020fc133" exitCode=0 Feb 28 13:45:46 crc kubenswrapper[4897]: I0228 13:45:46.838756 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ed5ef2f7-8287-429b-ba57-6ade31e8e43c","Type":"ContainerDied","Data":"724d8fc7909066e1ff776c2976f3d28052bbf11cfb5bf216d973351d020fc133"} Feb 28 13:45:47 crc kubenswrapper[4897]: I0228 13:45:47.849912 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"ed5ef2f7-8287-429b-ba57-6ade31e8e43c","Type":"ContainerStarted","Data":"b2dd142e69e8ad7b8118666bbfcad750c2fc36286c5a211b62b1c77b5eade357"} Feb 28 13:45:49 crc kubenswrapper[4897]: E0228 13:45:49.459164 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" Feb 28 13:45:50 crc kubenswrapper[4897]: I0228 13:45:50.884073 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ed5ef2f7-8287-429b-ba57-6ade31e8e43c","Type":"ContainerStarted","Data":"cc477811f2b61ff85c16f720556fb8db4898587dc7e91ea7f258a6eb0e4b4601"} Feb 28 13:45:51 crc kubenswrapper[4897]: I0228 13:45:51.900873 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ed5ef2f7-8287-429b-ba57-6ade31e8e43c","Type":"ContainerStarted","Data":"a365c534d055185eca097a572503a7f4930b332977f59323556a12edf1c1c717"} Feb 28 13:45:51 crc kubenswrapper[4897]: I0228 13:45:51.945790 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=17.945769764 podStartE2EDuration="17.945769764s" podCreationTimestamp="2026-02-28 13:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:45:51.935860809 +0000 UTC m=+1766.178181506" watchObservedRunningTime="2026-02-28 13:45:51.945769764 +0000 UTC m=+1766.188090431" Feb 28 13:45:54 crc kubenswrapper[4897]: I0228 13:45:54.452841 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 28 13:45:56 crc kubenswrapper[4897]: E0228 13:45:56.472985 4897 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bjhzq" podUID="14af1d5e-f67c-4675-afb4-4aff4b78237c" Feb 28 13:45:59 crc kubenswrapper[4897]: E0228 13:45:59.458681 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:46:00 crc kubenswrapper[4897]: I0228 13:46:00.166860 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538106-j6xmh"] Feb 28 13:46:00 crc kubenswrapper[4897]: E0228 13:46:00.167838 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6642318-7bfd-49f2-86e3-0fe4a7ec2709" containerName="oc" Feb 28 13:46:00 crc kubenswrapper[4897]: I0228 13:46:00.167861 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6642318-7bfd-49f2-86e3-0fe4a7ec2709" containerName="oc" Feb 28 13:46:00 crc kubenswrapper[4897]: I0228 13:46:00.168261 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6642318-7bfd-49f2-86e3-0fe4a7ec2709" containerName="oc" Feb 28 13:46:00 crc kubenswrapper[4897]: I0228 13:46:00.169400 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538106-j6xmh" Feb 28 13:46:00 crc kubenswrapper[4897]: I0228 13:46:00.173211 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 13:46:00 crc kubenswrapper[4897]: I0228 13:46:00.173267 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 13:46:00 crc kubenswrapper[4897]: I0228 13:46:00.173284 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 13:46:00 crc kubenswrapper[4897]: I0228 13:46:00.213350 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538106-j6xmh"] Feb 28 13:46:00 crc kubenswrapper[4897]: I0228 13:46:00.315671 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hf85\" (UniqueName: \"kubernetes.io/projected/64df0e32-0a86-4721-bf82-f6629e2268d8-kube-api-access-4hf85\") pod \"auto-csr-approver-29538106-j6xmh\" (UID: \"64df0e32-0a86-4721-bf82-f6629e2268d8\") " pod="openshift-infra/auto-csr-approver-29538106-j6xmh" Feb 28 13:46:00 crc kubenswrapper[4897]: I0228 13:46:00.418273 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hf85\" (UniqueName: \"kubernetes.io/projected/64df0e32-0a86-4721-bf82-f6629e2268d8-kube-api-access-4hf85\") pod \"auto-csr-approver-29538106-j6xmh\" (UID: \"64df0e32-0a86-4721-bf82-f6629e2268d8\") " pod="openshift-infra/auto-csr-approver-29538106-j6xmh" Feb 28 13:46:00 crc kubenswrapper[4897]: I0228 13:46:00.452085 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hf85\" (UniqueName: \"kubernetes.io/projected/64df0e32-0a86-4721-bf82-f6629e2268d8-kube-api-access-4hf85\") pod \"auto-csr-approver-29538106-j6xmh\" (UID: \"64df0e32-0a86-4721-bf82-f6629e2268d8\") " 
pod="openshift-infra/auto-csr-approver-29538106-j6xmh" Feb 28 13:46:00 crc kubenswrapper[4897]: I0228 13:46:00.459104 4897 scope.go:117] "RemoveContainer" containerID="aac243b40f8cd522c66dee8379dbaaf158caf96690e017e4b821fa9af1817d1c" Feb 28 13:46:00 crc kubenswrapper[4897]: E0228 13:46:00.460003 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:46:00 crc kubenswrapper[4897]: I0228 13:46:00.496800 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538106-j6xmh" Feb 28 13:46:00 crc kubenswrapper[4897]: I0228 13:46:00.951870 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538106-j6xmh"] Feb 28 13:46:01 crc kubenswrapper[4897]: I0228 13:46:01.005142 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538106-j6xmh" event={"ID":"64df0e32-0a86-4721-bf82-f6629e2268d8","Type":"ContainerStarted","Data":"d2de6355778ac42909d8e99418ed845ddb2f186137a90a0d9fd62fbbad4756d6"} Feb 28 13:46:01 crc kubenswrapper[4897]: E0228 13:46:01.458184 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" Feb 28 13:46:03 crc kubenswrapper[4897]: I0228 13:46:03.040235 4897 generic.go:334] "Generic (PLEG): container finished" podID="64df0e32-0a86-4721-bf82-f6629e2268d8" 
containerID="45905544f1c45db0d58c7fbe4a464cb80d70c376a540d7ec631109337f1bcd4c" exitCode=0 Feb 28 13:46:03 crc kubenswrapper[4897]: I0228 13:46:03.041507 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538106-j6xmh" event={"ID":"64df0e32-0a86-4721-bf82-f6629e2268d8","Type":"ContainerDied","Data":"45905544f1c45db0d58c7fbe4a464cb80d70c376a540d7ec631109337f1bcd4c"} Feb 28 13:46:04 crc kubenswrapper[4897]: I0228 13:46:04.427340 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538106-j6xmh" Feb 28 13:46:04 crc kubenswrapper[4897]: I0228 13:46:04.452484 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 28 13:46:04 crc kubenswrapper[4897]: I0228 13:46:04.476856 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 28 13:46:04 crc kubenswrapper[4897]: I0228 13:46:04.606089 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hf85\" (UniqueName: \"kubernetes.io/projected/64df0e32-0a86-4721-bf82-f6629e2268d8-kube-api-access-4hf85\") pod \"64df0e32-0a86-4721-bf82-f6629e2268d8\" (UID: \"64df0e32-0a86-4721-bf82-f6629e2268d8\") " Feb 28 13:46:04 crc kubenswrapper[4897]: I0228 13:46:04.616444 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64df0e32-0a86-4721-bf82-f6629e2268d8-kube-api-access-4hf85" (OuterVolumeSpecName: "kube-api-access-4hf85") pod "64df0e32-0a86-4721-bf82-f6629e2268d8" (UID: "64df0e32-0a86-4721-bf82-f6629e2268d8"). InnerVolumeSpecName "kube-api-access-4hf85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:46:04 crc kubenswrapper[4897]: I0228 13:46:04.709283 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hf85\" (UniqueName: \"kubernetes.io/projected/64df0e32-0a86-4721-bf82-f6629e2268d8-kube-api-access-4hf85\") on node \"crc\" DevicePath \"\"" Feb 28 13:46:05 crc kubenswrapper[4897]: I0228 13:46:05.074720 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538106-j6xmh" Feb 28 13:46:05 crc kubenswrapper[4897]: I0228 13:46:05.074707 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538106-j6xmh" event={"ID":"64df0e32-0a86-4721-bf82-f6629e2268d8","Type":"ContainerDied","Data":"d2de6355778ac42909d8e99418ed845ddb2f186137a90a0d9fd62fbbad4756d6"} Feb 28 13:46:05 crc kubenswrapper[4897]: I0228 13:46:05.074834 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2de6355778ac42909d8e99418ed845ddb2f186137a90a0d9fd62fbbad4756d6" Feb 28 13:46:05 crc kubenswrapper[4897]: I0228 13:46:05.083469 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 28 13:46:05 crc kubenswrapper[4897]: I0228 13:46:05.529279 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538100-v4j6s"] Feb 28 13:46:05 crc kubenswrapper[4897]: I0228 13:46:05.538750 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538100-v4j6s"] Feb 28 13:46:06 crc kubenswrapper[4897]: I0228 13:46:06.475804 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6642318-7bfd-49f2-86e3-0fe4a7ec2709" path="/var/lib/kubelet/pods/b6642318-7bfd-49f2-86e3-0fe4a7ec2709/volumes" Feb 28 13:46:08 crc kubenswrapper[4897]: E0228 13:46:08.459569 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bjhzq" podUID="14af1d5e-f67c-4675-afb4-4aff4b78237c" Feb 28 13:46:11 crc kubenswrapper[4897]: I0228 13:46:11.456423 4897 scope.go:117] "RemoveContainer" containerID="aac243b40f8cd522c66dee8379dbaaf158caf96690e017e4b821fa9af1817d1c" Feb 28 13:46:11 crc kubenswrapper[4897]: E0228 13:46:11.457132 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:46:11 crc kubenswrapper[4897]: E0228 13:46:11.458306 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:46:15 crc kubenswrapper[4897]: E0228 13:46:15.459093 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" Feb 28 13:46:20 crc kubenswrapper[4897]: E0228 13:46:20.459038 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-marketplace-bjhzq" podUID="14af1d5e-f67c-4675-afb4-4aff4b78237c" Feb 28 13:46:23 crc kubenswrapper[4897]: I0228 13:46:23.459851 4897 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 13:46:24 crc kubenswrapper[4897]: E0228 13:46:24.101767 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 28 13:46:24 crc kubenswrapper[4897]: E0228 13:46:24.101941 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7wpnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-29kqk_openshift-marketplace(dbe86f80-68e4-4170-8801-cea07c362d5c): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 13:46:24 crc kubenswrapper[4897]: E0228 13:46:24.103360 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:46:24 crc kubenswrapper[4897]: I0228 13:46:24.457145 4897 scope.go:117] "RemoveContainer" containerID="aac243b40f8cd522c66dee8379dbaaf158caf96690e017e4b821fa9af1817d1c" Feb 28 13:46:24 crc kubenswrapper[4897]: E0228 13:46:24.458087 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:46:34 crc kubenswrapper[4897]: E0228 13:46:34.030594 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bjhzq" podUID="14af1d5e-f67c-4675-afb4-4aff4b78237c" Feb 28 13:46:34 crc kubenswrapper[4897]: E0228 13:46:34.058724 4897 log.go:32] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = get image fs info unable to get usage for /var/lib/containers/storage/overlay-images: get disk usage for path /var/lib/containers/storage/overlay-images: lstat /var/lib/containers/storage/overlay-images/.tmp-images.json2257881379: no such file or directory" Feb 28 13:46:34 crc kubenswrapper[4897]: E0228 13:46:34.059155 4897 kubelet.go:1495] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="missing image stats: nil" Feb 28 13:46:35 crc kubenswrapper[4897]: E0228 13:46:35.459514 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:46:35 crc kubenswrapper[4897]: I0228 13:46:35.483575 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"746916d9-ca42-480b-9aa7-7e1fe9803900","Type":"ContainerStarted","Data":"cfb9af0c2ab9b3b28d9fc62d358d84fb31a7b1134a89abc6e50e5d6c45fb3aec"} Feb 28 13:46:35 crc kubenswrapper[4897]: I0228 13:46:35.483958 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 28 13:46:35 crc kubenswrapper[4897]: I0228 13:46:35.549384 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.697376486 podStartE2EDuration="5m47.549357024s" podCreationTimestamp="2026-02-28 13:40:48 +0000 UTC" firstStartedPulling="2026-02-28 13:40:49.265112657 +0000 UTC m=+1463.507433304" lastFinishedPulling="2026-02-28 13:46:34.117093185 +0000 UTC m=+1808.359413842" observedRunningTime="2026-02-28 13:46:35.532848729 +0000 UTC m=+1809.775169446" watchObservedRunningTime="2026-02-28 13:46:35.549357024 +0000 UTC m=+1809.791677721" Feb 28 13:46:39 crc kubenswrapper[4897]: I0228 13:46:39.458197 4897 scope.go:117] "RemoveContainer" containerID="aac243b40f8cd522c66dee8379dbaaf158caf96690e017e4b821fa9af1817d1c" Feb 28 13:46:39 crc kubenswrapper[4897]: E0228 13:46:39.459594 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:46:42 crc kubenswrapper[4897]: I0228 13:46:42.978926 4897 scope.go:117] "RemoveContainer" containerID="5686f151e991ecb04d64a39f69be073524971284a598c4c811cc8cbfaada4cbb" Feb 28 13:46:43 crc kubenswrapper[4897]: I0228 13:46:43.058367 4897 scope.go:117] "RemoveContainer" containerID="c3632e4a3c7ef8eeab10572c630804218648e3d70abb15feafefdbeecc990345" Feb 28 13:46:43 crc kubenswrapper[4897]: I0228 13:46:43.106468 4897 scope.go:117] "RemoveContainer" containerID="ff5a3aa8da48ae602c1e71a518e88d7ef2ec3938afe38f831efe7f7d1dc8a26b" Feb 28 13:46:47 crc kubenswrapper[4897]: E0228 13:46:47.459671 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bjhzq" podUID="14af1d5e-f67c-4675-afb4-4aff4b78237c" Feb 28 13:46:48 crc kubenswrapper[4897]: I0228 13:46:48.774769 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 28 13:46:49 crc kubenswrapper[4897]: E0228 13:46:49.458528 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:46:52 crc kubenswrapper[4897]: I0228 13:46:52.456982 4897 scope.go:117] "RemoveContainer" containerID="aac243b40f8cd522c66dee8379dbaaf158caf96690e017e4b821fa9af1817d1c" Feb 28 13:46:52 crc kubenswrapper[4897]: E0228 13:46:52.457603 4897 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:46:52 crc kubenswrapper[4897]: I0228 13:46:52.706112 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 28 13:46:52 crc kubenswrapper[4897]: I0228 13:46:52.706432 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="631372ff-9a4e-4110-9ff4-aad528049a06" containerName="kube-state-metrics" containerID="cri-o://a482e8816071c9e7694e38f21e32d4856578ffd8df77fed9c36199f5adde8d1b" gracePeriod=30 Feb 28 13:46:52 crc kubenswrapper[4897]: I0228 13:46:52.856810 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="631372ff-9a4e-4110-9ff4-aad528049a06" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": dial tcp 10.217.0.115:8081: connect: connection refused" Feb 28 13:46:53 crc kubenswrapper[4897]: I0228 13:46:53.187378 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 28 13:46:53 crc kubenswrapper[4897]: I0228 13:46:53.368729 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sf7p\" (UniqueName: \"kubernetes.io/projected/631372ff-9a4e-4110-9ff4-aad528049a06-kube-api-access-8sf7p\") pod \"631372ff-9a4e-4110-9ff4-aad528049a06\" (UID: \"631372ff-9a4e-4110-9ff4-aad528049a06\") " Feb 28 13:46:53 crc kubenswrapper[4897]: I0228 13:46:53.391590 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/631372ff-9a4e-4110-9ff4-aad528049a06-kube-api-access-8sf7p" (OuterVolumeSpecName: "kube-api-access-8sf7p") pod "631372ff-9a4e-4110-9ff4-aad528049a06" (UID: "631372ff-9a4e-4110-9ff4-aad528049a06"). InnerVolumeSpecName "kube-api-access-8sf7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:46:53 crc kubenswrapper[4897]: I0228 13:46:53.470919 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sf7p\" (UniqueName: \"kubernetes.io/projected/631372ff-9a4e-4110-9ff4-aad528049a06-kube-api-access-8sf7p\") on node \"crc\" DevicePath \"\"" Feb 28 13:46:53 crc kubenswrapper[4897]: I0228 13:46:53.713619 4897 generic.go:334] "Generic (PLEG): container finished" podID="631372ff-9a4e-4110-9ff4-aad528049a06" containerID="a482e8816071c9e7694e38f21e32d4856578ffd8df77fed9c36199f5adde8d1b" exitCode=2 Feb 28 13:46:53 crc kubenswrapper[4897]: I0228 13:46:53.713659 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"631372ff-9a4e-4110-9ff4-aad528049a06","Type":"ContainerDied","Data":"a482e8816071c9e7694e38f21e32d4856578ffd8df77fed9c36199f5adde8d1b"} Feb 28 13:46:53 crc kubenswrapper[4897]: I0228 13:46:53.713684 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"631372ff-9a4e-4110-9ff4-aad528049a06","Type":"ContainerDied","Data":"5ef0e38f67ae009632a0f5ed5d477fad0500f0431e93d665abf36af40e9a8ca3"} Feb 28 13:46:53 crc kubenswrapper[4897]: I0228 13:46:53.713705 4897 scope.go:117] "RemoveContainer" containerID="a482e8816071c9e7694e38f21e32d4856578ffd8df77fed9c36199f5adde8d1b" Feb 28 13:46:53 crc kubenswrapper[4897]: I0228 13:46:53.713709 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 28 13:46:53 crc kubenswrapper[4897]: I0228 13:46:53.745519 4897 scope.go:117] "RemoveContainer" containerID="a482e8816071c9e7694e38f21e32d4856578ffd8df77fed9c36199f5adde8d1b" Feb 28 13:46:53 crc kubenswrapper[4897]: E0228 13:46:53.746799 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a482e8816071c9e7694e38f21e32d4856578ffd8df77fed9c36199f5adde8d1b\": container with ID starting with a482e8816071c9e7694e38f21e32d4856578ffd8df77fed9c36199f5adde8d1b not found: ID does not exist" containerID="a482e8816071c9e7694e38f21e32d4856578ffd8df77fed9c36199f5adde8d1b" Feb 28 13:46:53 crc kubenswrapper[4897]: I0228 13:46:53.746922 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a482e8816071c9e7694e38f21e32d4856578ffd8df77fed9c36199f5adde8d1b"} err="failed to get container status \"a482e8816071c9e7694e38f21e32d4856578ffd8df77fed9c36199f5adde8d1b\": rpc error: code = NotFound desc = could not find container \"a482e8816071c9e7694e38f21e32d4856578ffd8df77fed9c36199f5adde8d1b\": container with ID starting with a482e8816071c9e7694e38f21e32d4856578ffd8df77fed9c36199f5adde8d1b not found: ID does not exist" Feb 28 13:46:53 crc kubenswrapper[4897]: I0228 13:46:53.766512 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 28 13:46:53 crc kubenswrapper[4897]: I0228 13:46:53.777029 4897 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 28 13:46:53 crc kubenswrapper[4897]: I0228 13:46:53.798283 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 28 13:46:53 crc kubenswrapper[4897]: E0228 13:46:53.798817 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="631372ff-9a4e-4110-9ff4-aad528049a06" containerName="kube-state-metrics" Feb 28 13:46:53 crc kubenswrapper[4897]: I0228 13:46:53.798844 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="631372ff-9a4e-4110-9ff4-aad528049a06" containerName="kube-state-metrics" Feb 28 13:46:53 crc kubenswrapper[4897]: E0228 13:46:53.798866 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64df0e32-0a86-4721-bf82-f6629e2268d8" containerName="oc" Feb 28 13:46:53 crc kubenswrapper[4897]: I0228 13:46:53.798879 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="64df0e32-0a86-4721-bf82-f6629e2268d8" containerName="oc" Feb 28 13:46:53 crc kubenswrapper[4897]: I0228 13:46:53.799130 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="631372ff-9a4e-4110-9ff4-aad528049a06" containerName="kube-state-metrics" Feb 28 13:46:53 crc kubenswrapper[4897]: I0228 13:46:53.799169 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="64df0e32-0a86-4721-bf82-f6629e2268d8" containerName="oc" Feb 28 13:46:53 crc kubenswrapper[4897]: I0228 13:46:53.800003 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 28 13:46:53 crc kubenswrapper[4897]: I0228 13:46:53.803466 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 28 13:46:53 crc kubenswrapper[4897]: I0228 13:46:53.804079 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 28 13:46:53 crc kubenswrapper[4897]: I0228 13:46:53.814345 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 28 13:46:53 crc kubenswrapper[4897]: I0228 13:46:53.980355 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9c5123c-5d3c-47f6-b0d5-20e731e7ebaf-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"d9c5123c-5d3c-47f6-b0d5-20e731e7ebaf\") " pod="openstack/kube-state-metrics-0" Feb 28 13:46:53 crc kubenswrapper[4897]: I0228 13:46:53.980406 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/d9c5123c-5d3c-47f6-b0d5-20e731e7ebaf-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"d9c5123c-5d3c-47f6-b0d5-20e731e7ebaf\") " pod="openstack/kube-state-metrics-0" Feb 28 13:46:53 crc kubenswrapper[4897]: I0228 13:46:53.980428 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqmhv\" (UniqueName: \"kubernetes.io/projected/d9c5123c-5d3c-47f6-b0d5-20e731e7ebaf-kube-api-access-bqmhv\") pod \"kube-state-metrics-0\" (UID: \"d9c5123c-5d3c-47f6-b0d5-20e731e7ebaf\") " pod="openstack/kube-state-metrics-0" Feb 28 13:46:53 crc kubenswrapper[4897]: I0228 13:46:53.980447 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/d9c5123c-5d3c-47f6-b0d5-20e731e7ebaf-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"d9c5123c-5d3c-47f6-b0d5-20e731e7ebaf\") " pod="openstack/kube-state-metrics-0" Feb 28 13:46:54 crc kubenswrapper[4897]: I0228 13:46:54.082886 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9c5123c-5d3c-47f6-b0d5-20e731e7ebaf-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"d9c5123c-5d3c-47f6-b0d5-20e731e7ebaf\") " pod="openstack/kube-state-metrics-0" Feb 28 13:46:54 crc kubenswrapper[4897]: I0228 13:46:54.082993 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/d9c5123c-5d3c-47f6-b0d5-20e731e7ebaf-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"d9c5123c-5d3c-47f6-b0d5-20e731e7ebaf\") " pod="openstack/kube-state-metrics-0" Feb 28 13:46:54 crc kubenswrapper[4897]: I0228 13:46:54.083031 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqmhv\" (UniqueName: \"kubernetes.io/projected/d9c5123c-5d3c-47f6-b0d5-20e731e7ebaf-kube-api-access-bqmhv\") pod \"kube-state-metrics-0\" (UID: \"d9c5123c-5d3c-47f6-b0d5-20e731e7ebaf\") " pod="openstack/kube-state-metrics-0" Feb 28 13:46:54 crc kubenswrapper[4897]: I0228 13:46:54.083072 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9c5123c-5d3c-47f6-b0d5-20e731e7ebaf-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"d9c5123c-5d3c-47f6-b0d5-20e731e7ebaf\") " pod="openstack/kube-state-metrics-0" Feb 28 13:46:54 crc kubenswrapper[4897]: I0228 13:46:54.090638 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d9c5123c-5d3c-47f6-b0d5-20e731e7ebaf-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"d9c5123c-5d3c-47f6-b0d5-20e731e7ebaf\") " pod="openstack/kube-state-metrics-0" Feb 28 13:46:54 crc kubenswrapper[4897]: I0228 13:46:54.090788 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9c5123c-5d3c-47f6-b0d5-20e731e7ebaf-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"d9c5123c-5d3c-47f6-b0d5-20e731e7ebaf\") " pod="openstack/kube-state-metrics-0" Feb 28 13:46:54 crc kubenswrapper[4897]: I0228 13:46:54.093787 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/d9c5123c-5d3c-47f6-b0d5-20e731e7ebaf-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"d9c5123c-5d3c-47f6-b0d5-20e731e7ebaf\") " pod="openstack/kube-state-metrics-0" Feb 28 13:46:54 crc kubenswrapper[4897]: I0228 13:46:54.104897 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqmhv\" (UniqueName: \"kubernetes.io/projected/d9c5123c-5d3c-47f6-b0d5-20e731e7ebaf-kube-api-access-bqmhv\") pod \"kube-state-metrics-0\" (UID: \"d9c5123c-5d3c-47f6-b0d5-20e731e7ebaf\") " pod="openstack/kube-state-metrics-0" Feb 28 13:46:54 crc kubenswrapper[4897]: I0228 13:46:54.136502 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 28 13:46:54 crc kubenswrapper[4897]: I0228 13:46:54.478937 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="631372ff-9a4e-4110-9ff4-aad528049a06" path="/var/lib/kubelet/pods/631372ff-9a4e-4110-9ff4-aad528049a06/volumes" Feb 28 13:46:54 crc kubenswrapper[4897]: I0228 13:46:54.632941 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 28 13:46:54 crc kubenswrapper[4897]: W0228 13:46:54.638780 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9c5123c_5d3c_47f6_b0d5_20e731e7ebaf.slice/crio-e1248b6224f711a66d494f0d67c32442c2bc3fff10ea1a9b3c5d731ea7cab332 WatchSource:0}: Error finding container e1248b6224f711a66d494f0d67c32442c2bc3fff10ea1a9b3c5d731ea7cab332: Status 404 returned error can't find the container with id e1248b6224f711a66d494f0d67c32442c2bc3fff10ea1a9b3c5d731ea7cab332 Feb 28 13:46:54 crc kubenswrapper[4897]: I0228 13:46:54.724962 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d9c5123c-5d3c-47f6-b0d5-20e731e7ebaf","Type":"ContainerStarted","Data":"e1248b6224f711a66d494f0d67c32442c2bc3fff10ea1a9b3c5d731ea7cab332"} Feb 28 13:46:54 crc kubenswrapper[4897]: I0228 13:46:54.743721 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 13:46:54 crc kubenswrapper[4897]: I0228 13:46:54.744040 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" containerName="ceilometer-central-agent" containerID="cri-o://5aa7fdfaf878368037da6b5738e044a1a5f0f16620740977740f6d3e7aebc823" gracePeriod=30 Feb 28 13:46:54 crc kubenswrapper[4897]: I0228 13:46:54.744094 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" containerName="sg-core" containerID="cri-o://2aed513f7ea441e8f48b2416f9d0e69f1110a2d3599ac6e592e7fe096c58d3f0" gracePeriod=30 Feb 28 13:46:54 crc kubenswrapper[4897]: I0228 13:46:54.744096 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" containerName="proxy-httpd" containerID="cri-o://cfb9af0c2ab9b3b28d9fc62d358d84fb31a7b1134a89abc6e50e5d6c45fb3aec" gracePeriod=30 Feb 28 13:46:54 crc kubenswrapper[4897]: I0228 13:46:54.744188 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" containerName="ceilometer-notification-agent" containerID="cri-o://c5736d48a4d753ac43a8b6285f73debaa020c583f9298da242515985f99b97e5" gracePeriod=30 Feb 28 13:46:55 crc kubenswrapper[4897]: I0228 13:46:55.736169 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d9c5123c-5d3c-47f6-b0d5-20e731e7ebaf","Type":"ContainerStarted","Data":"45951f3b5be0cee72c54e3c2fffbe065355291c6004e2abed504f65a1dde04b6"} Feb 28 13:46:55 crc kubenswrapper[4897]: I0228 13:46:55.736622 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 28 13:46:55 crc kubenswrapper[4897]: I0228 13:46:55.739803 4897 generic.go:334] "Generic (PLEG): container finished" podID="746916d9-ca42-480b-9aa7-7e1fe9803900" containerID="cfb9af0c2ab9b3b28d9fc62d358d84fb31a7b1134a89abc6e50e5d6c45fb3aec" exitCode=0 Feb 28 13:46:55 crc kubenswrapper[4897]: I0228 13:46:55.739862 4897 generic.go:334] "Generic (PLEG): container finished" podID="746916d9-ca42-480b-9aa7-7e1fe9803900" containerID="2aed513f7ea441e8f48b2416f9d0e69f1110a2d3599ac6e592e7fe096c58d3f0" exitCode=2 Feb 28 13:46:55 crc kubenswrapper[4897]: I0228 13:46:55.739878 4897 generic.go:334] "Generic (PLEG): container finished" 
podID="746916d9-ca42-480b-9aa7-7e1fe9803900" containerID="5aa7fdfaf878368037da6b5738e044a1a5f0f16620740977740f6d3e7aebc823" exitCode=0 Feb 28 13:46:55 crc kubenswrapper[4897]: I0228 13:46:55.739869 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"746916d9-ca42-480b-9aa7-7e1fe9803900","Type":"ContainerDied","Data":"cfb9af0c2ab9b3b28d9fc62d358d84fb31a7b1134a89abc6e50e5d6c45fb3aec"} Feb 28 13:46:55 crc kubenswrapper[4897]: I0228 13:46:55.739943 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"746916d9-ca42-480b-9aa7-7e1fe9803900","Type":"ContainerDied","Data":"2aed513f7ea441e8f48b2416f9d0e69f1110a2d3599ac6e592e7fe096c58d3f0"} Feb 28 13:46:55 crc kubenswrapper[4897]: I0228 13:46:55.739965 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"746916d9-ca42-480b-9aa7-7e1fe9803900","Type":"ContainerDied","Data":"5aa7fdfaf878368037da6b5738e044a1a5f0f16620740977740f6d3e7aebc823"} Feb 28 13:46:55 crc kubenswrapper[4897]: I0228 13:46:55.754686 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.408904147 podStartE2EDuration="2.754666912s" podCreationTimestamp="2026-02-28 13:46:53 +0000 UTC" firstStartedPulling="2026-02-28 13:46:54.641017358 +0000 UTC m=+1828.883338015" lastFinishedPulling="2026-02-28 13:46:54.986780123 +0000 UTC m=+1829.229100780" observedRunningTime="2026-02-28 13:46:55.750410669 +0000 UTC m=+1829.992731366" watchObservedRunningTime="2026-02-28 13:46:55.754666912 +0000 UTC m=+1829.996987609" Feb 28 13:47:00 crc kubenswrapper[4897]: I0228 13:47:00.730055 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 28 13:47:00 crc kubenswrapper[4897]: I0228 13:47:00.789114 4897 generic.go:334] "Generic (PLEG): container finished" podID="14af1d5e-f67c-4675-afb4-4aff4b78237c" 
containerID="ef8a9a61cecbca9f930f9fc3ee563b01e1e4d9d8842be13cb25a7e482841aacd" exitCode=0 Feb 28 13:47:00 crc kubenswrapper[4897]: I0228 13:47:00.789192 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjhzq" event={"ID":"14af1d5e-f67c-4675-afb4-4aff4b78237c","Type":"ContainerDied","Data":"ef8a9a61cecbca9f930f9fc3ee563b01e1e4d9d8842be13cb25a7e482841aacd"} Feb 28 13:47:01 crc kubenswrapper[4897]: I0228 13:47:01.800191 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjhzq" event={"ID":"14af1d5e-f67c-4675-afb4-4aff4b78237c","Type":"ContainerStarted","Data":"253dff6bb6e9f26ae6c1b73c156284687251ddb07a4d0c0cb76f12da83cc921f"} Feb 28 13:47:01 crc kubenswrapper[4897]: I0228 13:47:01.826574 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bjhzq" podStartSLOduration=2.103555717 podStartE2EDuration="5m33.826556357s" podCreationTimestamp="2026-02-28 13:41:28 +0000 UTC" firstStartedPulling="2026-02-28 13:41:29.425147973 +0000 UTC m=+1503.667468630" lastFinishedPulling="2026-02-28 13:47:01.148148603 +0000 UTC m=+1835.390469270" observedRunningTime="2026-02-28 13:47:01.820157043 +0000 UTC m=+1836.062477700" watchObservedRunningTime="2026-02-28 13:47:01.826556357 +0000 UTC m=+1836.068877014" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.449864 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 13:47:02 crc kubenswrapper[4897]: E0228 13:47:02.462973 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.488366 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/746916d9-ca42-480b-9aa7-7e1fe9803900-run-httpd\") pod \"746916d9-ca42-480b-9aa7-7e1fe9803900\" (UID: \"746916d9-ca42-480b-9aa7-7e1fe9803900\") " Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.488427 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/746916d9-ca42-480b-9aa7-7e1fe9803900-log-httpd\") pod \"746916d9-ca42-480b-9aa7-7e1fe9803900\" (UID: \"746916d9-ca42-480b-9aa7-7e1fe9803900\") " Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.488452 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/746916d9-ca42-480b-9aa7-7e1fe9803900-scripts\") pod \"746916d9-ca42-480b-9aa7-7e1fe9803900\" (UID: \"746916d9-ca42-480b-9aa7-7e1fe9803900\") " Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.488473 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/746916d9-ca42-480b-9aa7-7e1fe9803900-sg-core-conf-yaml\") pod \"746916d9-ca42-480b-9aa7-7e1fe9803900\" (UID: \"746916d9-ca42-480b-9aa7-7e1fe9803900\") " Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.488564 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-7hjth\" (UniqueName: \"kubernetes.io/projected/746916d9-ca42-480b-9aa7-7e1fe9803900-kube-api-access-7hjth\") pod \"746916d9-ca42-480b-9aa7-7e1fe9803900\" (UID: \"746916d9-ca42-480b-9aa7-7e1fe9803900\") " Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.488590 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/746916d9-ca42-480b-9aa7-7e1fe9803900-combined-ca-bundle\") pod \"746916d9-ca42-480b-9aa7-7e1fe9803900\" (UID: \"746916d9-ca42-480b-9aa7-7e1fe9803900\") " Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.489525 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/746916d9-ca42-480b-9aa7-7e1fe9803900-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "746916d9-ca42-480b-9aa7-7e1fe9803900" (UID: "746916d9-ca42-480b-9aa7-7e1fe9803900"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.491680 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/746916d9-ca42-480b-9aa7-7e1fe9803900-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "746916d9-ca42-480b-9aa7-7e1fe9803900" (UID: "746916d9-ca42-480b-9aa7-7e1fe9803900"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.506981 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/746916d9-ca42-480b-9aa7-7e1fe9803900-scripts" (OuterVolumeSpecName: "scripts") pod "746916d9-ca42-480b-9aa7-7e1fe9803900" (UID: "746916d9-ca42-480b-9aa7-7e1fe9803900"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.507467 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/746916d9-ca42-480b-9aa7-7e1fe9803900-kube-api-access-7hjth" (OuterVolumeSpecName: "kube-api-access-7hjth") pod "746916d9-ca42-480b-9aa7-7e1fe9803900" (UID: "746916d9-ca42-480b-9aa7-7e1fe9803900"). InnerVolumeSpecName "kube-api-access-7hjth". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.587864 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.589798 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/746916d9-ca42-480b-9aa7-7e1fe9803900-config-data\") pod \"746916d9-ca42-480b-9aa7-7e1fe9803900\" (UID: \"746916d9-ca42-480b-9aa7-7e1fe9803900\") " Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.601872 4897 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/746916d9-ca42-480b-9aa7-7e1fe9803900-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.601898 4897 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/746916d9-ca42-480b-9aa7-7e1fe9803900-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.601907 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/746916d9-ca42-480b-9aa7-7e1fe9803900-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.601916 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hjth\" (UniqueName: 
\"kubernetes.io/projected/746916d9-ca42-480b-9aa7-7e1fe9803900-kube-api-access-7hjth\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.683547 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/746916d9-ca42-480b-9aa7-7e1fe9803900-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "746916d9-ca42-480b-9aa7-7e1fe9803900" (UID: "746916d9-ca42-480b-9aa7-7e1fe9803900"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.691237 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/746916d9-ca42-480b-9aa7-7e1fe9803900-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "746916d9-ca42-480b-9aa7-7e1fe9803900" (UID: "746916d9-ca42-480b-9aa7-7e1fe9803900"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.703187 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/746916d9-ca42-480b-9aa7-7e1fe9803900-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.703215 4897 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/746916d9-ca42-480b-9aa7-7e1fe9803900-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.793913 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/746916d9-ca42-480b-9aa7-7e1fe9803900-config-data" (OuterVolumeSpecName: "config-data") pod "746916d9-ca42-480b-9aa7-7e1fe9803900" (UID: "746916d9-ca42-480b-9aa7-7e1fe9803900"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.805014 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/746916d9-ca42-480b-9aa7-7e1fe9803900-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.810901 4897 generic.go:334] "Generic (PLEG): container finished" podID="746916d9-ca42-480b-9aa7-7e1fe9803900" containerID="c5736d48a4d753ac43a8b6285f73debaa020c583f9298da242515985f99b97e5" exitCode=0 Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.810937 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"746916d9-ca42-480b-9aa7-7e1fe9803900","Type":"ContainerDied","Data":"c5736d48a4d753ac43a8b6285f73debaa020c583f9298da242515985f99b97e5"} Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.810972 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"746916d9-ca42-480b-9aa7-7e1fe9803900","Type":"ContainerDied","Data":"08843746f6de77de835512e0a0ff0caea3fa279a1e86745f2f2d71c39346fc01"} Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.810988 4897 scope.go:117] "RemoveContainer" containerID="cfb9af0c2ab9b3b28d9fc62d358d84fb31a7b1134a89abc6e50e5d6c45fb3aec" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.811120 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.837287 4897 scope.go:117] "RemoveContainer" containerID="2aed513f7ea441e8f48b2416f9d0e69f1110a2d3599ac6e592e7fe096c58d3f0" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.848082 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.854983 4897 scope.go:117] "RemoveContainer" containerID="c5736d48a4d753ac43a8b6285f73debaa020c583f9298da242515985f99b97e5" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.859752 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.883516 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 28 13:47:02 crc kubenswrapper[4897]: E0228 13:47:02.883961 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" containerName="proxy-httpd" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.883984 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" containerName="proxy-httpd" Feb 28 13:47:02 crc kubenswrapper[4897]: E0228 13:47:02.884009 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" containerName="ceilometer-central-agent" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.884018 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" containerName="ceilometer-central-agent" Feb 28 13:47:02 crc kubenswrapper[4897]: E0228 13:47:02.884040 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" containerName="sg-core" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.884049 4897 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" containerName="sg-core" Feb 28 13:47:02 crc kubenswrapper[4897]: E0228 13:47:02.884078 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" containerName="ceilometer-notification-agent" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.884085 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" containerName="ceilometer-notification-agent" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.884336 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" containerName="ceilometer-notification-agent" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.884362 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" containerName="proxy-httpd" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.884376 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" containerName="ceilometer-central-agent" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.884395 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" containerName="sg-core" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.886196 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.889705 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.889875 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.890017 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.904409 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.906263 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49ad0c65-4304-477c-8cfa-c344fcf2ab9b-scripts\") pod \"ceilometer-0\" (UID: \"49ad0c65-4304-477c-8cfa-c344fcf2ab9b\") " pod="openstack/ceilometer-0" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.906301 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49ad0c65-4304-477c-8cfa-c344fcf2ab9b-log-httpd\") pod \"ceilometer-0\" (UID: \"49ad0c65-4304-477c-8cfa-c344fcf2ab9b\") " pod="openstack/ceilometer-0" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.906340 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49ad0c65-4304-477c-8cfa-c344fcf2ab9b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"49ad0c65-4304-477c-8cfa-c344fcf2ab9b\") " pod="openstack/ceilometer-0" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.906446 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/49ad0c65-4304-477c-8cfa-c344fcf2ab9b-config-data\") pod \"ceilometer-0\" (UID: \"49ad0c65-4304-477c-8cfa-c344fcf2ab9b\") " pod="openstack/ceilometer-0" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.906470 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nmmv\" (UniqueName: \"kubernetes.io/projected/49ad0c65-4304-477c-8cfa-c344fcf2ab9b-kube-api-access-6nmmv\") pod \"ceilometer-0\" (UID: \"49ad0c65-4304-477c-8cfa-c344fcf2ab9b\") " pod="openstack/ceilometer-0" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.906501 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49ad0c65-4304-477c-8cfa-c344fcf2ab9b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"49ad0c65-4304-477c-8cfa-c344fcf2ab9b\") " pod="openstack/ceilometer-0" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.906534 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49ad0c65-4304-477c-8cfa-c344fcf2ab9b-run-httpd\") pod \"ceilometer-0\" (UID: \"49ad0c65-4304-477c-8cfa-c344fcf2ab9b\") " pod="openstack/ceilometer-0" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.906562 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/49ad0c65-4304-477c-8cfa-c344fcf2ab9b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"49ad0c65-4304-477c-8cfa-c344fcf2ab9b\") " pod="openstack/ceilometer-0" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.919001 4897 scope.go:117] "RemoveContainer" containerID="5aa7fdfaf878368037da6b5738e044a1a5f0f16620740977740f6d3e7aebc823" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.951286 4897 scope.go:117] "RemoveContainer" 
containerID="cfb9af0c2ab9b3b28d9fc62d358d84fb31a7b1134a89abc6e50e5d6c45fb3aec" Feb 28 13:47:02 crc kubenswrapper[4897]: E0228 13:47:02.951749 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfb9af0c2ab9b3b28d9fc62d358d84fb31a7b1134a89abc6e50e5d6c45fb3aec\": container with ID starting with cfb9af0c2ab9b3b28d9fc62d358d84fb31a7b1134a89abc6e50e5d6c45fb3aec not found: ID does not exist" containerID="cfb9af0c2ab9b3b28d9fc62d358d84fb31a7b1134a89abc6e50e5d6c45fb3aec" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.951779 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfb9af0c2ab9b3b28d9fc62d358d84fb31a7b1134a89abc6e50e5d6c45fb3aec"} err="failed to get container status \"cfb9af0c2ab9b3b28d9fc62d358d84fb31a7b1134a89abc6e50e5d6c45fb3aec\": rpc error: code = NotFound desc = could not find container \"cfb9af0c2ab9b3b28d9fc62d358d84fb31a7b1134a89abc6e50e5d6c45fb3aec\": container with ID starting with cfb9af0c2ab9b3b28d9fc62d358d84fb31a7b1134a89abc6e50e5d6c45fb3aec not found: ID does not exist" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.951799 4897 scope.go:117] "RemoveContainer" containerID="2aed513f7ea441e8f48b2416f9d0e69f1110a2d3599ac6e592e7fe096c58d3f0" Feb 28 13:47:02 crc kubenswrapper[4897]: E0228 13:47:02.952159 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2aed513f7ea441e8f48b2416f9d0e69f1110a2d3599ac6e592e7fe096c58d3f0\": container with ID starting with 2aed513f7ea441e8f48b2416f9d0e69f1110a2d3599ac6e592e7fe096c58d3f0 not found: ID does not exist" containerID="2aed513f7ea441e8f48b2416f9d0e69f1110a2d3599ac6e592e7fe096c58d3f0" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.952221 4897 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2aed513f7ea441e8f48b2416f9d0e69f1110a2d3599ac6e592e7fe096c58d3f0"} err="failed to get container status \"2aed513f7ea441e8f48b2416f9d0e69f1110a2d3599ac6e592e7fe096c58d3f0\": rpc error: code = NotFound desc = could not find container \"2aed513f7ea441e8f48b2416f9d0e69f1110a2d3599ac6e592e7fe096c58d3f0\": container with ID starting with 2aed513f7ea441e8f48b2416f9d0e69f1110a2d3599ac6e592e7fe096c58d3f0 not found: ID does not exist" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.952253 4897 scope.go:117] "RemoveContainer" containerID="c5736d48a4d753ac43a8b6285f73debaa020c583f9298da242515985f99b97e5" Feb 28 13:47:02 crc kubenswrapper[4897]: E0228 13:47:02.953672 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5736d48a4d753ac43a8b6285f73debaa020c583f9298da242515985f99b97e5\": container with ID starting with c5736d48a4d753ac43a8b6285f73debaa020c583f9298da242515985f99b97e5 not found: ID does not exist" containerID="c5736d48a4d753ac43a8b6285f73debaa020c583f9298da242515985f99b97e5" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.953713 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5736d48a4d753ac43a8b6285f73debaa020c583f9298da242515985f99b97e5"} err="failed to get container status \"c5736d48a4d753ac43a8b6285f73debaa020c583f9298da242515985f99b97e5\": rpc error: code = NotFound desc = could not find container \"c5736d48a4d753ac43a8b6285f73debaa020c583f9298da242515985f99b97e5\": container with ID starting with c5736d48a4d753ac43a8b6285f73debaa020c583f9298da242515985f99b97e5 not found: ID does not exist" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.953730 4897 scope.go:117] "RemoveContainer" containerID="5aa7fdfaf878368037da6b5738e044a1a5f0f16620740977740f6d3e7aebc823" Feb 28 13:47:02 crc kubenswrapper[4897]: E0228 13:47:02.954299 4897 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5aa7fdfaf878368037da6b5738e044a1a5f0f16620740977740f6d3e7aebc823\": container with ID starting with 5aa7fdfaf878368037da6b5738e044a1a5f0f16620740977740f6d3e7aebc823 not found: ID does not exist" containerID="5aa7fdfaf878368037da6b5738e044a1a5f0f16620740977740f6d3e7aebc823" Feb 28 13:47:02 crc kubenswrapper[4897]: I0228 13:47:02.954345 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5aa7fdfaf878368037da6b5738e044a1a5f0f16620740977740f6d3e7aebc823"} err="failed to get container status \"5aa7fdfaf878368037da6b5738e044a1a5f0f16620740977740f6d3e7aebc823\": rpc error: code = NotFound desc = could not find container \"5aa7fdfaf878368037da6b5738e044a1a5f0f16620740977740f6d3e7aebc823\": container with ID starting with 5aa7fdfaf878368037da6b5738e044a1a5f0f16620740977740f6d3e7aebc823 not found: ID does not exist" Feb 28 13:47:03 crc kubenswrapper[4897]: I0228 13:47:03.008097 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/49ad0c65-4304-477c-8cfa-c344fcf2ab9b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"49ad0c65-4304-477c-8cfa-c344fcf2ab9b\") " pod="openstack/ceilometer-0" Feb 28 13:47:03 crc kubenswrapper[4897]: I0228 13:47:03.008171 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49ad0c65-4304-477c-8cfa-c344fcf2ab9b-scripts\") pod \"ceilometer-0\" (UID: \"49ad0c65-4304-477c-8cfa-c344fcf2ab9b\") " pod="openstack/ceilometer-0" Feb 28 13:47:03 crc kubenswrapper[4897]: I0228 13:47:03.008198 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49ad0c65-4304-477c-8cfa-c344fcf2ab9b-log-httpd\") pod \"ceilometer-0\" (UID: \"49ad0c65-4304-477c-8cfa-c344fcf2ab9b\") " pod="openstack/ceilometer-0" Feb 
28 13:47:03 crc kubenswrapper[4897]: I0228 13:47:03.008219 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49ad0c65-4304-477c-8cfa-c344fcf2ab9b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"49ad0c65-4304-477c-8cfa-c344fcf2ab9b\") " pod="openstack/ceilometer-0" Feb 28 13:47:03 crc kubenswrapper[4897]: I0228 13:47:03.008301 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49ad0c65-4304-477c-8cfa-c344fcf2ab9b-config-data\") pod \"ceilometer-0\" (UID: \"49ad0c65-4304-477c-8cfa-c344fcf2ab9b\") " pod="openstack/ceilometer-0" Feb 28 13:47:03 crc kubenswrapper[4897]: I0228 13:47:03.008341 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nmmv\" (UniqueName: \"kubernetes.io/projected/49ad0c65-4304-477c-8cfa-c344fcf2ab9b-kube-api-access-6nmmv\") pod \"ceilometer-0\" (UID: \"49ad0c65-4304-477c-8cfa-c344fcf2ab9b\") " pod="openstack/ceilometer-0" Feb 28 13:47:03 crc kubenswrapper[4897]: I0228 13:47:03.008378 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49ad0c65-4304-477c-8cfa-c344fcf2ab9b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"49ad0c65-4304-477c-8cfa-c344fcf2ab9b\") " pod="openstack/ceilometer-0" Feb 28 13:47:03 crc kubenswrapper[4897]: I0228 13:47:03.008412 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49ad0c65-4304-477c-8cfa-c344fcf2ab9b-run-httpd\") pod \"ceilometer-0\" (UID: \"49ad0c65-4304-477c-8cfa-c344fcf2ab9b\") " pod="openstack/ceilometer-0" Feb 28 13:47:03 crc kubenswrapper[4897]: I0228 13:47:03.008865 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/49ad0c65-4304-477c-8cfa-c344fcf2ab9b-run-httpd\") pod \"ceilometer-0\" (UID: \"49ad0c65-4304-477c-8cfa-c344fcf2ab9b\") " pod="openstack/ceilometer-0" Feb 28 13:47:03 crc kubenswrapper[4897]: I0228 13:47:03.009249 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49ad0c65-4304-477c-8cfa-c344fcf2ab9b-log-httpd\") pod \"ceilometer-0\" (UID: \"49ad0c65-4304-477c-8cfa-c344fcf2ab9b\") " pod="openstack/ceilometer-0" Feb 28 13:47:03 crc kubenswrapper[4897]: I0228 13:47:03.012781 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49ad0c65-4304-477c-8cfa-c344fcf2ab9b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"49ad0c65-4304-477c-8cfa-c344fcf2ab9b\") " pod="openstack/ceilometer-0" Feb 28 13:47:03 crc kubenswrapper[4897]: I0228 13:47:03.013064 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/49ad0c65-4304-477c-8cfa-c344fcf2ab9b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"49ad0c65-4304-477c-8cfa-c344fcf2ab9b\") " pod="openstack/ceilometer-0" Feb 28 13:47:03 crc kubenswrapper[4897]: I0228 13:47:03.013658 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49ad0c65-4304-477c-8cfa-c344fcf2ab9b-config-data\") pod \"ceilometer-0\" (UID: \"49ad0c65-4304-477c-8cfa-c344fcf2ab9b\") " pod="openstack/ceilometer-0" Feb 28 13:47:03 crc kubenswrapper[4897]: I0228 13:47:03.014465 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49ad0c65-4304-477c-8cfa-c344fcf2ab9b-scripts\") pod \"ceilometer-0\" (UID: \"49ad0c65-4304-477c-8cfa-c344fcf2ab9b\") " pod="openstack/ceilometer-0" Feb 28 13:47:03 crc kubenswrapper[4897]: I0228 13:47:03.016969 4897 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49ad0c65-4304-477c-8cfa-c344fcf2ab9b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"49ad0c65-4304-477c-8cfa-c344fcf2ab9b\") " pod="openstack/ceilometer-0" Feb 28 13:47:03 crc kubenswrapper[4897]: I0228 13:47:03.035815 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nmmv\" (UniqueName: \"kubernetes.io/projected/49ad0c65-4304-477c-8cfa-c344fcf2ab9b-kube-api-access-6nmmv\") pod \"ceilometer-0\" (UID: \"49ad0c65-4304-477c-8cfa-c344fcf2ab9b\") " pod="openstack/ceilometer-0" Feb 28 13:47:03 crc kubenswrapper[4897]: I0228 13:47:03.208454 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 13:47:03 crc kubenswrapper[4897]: I0228 13:47:03.719367 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 13:47:03 crc kubenswrapper[4897]: W0228 13:47:03.728908 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49ad0c65_4304_477c_8cfa_c344fcf2ab9b.slice/crio-7be5160748f3bac69b4ed57dab7335b8df16090c4057ad998b49ecb0fc1b4cd3 WatchSource:0}: Error finding container 7be5160748f3bac69b4ed57dab7335b8df16090c4057ad998b49ecb0fc1b4cd3: Status 404 returned error can't find the container with id 7be5160748f3bac69b4ed57dab7335b8df16090c4057ad998b49ecb0fc1b4cd3 Feb 28 13:47:03 crc kubenswrapper[4897]: I0228 13:47:03.824052 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49ad0c65-4304-477c-8cfa-c344fcf2ab9b","Type":"ContainerStarted","Data":"7be5160748f3bac69b4ed57dab7335b8df16090c4057ad998b49ecb0fc1b4cd3"} Feb 28 13:47:04 crc kubenswrapper[4897]: I0228 13:47:04.147301 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 28 13:47:04 crc kubenswrapper[4897]: I0228 
13:47:04.466663 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="746916d9-ca42-480b-9aa7-7e1fe9803900" path="/var/lib/kubelet/pods/746916d9-ca42-480b-9aa7-7e1fe9803900/volumes" Feb 28 13:47:04 crc kubenswrapper[4897]: I0228 13:47:04.832742 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49ad0c65-4304-477c-8cfa-c344fcf2ab9b","Type":"ContainerStarted","Data":"7ee1358b3de6558763a9b50d7c03a89814c60d6e048caffc9225e16b271bc6e3"} Feb 28 13:47:04 crc kubenswrapper[4897]: I0228 13:47:04.832782 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49ad0c65-4304-477c-8cfa-c344fcf2ab9b","Type":"ContainerStarted","Data":"e02e3b910adc4d04875defbe4da8202f988ca98fb8246f432f7ffde2e24c0898"} Feb 28 13:47:04 crc kubenswrapper[4897]: I0228 13:47:04.934136 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="5a792d6c-3a28-4775-87bf-b099ea550a00" containerName="rabbitmq" containerID="cri-o://7088677dc187a4128ab508e2ee1e1b9ad4c18d0a82798cb7cfcb8392f0127126" gracePeriod=604796 Feb 28 13:47:05 crc kubenswrapper[4897]: I0228 13:47:05.851451 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49ad0c65-4304-477c-8cfa-c344fcf2ab9b","Type":"ContainerStarted","Data":"90e45c91c5448262af5a8f6ecd6a3f2ef95c1b8828208e2b20215fbca93fc556"} Feb 28 13:47:06 crc kubenswrapper[4897]: I0228 13:47:06.078251 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-zqjtm"] Feb 28 13:47:06 crc kubenswrapper[4897]: I0228 13:47:06.094525 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-7ndv6"] Feb 28 13:47:06 crc kubenswrapper[4897]: I0228 13:47:06.109369 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bfa7-account-create-update-8mjtn"] Feb 28 13:47:06 crc kubenswrapper[4897]: I0228 
13:47:06.116412 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-zqjtm"] Feb 28 13:47:06 crc kubenswrapper[4897]: I0228 13:47:06.131356 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-7ndv6"] Feb 28 13:47:06 crc kubenswrapper[4897]: I0228 13:47:06.143749 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7a2e-account-create-update-dsfp4"] Feb 28 13:47:06 crc kubenswrapper[4897]: I0228 13:47:06.154858 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bfa7-account-create-update-8mjtn"] Feb 28 13:47:06 crc kubenswrapper[4897]: I0228 13:47:06.165927 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7a2e-account-create-update-dsfp4"] Feb 28 13:47:06 crc kubenswrapper[4897]: I0228 13:47:06.389074 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="6bf46d42-2d7e-410d-8a74-1ce12bb280b2" containerName="rabbitmq" containerID="cri-o://5a48d58771b6ebaaefeaac2908a8631795907b9109de8da771b2087fa08dc7a5" gracePeriod=604797 Feb 28 13:47:06 crc kubenswrapper[4897]: I0228 13:47:06.467439 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ce242f-6ba6-48c7-9c41-e00c21dfb085" path="/var/lib/kubelet/pods/09ce242f-6ba6-48c7-9c41-e00c21dfb085/volumes" Feb 28 13:47:06 crc kubenswrapper[4897]: I0228 13:47:06.467475 4897 scope.go:117] "RemoveContainer" containerID="aac243b40f8cd522c66dee8379dbaaf158caf96690e017e4b821fa9af1817d1c" Feb 28 13:47:06 crc kubenswrapper[4897]: E0228 13:47:06.467677 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:47:06 crc kubenswrapper[4897]: I0228 13:47:06.468015 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dde6f07-ea2d-40ba-9a07-12fcc461a0ee" path="/var/lib/kubelet/pods/2dde6f07-ea2d-40ba-9a07-12fcc461a0ee/volumes" Feb 28 13:47:06 crc kubenswrapper[4897]: I0228 13:47:06.468628 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47a4b26d-e794-43fa-991d-55679de18394" path="/var/lib/kubelet/pods/47a4b26d-e794-43fa-991d-55679de18394/volumes" Feb 28 13:47:06 crc kubenswrapper[4897]: I0228 13:47:06.469188 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0e4700f-f6cc-44bb-93b6-a58f1e74b0df" path="/var/lib/kubelet/pods/a0e4700f-f6cc-44bb-93b6-a58f1e74b0df/volumes" Feb 28 13:47:06 crc kubenswrapper[4897]: E0228 13:47:06.656768 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)" image="registry.redhat.io/ubi9/httpd-24:latest" Feb 28 13:47:06 crc kubenswrapper[4897]: E0228 13:47:06.656920 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ceilometer-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/tls.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ceilometer-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/tls.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6nmmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(49ad0c65-4304-477c-8cfa-c344fcf2ab9b): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 13:47:06 crc kubenswrapper[4897]: E0228 13:47:06.658187 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)\"" pod="openstack/ceilometer-0" 
podUID="49ad0c65-4304-477c-8cfa-c344fcf2ab9b" Feb 28 13:47:06 crc kubenswrapper[4897]: I0228 13:47:06.829390 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="5a792d6c-3a28-4775-87bf-b099ea550a00" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.108:5671: connect: connection refused" Feb 28 13:47:06 crc kubenswrapper[4897]: I0228 13:47:06.863036 4897 generic.go:334] "Generic (PLEG): container finished" podID="5a792d6c-3a28-4775-87bf-b099ea550a00" containerID="7088677dc187a4128ab508e2ee1e1b9ad4c18d0a82798cb7cfcb8392f0127126" exitCode=0 Feb 28 13:47:06 crc kubenswrapper[4897]: I0228 13:47:06.863109 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5a792d6c-3a28-4775-87bf-b099ea550a00","Type":"ContainerDied","Data":"7088677dc187a4128ab508e2ee1e1b9ad4c18d0a82798cb7cfcb8392f0127126"} Feb 28 13:47:06 crc kubenswrapper[4897]: E0228 13:47:06.864999 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="49ad0c65-4304-477c-8cfa-c344fcf2ab9b" Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.030839 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-sdxh6"] Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.042485 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-90a5-account-create-update-6bzw4"] Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.050714 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-sdxh6"] Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.059579 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-90a5-account-create-update-6bzw4"] Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.150336 4897 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="6bf46d42-2d7e-410d-8a74-1ce12bb280b2" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.109:5671: connect: connection refused" Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.231344 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.290099 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5a792d6c-3a28-4775-87bf-b099ea550a00-rabbitmq-plugins\") pod \"5a792d6c-3a28-4775-87bf-b099ea550a00\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.290199 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9vbj\" (UniqueName: \"kubernetes.io/projected/5a792d6c-3a28-4775-87bf-b099ea550a00-kube-api-access-j9vbj\") pod \"5a792d6c-3a28-4775-87bf-b099ea550a00\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.290229 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5a792d6c-3a28-4775-87bf-b099ea550a00-server-conf\") pod \"5a792d6c-3a28-4775-87bf-b099ea550a00\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.290266 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5a792d6c-3a28-4775-87bf-b099ea550a00-pod-info\") pod \"5a792d6c-3a28-4775-87bf-b099ea550a00\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.290350 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5a792d6c-3a28-4775-87bf-b099ea550a00-plugins-conf\") pod \"5a792d6c-3a28-4775-87bf-b099ea550a00\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.290383 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"5a792d6c-3a28-4775-87bf-b099ea550a00\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.290485 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5a792d6c-3a28-4775-87bf-b099ea550a00-config-data\") pod \"5a792d6c-3a28-4775-87bf-b099ea550a00\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.290514 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5a792d6c-3a28-4775-87bf-b099ea550a00-rabbitmq-tls\") pod \"5a792d6c-3a28-4775-87bf-b099ea550a00\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.290545 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5a792d6c-3a28-4775-87bf-b099ea550a00-erlang-cookie-secret\") pod \"5a792d6c-3a28-4775-87bf-b099ea550a00\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.290586 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5a792d6c-3a28-4775-87bf-b099ea550a00-rabbitmq-confd\") pod \"5a792d6c-3a28-4775-87bf-b099ea550a00\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 
13:47:07.290602 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5a792d6c-3a28-4775-87bf-b099ea550a00-rabbitmq-erlang-cookie\") pod \"5a792d6c-3a28-4775-87bf-b099ea550a00\" (UID: \"5a792d6c-3a28-4775-87bf-b099ea550a00\") " Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.297723 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a792d6c-3a28-4775-87bf-b099ea550a00-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "5a792d6c-3a28-4775-87bf-b099ea550a00" (UID: "5a792d6c-3a28-4775-87bf-b099ea550a00"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.300498 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "5a792d6c-3a28-4775-87bf-b099ea550a00" (UID: "5a792d6c-3a28-4775-87bf-b099ea550a00"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.302847 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a792d6c-3a28-4775-87bf-b099ea550a00-kube-api-access-j9vbj" (OuterVolumeSpecName: "kube-api-access-j9vbj") pod "5a792d6c-3a28-4775-87bf-b099ea550a00" (UID: "5a792d6c-3a28-4775-87bf-b099ea550a00"). InnerVolumeSpecName "kube-api-access-j9vbj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.310761 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a792d6c-3a28-4775-87bf-b099ea550a00-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "5a792d6c-3a28-4775-87bf-b099ea550a00" (UID: "5a792d6c-3a28-4775-87bf-b099ea550a00"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.334248 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a792d6c-3a28-4775-87bf-b099ea550a00-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "5a792d6c-3a28-4775-87bf-b099ea550a00" (UID: "5a792d6c-3a28-4775-87bf-b099ea550a00"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.335072 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a792d6c-3a28-4775-87bf-b099ea550a00-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "5a792d6c-3a28-4775-87bf-b099ea550a00" (UID: "5a792d6c-3a28-4775-87bf-b099ea550a00"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.335490 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/5a792d6c-3a28-4775-87bf-b099ea550a00-pod-info" (OuterVolumeSpecName: "pod-info") pod "5a792d6c-3a28-4775-87bf-b099ea550a00" (UID: "5a792d6c-3a28-4775-87bf-b099ea550a00"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.336281 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a792d6c-3a28-4775-87bf-b099ea550a00-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "5a792d6c-3a28-4775-87bf-b099ea550a00" (UID: "5a792d6c-3a28-4775-87bf-b099ea550a00"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.336612 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a792d6c-3a28-4775-87bf-b099ea550a00-config-data" (OuterVolumeSpecName: "config-data") pod "5a792d6c-3a28-4775-87bf-b099ea550a00" (UID: "5a792d6c-3a28-4775-87bf-b099ea550a00"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.371809 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a792d6c-3a28-4775-87bf-b099ea550a00-server-conf" (OuterVolumeSpecName: "server-conf") pod "5a792d6c-3a28-4775-87bf-b099ea550a00" (UID: "5a792d6c-3a28-4775-87bf-b099ea550a00"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.394089 4897 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5a792d6c-3a28-4775-87bf-b099ea550a00-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.394129 4897 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.394138 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5a792d6c-3a28-4775-87bf-b099ea550a00-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.394147 4897 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5a792d6c-3a28-4775-87bf-b099ea550a00-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.394157 4897 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5a792d6c-3a28-4775-87bf-b099ea550a00-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.394166 4897 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5a792d6c-3a28-4775-87bf-b099ea550a00-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.394176 4897 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5a792d6c-3a28-4775-87bf-b099ea550a00-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.394186 4897 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9vbj\" (UniqueName: \"kubernetes.io/projected/5a792d6c-3a28-4775-87bf-b099ea550a00-kube-api-access-j9vbj\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.394194 4897 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5a792d6c-3a28-4775-87bf-b099ea550a00-server-conf\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.394201 4897 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5a792d6c-3a28-4775-87bf-b099ea550a00-pod-info\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.416078 4897 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.496321 4897 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.505933 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a792d6c-3a28-4775-87bf-b099ea550a00-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "5a792d6c-3a28-4775-87bf-b099ea550a00" (UID: "5a792d6c-3a28-4775-87bf-b099ea550a00"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.601611 4897 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5a792d6c-3a28-4775-87bf-b099ea550a00-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.922590 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5a792d6c-3a28-4775-87bf-b099ea550a00","Type":"ContainerDied","Data":"b67c44951723863ac485ae3266e9f28aa96512781926061443d37a2024246083"} Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.922628 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.922919 4897 scope.go:117] "RemoveContainer" containerID="7088677dc187a4128ab508e2ee1e1b9ad4c18d0a82798cb7cfcb8392f0127126" Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.933680 4897 generic.go:334] "Generic (PLEG): container finished" podID="6bf46d42-2d7e-410d-8a74-1ce12bb280b2" containerID="5a48d58771b6ebaaefeaac2908a8631795907b9109de8da771b2087fa08dc7a5" exitCode=0 Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.933733 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6bf46d42-2d7e-410d-8a74-1ce12bb280b2","Type":"ContainerDied","Data":"5a48d58771b6ebaaefeaac2908a8631795907b9109de8da771b2087fa08dc7a5"} Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.959505 4897 scope.go:117] "RemoveContainer" containerID="fe8050bd404884f66eddbd6adbe7f7bd94e5332f6f5879701dcd60a3e7709119" Feb 28 13:47:07 crc kubenswrapper[4897]: I0228 13:47:07.998596 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.027294 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.048410 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.063435 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 28 13:47:08 crc kubenswrapper[4897]: E0228 13:47:08.063917 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bf46d42-2d7e-410d-8a74-1ce12bb280b2" containerName="setup-container" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.063941 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bf46d42-2d7e-410d-8a74-1ce12bb280b2" containerName="setup-container" Feb 28 13:47:08 crc kubenswrapper[4897]: E0228 13:47:08.063973 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a792d6c-3a28-4775-87bf-b099ea550a00" containerName="rabbitmq" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.063981 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a792d6c-3a28-4775-87bf-b099ea550a00" containerName="rabbitmq" Feb 28 13:47:08 crc kubenswrapper[4897]: E0228 13:47:08.064001 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a792d6c-3a28-4775-87bf-b099ea550a00" containerName="setup-container" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.064009 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a792d6c-3a28-4775-87bf-b099ea550a00" containerName="setup-container" Feb 28 13:47:08 crc kubenswrapper[4897]: E0228 13:47:08.064027 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bf46d42-2d7e-410d-8a74-1ce12bb280b2" containerName="rabbitmq" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.064036 4897 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="6bf46d42-2d7e-410d-8a74-1ce12bb280b2" containerName="rabbitmq" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.064258 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bf46d42-2d7e-410d-8a74-1ce12bb280b2" containerName="rabbitmq" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.064279 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a792d6c-3a28-4775-87bf-b099ea550a00" containerName="rabbitmq" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.065512 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.070961 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.071491 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.071835 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.074672 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.074948 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.075088 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-pnfj4" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.075206 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.103279 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 28 13:47:08 crc 
kubenswrapper[4897]: I0228 13:47:08.125970 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-pod-info\") pod \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.126013 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-rabbitmq-confd\") pod \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.126109 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-plugins-conf\") pod \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.126125 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-erlang-cookie-secret\") pod \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.126187 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-rabbitmq-erlang-cookie\") pod \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.126205 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-config-data\") pod \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.126243 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwxsd\" (UniqueName: \"kubernetes.io/projected/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-kube-api-access-rwxsd\") pod \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.126265 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-server-conf\") pod \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.126283 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-rabbitmq-plugins\") pod \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.126374 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.126394 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-rabbitmq-tls\") pod \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\" (UID: \"6bf46d42-2d7e-410d-8a74-1ce12bb280b2\") " Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.133018 4897 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "6bf46d42-2d7e-410d-8a74-1ce12bb280b2" (UID: "6bf46d42-2d7e-410d-8a74-1ce12bb280b2"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.133397 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "6bf46d42-2d7e-410d-8a74-1ce12bb280b2" (UID: "6bf46d42-2d7e-410d-8a74-1ce12bb280b2"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.135033 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-kube-api-access-rwxsd" (OuterVolumeSpecName: "kube-api-access-rwxsd") pod "6bf46d42-2d7e-410d-8a74-1ce12bb280b2" (UID: "6bf46d42-2d7e-410d-8a74-1ce12bb280b2"). InnerVolumeSpecName "kube-api-access-rwxsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.135609 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "6bf46d42-2d7e-410d-8a74-1ce12bb280b2" (UID: "6bf46d42-2d7e-410d-8a74-1ce12bb280b2"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.137068 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-pod-info" (OuterVolumeSpecName: "pod-info") pod "6bf46d42-2d7e-410d-8a74-1ce12bb280b2" (UID: "6bf46d42-2d7e-410d-8a74-1ce12bb280b2"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.137072 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "6bf46d42-2d7e-410d-8a74-1ce12bb280b2" (UID: "6bf46d42-2d7e-410d-8a74-1ce12bb280b2"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.138376 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "6bf46d42-2d7e-410d-8a74-1ce12bb280b2" (UID: "6bf46d42-2d7e-410d-8a74-1ce12bb280b2"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.142561 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "6bf46d42-2d7e-410d-8a74-1ce12bb280b2" (UID: "6bf46d42-2d7e-410d-8a74-1ce12bb280b2"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.184265 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-config-data" (OuterVolumeSpecName: "config-data") pod "6bf46d42-2d7e-410d-8a74-1ce12bb280b2" (UID: "6bf46d42-2d7e-410d-8a74-1ce12bb280b2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.244702 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-server-conf" (OuterVolumeSpecName: "server-conf") pod "6bf46d42-2d7e-410d-8a74-1ce12bb280b2" (UID: "6bf46d42-2d7e-410d-8a74-1ce12bb280b2"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.245674 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd\") " pod="openstack/rabbitmq-server-0" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.245748 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhg2t\" (UniqueName: \"kubernetes.io/projected/0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd-kube-api-access-nhg2t\") pod \"rabbitmq-server-0\" (UID: \"0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd\") " pod="openstack/rabbitmq-server-0" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.245802 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd-erlang-cookie-secret\") pod 
\"rabbitmq-server-0\" (UID: \"0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd\") " pod="openstack/rabbitmq-server-0" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.245844 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd\") " pod="openstack/rabbitmq-server-0" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.245869 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd\") " pod="openstack/rabbitmq-server-0" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.245906 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd\") " pod="openstack/rabbitmq-server-0" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.245958 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd\") " pod="openstack/rabbitmq-server-0" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.246001 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd\") " 
pod="openstack/rabbitmq-server-0" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.246018 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd\") " pod="openstack/rabbitmq-server-0" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.246095 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd-config-data\") pod \"rabbitmq-server-0\" (UID: \"0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd\") " pod="openstack/rabbitmq-server-0" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.246147 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd\") " pod="openstack/rabbitmq-server-0" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.246196 4897 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-server-conf\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.246206 4897 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.246224 4897 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 28 13:47:08 
crc kubenswrapper[4897]: I0228 13:47:08.246233 4897 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.246242 4897 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-pod-info\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.246250 4897 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.246259 4897 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.246267 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.246277 4897 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.246286 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwxsd\" (UniqueName: \"kubernetes.io/projected/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-kube-api-access-rwxsd\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.293774 4897 operation_generator.go:917] UnmountDevice succeeded for 
volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.309550 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "6bf46d42-2d7e-410d-8a74-1ce12bb280b2" (UID: "6bf46d42-2d7e-410d-8a74-1ce12bb280b2"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.347857 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd\") " pod="openstack/rabbitmq-server-0" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.347918 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd\") " pod="openstack/rabbitmq-server-0" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.347957 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhg2t\" (UniqueName: \"kubernetes.io/projected/0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd-kube-api-access-nhg2t\") pod \"rabbitmq-server-0\" (UID: \"0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd\") " pod="openstack/rabbitmq-server-0" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.347991 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: 
\"0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd\") " pod="openstack/rabbitmq-server-0" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.348020 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd\") " pod="openstack/rabbitmq-server-0" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.348037 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd\") " pod="openstack/rabbitmq-server-0" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.348063 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd\") " pod="openstack/rabbitmq-server-0" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.348098 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd\") " pod="openstack/rabbitmq-server-0" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.348116 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd\") " pod="openstack/rabbitmq-server-0" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.348133 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd\") " pod="openstack/rabbitmq-server-0" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.348177 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd-config-data\") pod \"rabbitmq-server-0\" (UID: \"0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd\") " pod="openstack/rabbitmq-server-0" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.348242 4897 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6bf46d42-2d7e-410d-8a74-1ce12bb280b2-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.348257 4897 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.348408 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd\") " pod="openstack/rabbitmq-server-0" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.348745 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd\") " pod="openstack/rabbitmq-server-0" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.348825 4897 operation_generator.go:580] "MountVolume.MountDevice 
succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.349094 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd-config-data\") pod \"rabbitmq-server-0\" (UID: \"0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd\") " pod="openstack/rabbitmq-server-0" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.349387 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd\") " pod="openstack/rabbitmq-server-0" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.350596 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd\") " pod="openstack/rabbitmq-server-0" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.352098 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd\") " pod="openstack/rabbitmq-server-0" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.353833 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd\") " 
pod="openstack/rabbitmq-server-0" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.355825 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd\") " pod="openstack/rabbitmq-server-0" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.363988 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd\") " pod="openstack/rabbitmq-server-0" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.380076 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhg2t\" (UniqueName: \"kubernetes.io/projected/0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd-kube-api-access-nhg2t\") pod \"rabbitmq-server-0\" (UID: \"0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd\") " pod="openstack/rabbitmq-server-0" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.392947 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd\") " pod="openstack/rabbitmq-server-0" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.467555 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f47bf46-033d-4191-a8a5-45ba3bc854e4" path="/var/lib/kubelet/pods/2f47bf46-033d-4191-a8a5-45ba3bc854e4/volumes" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.468700 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a792d6c-3a28-4775-87bf-b099ea550a00" path="/var/lib/kubelet/pods/5a792d6c-3a28-4775-87bf-b099ea550a00/volumes" Feb 28 13:47:08 crc 
kubenswrapper[4897]: I0228 13:47:08.470251 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa284414-c19d-466b-a36e-6873e0e3c200" path="/var/lib/kubelet/pods/fa284414-c19d-466b-a36e-6873e0e3c200/volumes" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.599587 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bjhzq" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.599956 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bjhzq" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.663868 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bjhzq" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.689247 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.947404 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.947713 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6bf46d42-2d7e-410d-8a74-1ce12bb280b2","Type":"ContainerDied","Data":"46ab728f52359146111b9eafd7e39ce4b85351f723695b66b4128f8d614e9490"} Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.947761 4897 scope.go:117] "RemoveContainer" containerID="5a48d58771b6ebaaefeaac2908a8631795907b9109de8da771b2087fa08dc7a5" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.971162 4897 scope.go:117] "RemoveContainer" containerID="4e25f72a41edbd1b43773b05d08492b582421f5b717fe5a90ecfa8d2cb7b0d38" Feb 28 13:47:08 crc kubenswrapper[4897]: I0228 13:47:08.992245 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.001380 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.019335 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.024203 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.028227 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bjhzq" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.031805 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.031947 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.033778 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.033990 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.034126 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.037761 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-t2zvl" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.037895 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.042889 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.111699 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bjhzq"] Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.170567 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/59883b9c-0fbf-4d9e-84ee-f9456a6f13aa-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"59883b9c-0fbf-4d9e-84ee-f9456a6f13aa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.170877 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/59883b9c-0fbf-4d9e-84ee-f9456a6f13aa-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"59883b9c-0fbf-4d9e-84ee-f9456a6f13aa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.170911 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/59883b9c-0fbf-4d9e-84ee-f9456a6f13aa-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"59883b9c-0fbf-4d9e-84ee-f9456a6f13aa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.170936 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llfqt\" (UniqueName: \"kubernetes.io/projected/59883b9c-0fbf-4d9e-84ee-f9456a6f13aa-kube-api-access-llfqt\") pod \"rabbitmq-cell1-server-0\" (UID: \"59883b9c-0fbf-4d9e-84ee-f9456a6f13aa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.170982 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/59883b9c-0fbf-4d9e-84ee-f9456a6f13aa-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"59883b9c-0fbf-4d9e-84ee-f9456a6f13aa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.170998 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"59883b9c-0fbf-4d9e-84ee-f9456a6f13aa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.171015 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/59883b9c-0fbf-4d9e-84ee-f9456a6f13aa-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"59883b9c-0fbf-4d9e-84ee-f9456a6f13aa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.171038 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/59883b9c-0fbf-4d9e-84ee-f9456a6f13aa-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"59883b9c-0fbf-4d9e-84ee-f9456a6f13aa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.171052 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/59883b9c-0fbf-4d9e-84ee-f9456a6f13aa-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"59883b9c-0fbf-4d9e-84ee-f9456a6f13aa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.171088 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59883b9c-0fbf-4d9e-84ee-f9456a6f13aa-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"59883b9c-0fbf-4d9e-84ee-f9456a6f13aa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.171106 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/59883b9c-0fbf-4d9e-84ee-f9456a6f13aa-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"59883b9c-0fbf-4d9e-84ee-f9456a6f13aa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.179633 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.272171 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/59883b9c-0fbf-4d9e-84ee-f9456a6f13aa-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"59883b9c-0fbf-4d9e-84ee-f9456a6f13aa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.272218 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/59883b9c-0fbf-4d9e-84ee-f9456a6f13aa-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"59883b9c-0fbf-4d9e-84ee-f9456a6f13aa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.272242 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llfqt\" (UniqueName: \"kubernetes.io/projected/59883b9c-0fbf-4d9e-84ee-f9456a6f13aa-kube-api-access-llfqt\") pod \"rabbitmq-cell1-server-0\" (UID: \"59883b9c-0fbf-4d9e-84ee-f9456a6f13aa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.272294 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"59883b9c-0fbf-4d9e-84ee-f9456a6f13aa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.272328 4897 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/59883b9c-0fbf-4d9e-84ee-f9456a6f13aa-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"59883b9c-0fbf-4d9e-84ee-f9456a6f13aa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.272346 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/59883b9c-0fbf-4d9e-84ee-f9456a6f13aa-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"59883b9c-0fbf-4d9e-84ee-f9456a6f13aa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.272371 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/59883b9c-0fbf-4d9e-84ee-f9456a6f13aa-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"59883b9c-0fbf-4d9e-84ee-f9456a6f13aa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.272388 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/59883b9c-0fbf-4d9e-84ee-f9456a6f13aa-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"59883b9c-0fbf-4d9e-84ee-f9456a6f13aa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.272422 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59883b9c-0fbf-4d9e-84ee-f9456a6f13aa-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"59883b9c-0fbf-4d9e-84ee-f9456a6f13aa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.272437 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/59883b9c-0fbf-4d9e-84ee-f9456a6f13aa-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"59883b9c-0fbf-4d9e-84ee-f9456a6f13aa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.272457 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/59883b9c-0fbf-4d9e-84ee-f9456a6f13aa-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"59883b9c-0fbf-4d9e-84ee-f9456a6f13aa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.272654 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/59883b9c-0fbf-4d9e-84ee-f9456a6f13aa-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"59883b9c-0fbf-4d9e-84ee-f9456a6f13aa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.273319 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/59883b9c-0fbf-4d9e-84ee-f9456a6f13aa-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"59883b9c-0fbf-4d9e-84ee-f9456a6f13aa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.273465 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/59883b9c-0fbf-4d9e-84ee-f9456a6f13aa-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"59883b9c-0fbf-4d9e-84ee-f9456a6f13aa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.273551 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/59883b9c-0fbf-4d9e-84ee-f9456a6f13aa-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"59883b9c-0fbf-4d9e-84ee-f9456a6f13aa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.274164 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59883b9c-0fbf-4d9e-84ee-f9456a6f13aa-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"59883b9c-0fbf-4d9e-84ee-f9456a6f13aa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.276124 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/59883b9c-0fbf-4d9e-84ee-f9456a6f13aa-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"59883b9c-0fbf-4d9e-84ee-f9456a6f13aa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.276917 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/59883b9c-0fbf-4d9e-84ee-f9456a6f13aa-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"59883b9c-0fbf-4d9e-84ee-f9456a6f13aa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.280478 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"59883b9c-0fbf-4d9e-84ee-f9456a6f13aa\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.287889 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/59883b9c-0fbf-4d9e-84ee-f9456a6f13aa-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"59883b9c-0fbf-4d9e-84ee-f9456a6f13aa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 
13:47:09.288432 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/59883b9c-0fbf-4d9e-84ee-f9456a6f13aa-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"59883b9c-0fbf-4d9e-84ee-f9456a6f13aa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.298883 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llfqt\" (UniqueName: \"kubernetes.io/projected/59883b9c-0fbf-4d9e-84ee-f9456a6f13aa-kube-api-access-llfqt\") pod \"rabbitmq-cell1-server-0\" (UID: \"59883b9c-0fbf-4d9e-84ee-f9456a6f13aa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.316201 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"59883b9c-0fbf-4d9e-84ee-f9456a6f13aa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.343726 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.825797 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 28 13:47:09 crc kubenswrapper[4897]: W0228 13:47:09.829346 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59883b9c_0fbf_4d9e_84ee_f9456a6f13aa.slice/crio-54604d62d1a36cc9590811e52d29723e414dae7cc66c062e327a03601e15c242 WatchSource:0}: Error finding container 54604d62d1a36cc9590811e52d29723e414dae7cc66c062e327a03601e15c242: Status 404 returned error can't find the container with id 54604d62d1a36cc9590811e52d29723e414dae7cc66c062e327a03601e15c242 Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.958272 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"59883b9c-0fbf-4d9e-84ee-f9456a6f13aa","Type":"ContainerStarted","Data":"54604d62d1a36cc9590811e52d29723e414dae7cc66c062e327a03601e15c242"} Feb 28 13:47:09 crc kubenswrapper[4897]: I0228 13:47:09.959172 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd","Type":"ContainerStarted","Data":"11d8634f586a9d0dfc0dadc76030975a70f63b1e43b9237c628baa4ae87c0bf3"} Feb 28 13:47:10 crc kubenswrapper[4897]: I0228 13:47:10.046813 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-d97d-account-create-update-7w7s6"] Feb 28 13:47:10 crc kubenswrapper[4897]: I0228 13:47:10.058151 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-6wrh6"] Feb 28 13:47:10 crc kubenswrapper[4897]: I0228 13:47:10.069574 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-6wrh6"] Feb 28 13:47:10 crc kubenswrapper[4897]: I0228 13:47:10.078116 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/glance-d97d-account-create-update-7w7s6"] Feb 28 13:47:10 crc kubenswrapper[4897]: I0228 13:47:10.477926 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4546a11d-5dfc-4055-9b4e-56838508d1fe" path="/var/lib/kubelet/pods/4546a11d-5dfc-4055-9b4e-56838508d1fe/volumes" Feb 28 13:47:10 crc kubenswrapper[4897]: I0228 13:47:10.479910 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bf46d42-2d7e-410d-8a74-1ce12bb280b2" path="/var/lib/kubelet/pods/6bf46d42-2d7e-410d-8a74-1ce12bb280b2/volumes" Feb 28 13:47:10 crc kubenswrapper[4897]: I0228 13:47:10.481112 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82d4936d-bd8d-426b-9799-ac02f672fe1a" path="/var/lib/kubelet/pods/82d4936d-bd8d-426b-9799-ac02f672fe1a/volumes" Feb 28 13:47:10 crc kubenswrapper[4897]: I0228 13:47:10.968052 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bjhzq" podUID="14af1d5e-f67c-4675-afb4-4aff4b78237c" containerName="registry-server" containerID="cri-o://253dff6bb6e9f26ae6c1b73c156284687251ddb07a4d0c0cb76f12da83cc921f" gracePeriod=2 Feb 28 13:47:11 crc kubenswrapper[4897]: I0228 13:47:11.389427 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bjhzq" Feb 28 13:47:11 crc kubenswrapper[4897]: I0228 13:47:11.539187 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14af1d5e-f67c-4675-afb4-4aff4b78237c-catalog-content\") pod \"14af1d5e-f67c-4675-afb4-4aff4b78237c\" (UID: \"14af1d5e-f67c-4675-afb4-4aff4b78237c\") " Feb 28 13:47:11 crc kubenswrapper[4897]: I0228 13:47:11.539275 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdxzw\" (UniqueName: \"kubernetes.io/projected/14af1d5e-f67c-4675-afb4-4aff4b78237c-kube-api-access-vdxzw\") pod \"14af1d5e-f67c-4675-afb4-4aff4b78237c\" (UID: \"14af1d5e-f67c-4675-afb4-4aff4b78237c\") " Feb 28 13:47:11 crc kubenswrapper[4897]: I0228 13:47:11.539380 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14af1d5e-f67c-4675-afb4-4aff4b78237c-utilities\") pod \"14af1d5e-f67c-4675-afb4-4aff4b78237c\" (UID: \"14af1d5e-f67c-4675-afb4-4aff4b78237c\") " Feb 28 13:47:11 crc kubenswrapper[4897]: I0228 13:47:11.540919 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14af1d5e-f67c-4675-afb4-4aff4b78237c-utilities" (OuterVolumeSpecName: "utilities") pod "14af1d5e-f67c-4675-afb4-4aff4b78237c" (UID: "14af1d5e-f67c-4675-afb4-4aff4b78237c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:47:11 crc kubenswrapper[4897]: I0228 13:47:11.545367 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14af1d5e-f67c-4675-afb4-4aff4b78237c-kube-api-access-vdxzw" (OuterVolumeSpecName: "kube-api-access-vdxzw") pod "14af1d5e-f67c-4675-afb4-4aff4b78237c" (UID: "14af1d5e-f67c-4675-afb4-4aff4b78237c"). InnerVolumeSpecName "kube-api-access-vdxzw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:47:11 crc kubenswrapper[4897]: I0228 13:47:11.568508 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14af1d5e-f67c-4675-afb4-4aff4b78237c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14af1d5e-f67c-4675-afb4-4aff4b78237c" (UID: "14af1d5e-f67c-4675-afb4-4aff4b78237c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:47:11 crc kubenswrapper[4897]: I0228 13:47:11.643344 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14af1d5e-f67c-4675-afb4-4aff4b78237c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:11 crc kubenswrapper[4897]: I0228 13:47:11.643374 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdxzw\" (UniqueName: \"kubernetes.io/projected/14af1d5e-f67c-4675-afb4-4aff4b78237c-kube-api-access-vdxzw\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:11 crc kubenswrapper[4897]: I0228 13:47:11.643386 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14af1d5e-f67c-4675-afb4-4aff4b78237c-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:11 crc kubenswrapper[4897]: I0228 13:47:11.979668 4897 generic.go:334] "Generic (PLEG): container finished" podID="14af1d5e-f67c-4675-afb4-4aff4b78237c" containerID="253dff6bb6e9f26ae6c1b73c156284687251ddb07a4d0c0cb76f12da83cc921f" exitCode=0 Feb 28 13:47:11 crc kubenswrapper[4897]: I0228 13:47:11.979712 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bjhzq" Feb 28 13:47:11 crc kubenswrapper[4897]: I0228 13:47:11.979721 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjhzq" event={"ID":"14af1d5e-f67c-4675-afb4-4aff4b78237c","Type":"ContainerDied","Data":"253dff6bb6e9f26ae6c1b73c156284687251ddb07a4d0c0cb76f12da83cc921f"} Feb 28 13:47:11 crc kubenswrapper[4897]: I0228 13:47:11.979788 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjhzq" event={"ID":"14af1d5e-f67c-4675-afb4-4aff4b78237c","Type":"ContainerDied","Data":"2c1b7e551e717f20d4943bd55eb4848b43081d758370c6a77422c644067eb4df"} Feb 28 13:47:11 crc kubenswrapper[4897]: I0228 13:47:11.979812 4897 scope.go:117] "RemoveContainer" containerID="253dff6bb6e9f26ae6c1b73c156284687251ddb07a4d0c0cb76f12da83cc921f" Feb 28 13:47:11 crc kubenswrapper[4897]: I0228 13:47:11.981871 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd","Type":"ContainerStarted","Data":"1633921c84deff1591c31d7fb3fbc877d36bb21584568838b8b22f705962a99e"} Feb 28 13:47:11 crc kubenswrapper[4897]: I0228 13:47:11.991838 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"59883b9c-0fbf-4d9e-84ee-f9456a6f13aa","Type":"ContainerStarted","Data":"3c1e1eca2e1e637f8be1ac0febf21565be00a60c653b73dc7dbef893c625c7c4"} Feb 28 13:47:12 crc kubenswrapper[4897]: I0228 13:47:12.013750 4897 scope.go:117] "RemoveContainer" containerID="ef8a9a61cecbca9f930f9fc3ee563b01e1e4d9d8842be13cb25a7e482841aacd" Feb 28 13:47:12 crc kubenswrapper[4897]: I0228 13:47:12.082540 4897 scope.go:117] "RemoveContainer" containerID="3d7d4dc4862685c3fa0eeb072a8af3720a3e83137f697b0afb9516cd9e8b6961" Feb 28 13:47:12 crc kubenswrapper[4897]: I0228 13:47:12.109536 4897 scope.go:117] "RemoveContainer" 
containerID="253dff6bb6e9f26ae6c1b73c156284687251ddb07a4d0c0cb76f12da83cc921f" Feb 28 13:47:12 crc kubenswrapper[4897]: E0228 13:47:12.109959 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"253dff6bb6e9f26ae6c1b73c156284687251ddb07a4d0c0cb76f12da83cc921f\": container with ID starting with 253dff6bb6e9f26ae6c1b73c156284687251ddb07a4d0c0cb76f12da83cc921f not found: ID does not exist" containerID="253dff6bb6e9f26ae6c1b73c156284687251ddb07a4d0c0cb76f12da83cc921f" Feb 28 13:47:12 crc kubenswrapper[4897]: I0228 13:47:12.109987 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"253dff6bb6e9f26ae6c1b73c156284687251ddb07a4d0c0cb76f12da83cc921f"} err="failed to get container status \"253dff6bb6e9f26ae6c1b73c156284687251ddb07a4d0c0cb76f12da83cc921f\": rpc error: code = NotFound desc = could not find container \"253dff6bb6e9f26ae6c1b73c156284687251ddb07a4d0c0cb76f12da83cc921f\": container with ID starting with 253dff6bb6e9f26ae6c1b73c156284687251ddb07a4d0c0cb76f12da83cc921f not found: ID does not exist" Feb 28 13:47:12 crc kubenswrapper[4897]: I0228 13:47:12.110006 4897 scope.go:117] "RemoveContainer" containerID="ef8a9a61cecbca9f930f9fc3ee563b01e1e4d9d8842be13cb25a7e482841aacd" Feb 28 13:47:12 crc kubenswrapper[4897]: E0228 13:47:12.110227 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef8a9a61cecbca9f930f9fc3ee563b01e1e4d9d8842be13cb25a7e482841aacd\": container with ID starting with ef8a9a61cecbca9f930f9fc3ee563b01e1e4d9d8842be13cb25a7e482841aacd not found: ID does not exist" containerID="ef8a9a61cecbca9f930f9fc3ee563b01e1e4d9d8842be13cb25a7e482841aacd" Feb 28 13:47:12 crc kubenswrapper[4897]: I0228 13:47:12.110247 4897 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ef8a9a61cecbca9f930f9fc3ee563b01e1e4d9d8842be13cb25a7e482841aacd"} err="failed to get container status \"ef8a9a61cecbca9f930f9fc3ee563b01e1e4d9d8842be13cb25a7e482841aacd\": rpc error: code = NotFound desc = could not find container \"ef8a9a61cecbca9f930f9fc3ee563b01e1e4d9d8842be13cb25a7e482841aacd\": container with ID starting with ef8a9a61cecbca9f930f9fc3ee563b01e1e4d9d8842be13cb25a7e482841aacd not found: ID does not exist" Feb 28 13:47:12 crc kubenswrapper[4897]: I0228 13:47:12.110261 4897 scope.go:117] "RemoveContainer" containerID="3d7d4dc4862685c3fa0eeb072a8af3720a3e83137f697b0afb9516cd9e8b6961" Feb 28 13:47:12 crc kubenswrapper[4897]: E0228 13:47:12.110462 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d7d4dc4862685c3fa0eeb072a8af3720a3e83137f697b0afb9516cd9e8b6961\": container with ID starting with 3d7d4dc4862685c3fa0eeb072a8af3720a3e83137f697b0afb9516cd9e8b6961 not found: ID does not exist" containerID="3d7d4dc4862685c3fa0eeb072a8af3720a3e83137f697b0afb9516cd9e8b6961" Feb 28 13:47:12 crc kubenswrapper[4897]: I0228 13:47:12.110489 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d7d4dc4862685c3fa0eeb072a8af3720a3e83137f697b0afb9516cd9e8b6961"} err="failed to get container status \"3d7d4dc4862685c3fa0eeb072a8af3720a3e83137f697b0afb9516cd9e8b6961\": rpc error: code = NotFound desc = could not find container \"3d7d4dc4862685c3fa0eeb072a8af3720a3e83137f697b0afb9516cd9e8b6961\": container with ID starting with 3d7d4dc4862685c3fa0eeb072a8af3720a3e83137f697b0afb9516cd9e8b6961 not found: ID does not exist" Feb 28 13:47:12 crc kubenswrapper[4897]: I0228 13:47:12.147505 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bjhzq"] Feb 28 13:47:12 crc kubenswrapper[4897]: I0228 13:47:12.156980 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-bjhzq"] Feb 28 13:47:12 crc kubenswrapper[4897]: I0228 13:47:12.466207 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14af1d5e-f67c-4675-afb4-4aff4b78237c" path="/var/lib/kubelet/pods/14af1d5e-f67c-4675-afb4-4aff4b78237c/volumes" Feb 28 13:47:13 crc kubenswrapper[4897]: E0228 13:47:13.459509 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:47:16 crc kubenswrapper[4897]: I0228 13:47:16.174510 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6ccb976897-5x5vv"] Feb 28 13:47:16 crc kubenswrapper[4897]: E0228 13:47:16.175572 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14af1d5e-f67c-4675-afb4-4aff4b78237c" containerName="extract-content" Feb 28 13:47:16 crc kubenswrapper[4897]: I0228 13:47:16.175585 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="14af1d5e-f67c-4675-afb4-4aff4b78237c" containerName="extract-content" Feb 28 13:47:16 crc kubenswrapper[4897]: E0228 13:47:16.175605 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14af1d5e-f67c-4675-afb4-4aff4b78237c" containerName="registry-server" Feb 28 13:47:16 crc kubenswrapper[4897]: I0228 13:47:16.175611 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="14af1d5e-f67c-4675-afb4-4aff4b78237c" containerName="registry-server" Feb 28 13:47:16 crc kubenswrapper[4897]: E0228 13:47:16.175623 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14af1d5e-f67c-4675-afb4-4aff4b78237c" containerName="extract-utilities" Feb 28 13:47:16 crc kubenswrapper[4897]: I0228 13:47:16.175630 4897 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="14af1d5e-f67c-4675-afb4-4aff4b78237c" containerName="extract-utilities" Feb 28 13:47:16 crc kubenswrapper[4897]: I0228 13:47:16.175819 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="14af1d5e-f67c-4675-afb4-4aff4b78237c" containerName="registry-server" Feb 28 13:47:16 crc kubenswrapper[4897]: I0228 13:47:16.176790 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ccb976897-5x5vv" Feb 28 13:47:16 crc kubenswrapper[4897]: I0228 13:47:16.179064 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 28 13:47:16 crc kubenswrapper[4897]: I0228 13:47:16.194788 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ccb976897-5x5vv"] Feb 28 13:47:16 crc kubenswrapper[4897]: I0228 13:47:16.231465 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft6wd\" (UniqueName: \"kubernetes.io/projected/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f-kube-api-access-ft6wd\") pod \"dnsmasq-dns-6ccb976897-5x5vv\" (UID: \"3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f\") " pod="openstack/dnsmasq-dns-6ccb976897-5x5vv" Feb 28 13:47:16 crc kubenswrapper[4897]: I0228 13:47:16.231527 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f-dns-swift-storage-0\") pod \"dnsmasq-dns-6ccb976897-5x5vv\" (UID: \"3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f\") " pod="openstack/dnsmasq-dns-6ccb976897-5x5vv" Feb 28 13:47:16 crc kubenswrapper[4897]: I0228 13:47:16.231583 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f-ovsdbserver-nb\") pod \"dnsmasq-dns-6ccb976897-5x5vv\" (UID: \"3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f\") 
" pod="openstack/dnsmasq-dns-6ccb976897-5x5vv" Feb 28 13:47:16 crc kubenswrapper[4897]: I0228 13:47:16.231616 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f-dns-svc\") pod \"dnsmasq-dns-6ccb976897-5x5vv\" (UID: \"3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f\") " pod="openstack/dnsmasq-dns-6ccb976897-5x5vv" Feb 28 13:47:16 crc kubenswrapper[4897]: I0228 13:47:16.231649 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f-config\") pod \"dnsmasq-dns-6ccb976897-5x5vv\" (UID: \"3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f\") " pod="openstack/dnsmasq-dns-6ccb976897-5x5vv" Feb 28 13:47:16 crc kubenswrapper[4897]: I0228 13:47:16.231735 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f-openstack-edpm-ipam\") pod \"dnsmasq-dns-6ccb976897-5x5vv\" (UID: \"3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f\") " pod="openstack/dnsmasq-dns-6ccb976897-5x5vv" Feb 28 13:47:16 crc kubenswrapper[4897]: I0228 13:47:16.231774 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f-ovsdbserver-sb\") pod \"dnsmasq-dns-6ccb976897-5x5vv\" (UID: \"3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f\") " pod="openstack/dnsmasq-dns-6ccb976897-5x5vv" Feb 28 13:47:16 crc kubenswrapper[4897]: I0228 13:47:16.333945 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft6wd\" (UniqueName: \"kubernetes.io/projected/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f-kube-api-access-ft6wd\") pod \"dnsmasq-dns-6ccb976897-5x5vv\" (UID: 
\"3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f\") " pod="openstack/dnsmasq-dns-6ccb976897-5x5vv" Feb 28 13:47:16 crc kubenswrapper[4897]: I0228 13:47:16.334015 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f-dns-swift-storage-0\") pod \"dnsmasq-dns-6ccb976897-5x5vv\" (UID: \"3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f\") " pod="openstack/dnsmasq-dns-6ccb976897-5x5vv" Feb 28 13:47:16 crc kubenswrapper[4897]: I0228 13:47:16.334073 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f-ovsdbserver-nb\") pod \"dnsmasq-dns-6ccb976897-5x5vv\" (UID: \"3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f\") " pod="openstack/dnsmasq-dns-6ccb976897-5x5vv" Feb 28 13:47:16 crc kubenswrapper[4897]: I0228 13:47:16.334108 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f-dns-svc\") pod \"dnsmasq-dns-6ccb976897-5x5vv\" (UID: \"3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f\") " pod="openstack/dnsmasq-dns-6ccb976897-5x5vv" Feb 28 13:47:16 crc kubenswrapper[4897]: I0228 13:47:16.334145 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f-config\") pod \"dnsmasq-dns-6ccb976897-5x5vv\" (UID: \"3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f\") " pod="openstack/dnsmasq-dns-6ccb976897-5x5vv" Feb 28 13:47:16 crc kubenswrapper[4897]: I0228 13:47:16.334199 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f-openstack-edpm-ipam\") pod \"dnsmasq-dns-6ccb976897-5x5vv\" (UID: \"3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f\") " 
pod="openstack/dnsmasq-dns-6ccb976897-5x5vv" Feb 28 13:47:16 crc kubenswrapper[4897]: I0228 13:47:16.334241 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f-ovsdbserver-sb\") pod \"dnsmasq-dns-6ccb976897-5x5vv\" (UID: \"3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f\") " pod="openstack/dnsmasq-dns-6ccb976897-5x5vv" Feb 28 13:47:16 crc kubenswrapper[4897]: I0228 13:47:16.335132 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f-dns-swift-storage-0\") pod \"dnsmasq-dns-6ccb976897-5x5vv\" (UID: \"3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f\") " pod="openstack/dnsmasq-dns-6ccb976897-5x5vv" Feb 28 13:47:16 crc kubenswrapper[4897]: I0228 13:47:16.335132 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f-ovsdbserver-nb\") pod \"dnsmasq-dns-6ccb976897-5x5vv\" (UID: \"3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f\") " pod="openstack/dnsmasq-dns-6ccb976897-5x5vv" Feb 28 13:47:16 crc kubenswrapper[4897]: I0228 13:47:16.335279 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f-dns-svc\") pod \"dnsmasq-dns-6ccb976897-5x5vv\" (UID: \"3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f\") " pod="openstack/dnsmasq-dns-6ccb976897-5x5vv" Feb 28 13:47:16 crc kubenswrapper[4897]: I0228 13:47:16.335580 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f-openstack-edpm-ipam\") pod \"dnsmasq-dns-6ccb976897-5x5vv\" (UID: \"3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f\") " pod="openstack/dnsmasq-dns-6ccb976897-5x5vv" Feb 28 13:47:16 crc 
kubenswrapper[4897]: I0228 13:47:16.335789 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f-ovsdbserver-sb\") pod \"dnsmasq-dns-6ccb976897-5x5vv\" (UID: \"3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f\") " pod="openstack/dnsmasq-dns-6ccb976897-5x5vv" Feb 28 13:47:16 crc kubenswrapper[4897]: I0228 13:47:16.335872 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f-config\") pod \"dnsmasq-dns-6ccb976897-5x5vv\" (UID: \"3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f\") " pod="openstack/dnsmasq-dns-6ccb976897-5x5vv" Feb 28 13:47:16 crc kubenswrapper[4897]: I0228 13:47:16.358134 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft6wd\" (UniqueName: \"kubernetes.io/projected/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f-kube-api-access-ft6wd\") pod \"dnsmasq-dns-6ccb976897-5x5vv\" (UID: \"3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f\") " pod="openstack/dnsmasq-dns-6ccb976897-5x5vv" Feb 28 13:47:16 crc kubenswrapper[4897]: I0228 13:47:16.509341 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6ccb976897-5x5vv" Feb 28 13:47:17 crc kubenswrapper[4897]: W0228 13:47:17.053498 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3eb9b6fb_aa7e_4a7a_b7e6_16e14e08341f.slice/crio-19c6bb1920b978d9aae6edc556549a3fec44e0b4f6b71e29ea6d706e8fcaf00e WatchSource:0}: Error finding container 19c6bb1920b978d9aae6edc556549a3fec44e0b4f6b71e29ea6d706e8fcaf00e: Status 404 returned error can't find the container with id 19c6bb1920b978d9aae6edc556549a3fec44e0b4f6b71e29ea6d706e8fcaf00e Feb 28 13:47:17 crc kubenswrapper[4897]: I0228 13:47:17.064982 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ccb976897-5x5vv"] Feb 28 13:47:18 crc kubenswrapper[4897]: I0228 13:47:18.063542 4897 generic.go:334] "Generic (PLEG): container finished" podID="3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f" containerID="6581b1b98fef6b4e6dd2f4b51e3926c0c5e2aa432f5ad51b0d7a353dd20d08ab" exitCode=0 Feb 28 13:47:18 crc kubenswrapper[4897]: I0228 13:47:18.063888 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ccb976897-5x5vv" event={"ID":"3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f","Type":"ContainerDied","Data":"6581b1b98fef6b4e6dd2f4b51e3926c0c5e2aa432f5ad51b0d7a353dd20d08ab"} Feb 28 13:47:18 crc kubenswrapper[4897]: I0228 13:47:18.063919 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ccb976897-5x5vv" event={"ID":"3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f","Type":"ContainerStarted","Data":"19c6bb1920b978d9aae6edc556549a3fec44e0b4f6b71e29ea6d706e8fcaf00e"} Feb 28 13:47:18 crc kubenswrapper[4897]: I0228 13:47:18.456866 4897 scope.go:117] "RemoveContainer" containerID="aac243b40f8cd522c66dee8379dbaaf158caf96690e017e4b821fa9af1817d1c" Feb 28 13:47:18 crc kubenswrapper[4897]: E0228 13:47:18.457482 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:47:19 crc kubenswrapper[4897]: I0228 13:47:19.083034 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ccb976897-5x5vv" event={"ID":"3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f","Type":"ContainerStarted","Data":"5ca09233ab3a42b95bb90060f2bc29bfc5128dd79c14d9c9f8e38935b6608ce3"} Feb 28 13:47:19 crc kubenswrapper[4897]: I0228 13:47:19.083253 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6ccb976897-5x5vv" Feb 28 13:47:19 crc kubenswrapper[4897]: I0228 13:47:19.122964 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6ccb976897-5x5vv" podStartSLOduration=3.122932509 podStartE2EDuration="3.122932509s" podCreationTimestamp="2026-02-28 13:47:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:47:19.108384681 +0000 UTC m=+1853.350705378" watchObservedRunningTime="2026-02-28 13:47:19.122932509 +0000 UTC m=+1853.365253196" Feb 28 13:47:20 crc kubenswrapper[4897]: E0228 13:47:20.983533 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)" image="registry.redhat.io/ubi9/httpd-24:latest" Feb 28 13:47:20 crc kubenswrapper[4897]: E0228 13:47:20.984195 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ceilometer-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/tls.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ceilometer-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/tls.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6nmmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(49ad0c65-4304-477c-8cfa-c344fcf2ab9b): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 13:47:20 crc kubenswrapper[4897]: E0228 13:47:20.985508 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)\"" pod="openstack/ceilometer-0" 
podUID="49ad0c65-4304-477c-8cfa-c344fcf2ab9b" Feb 28 13:47:26 crc kubenswrapper[4897]: E0228 13:47:26.472099 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:47:26 crc kubenswrapper[4897]: I0228 13:47:26.510699 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6ccb976897-5x5vv" Feb 28 13:47:26 crc kubenswrapper[4897]: I0228 13:47:26.595248 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-596dcdd889-4frbq"] Feb 28 13:47:26 crc kubenswrapper[4897]: I0228 13:47:26.595475 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-596dcdd889-4frbq" podUID="83003cdb-d775-4878-97e7-453c0a1f2ae5" containerName="dnsmasq-dns" containerID="cri-o://de6f36f06dbf82c91ba641a34e18c8dab09d82cbb4465f7b11b64ce8b3ff2c41" gracePeriod=10 Feb 28 13:47:26 crc kubenswrapper[4897]: I0228 13:47:26.730232 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-557fbb6cc7-qchzg"] Feb 28 13:47:26 crc kubenswrapper[4897]: I0228 13:47:26.732418 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-557fbb6cc7-qchzg" Feb 28 13:47:26 crc kubenswrapper[4897]: I0228 13:47:26.762556 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9045e426-bdc0-4327-8c53-1f3e64d1e3a2-ovsdbserver-nb\") pod \"dnsmasq-dns-557fbb6cc7-qchzg\" (UID: \"9045e426-bdc0-4327-8c53-1f3e64d1e3a2\") " pod="openstack/dnsmasq-dns-557fbb6cc7-qchzg" Feb 28 13:47:26 crc kubenswrapper[4897]: I0228 13:47:26.762650 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9045e426-bdc0-4327-8c53-1f3e64d1e3a2-dns-svc\") pod \"dnsmasq-dns-557fbb6cc7-qchzg\" (UID: \"9045e426-bdc0-4327-8c53-1f3e64d1e3a2\") " pod="openstack/dnsmasq-dns-557fbb6cc7-qchzg" Feb 28 13:47:26 crc kubenswrapper[4897]: I0228 13:47:26.762702 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9045e426-bdc0-4327-8c53-1f3e64d1e3a2-dns-swift-storage-0\") pod \"dnsmasq-dns-557fbb6cc7-qchzg\" (UID: \"9045e426-bdc0-4327-8c53-1f3e64d1e3a2\") " pod="openstack/dnsmasq-dns-557fbb6cc7-qchzg" Feb 28 13:47:26 crc kubenswrapper[4897]: I0228 13:47:26.762722 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9045e426-bdc0-4327-8c53-1f3e64d1e3a2-openstack-edpm-ipam\") pod \"dnsmasq-dns-557fbb6cc7-qchzg\" (UID: \"9045e426-bdc0-4327-8c53-1f3e64d1e3a2\") " pod="openstack/dnsmasq-dns-557fbb6cc7-qchzg" Feb 28 13:47:26 crc kubenswrapper[4897]: I0228 13:47:26.762738 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9045e426-bdc0-4327-8c53-1f3e64d1e3a2-config\") pod 
\"dnsmasq-dns-557fbb6cc7-qchzg\" (UID: \"9045e426-bdc0-4327-8c53-1f3e64d1e3a2\") " pod="openstack/dnsmasq-dns-557fbb6cc7-qchzg" Feb 28 13:47:26 crc kubenswrapper[4897]: I0228 13:47:26.762763 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9045e426-bdc0-4327-8c53-1f3e64d1e3a2-ovsdbserver-sb\") pod \"dnsmasq-dns-557fbb6cc7-qchzg\" (UID: \"9045e426-bdc0-4327-8c53-1f3e64d1e3a2\") " pod="openstack/dnsmasq-dns-557fbb6cc7-qchzg" Feb 28 13:47:26 crc kubenswrapper[4897]: I0228 13:47:26.762796 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdzvn\" (UniqueName: \"kubernetes.io/projected/9045e426-bdc0-4327-8c53-1f3e64d1e3a2-kube-api-access-hdzvn\") pod \"dnsmasq-dns-557fbb6cc7-qchzg\" (UID: \"9045e426-bdc0-4327-8c53-1f3e64d1e3a2\") " pod="openstack/dnsmasq-dns-557fbb6cc7-qchzg" Feb 28 13:47:26 crc kubenswrapper[4897]: I0228 13:47:26.805934 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-557fbb6cc7-qchzg"] Feb 28 13:47:26 crc kubenswrapper[4897]: I0228 13:47:26.866558 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9045e426-bdc0-4327-8c53-1f3e64d1e3a2-dns-svc\") pod \"dnsmasq-dns-557fbb6cc7-qchzg\" (UID: \"9045e426-bdc0-4327-8c53-1f3e64d1e3a2\") " pod="openstack/dnsmasq-dns-557fbb6cc7-qchzg" Feb 28 13:47:26 crc kubenswrapper[4897]: I0228 13:47:26.866627 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9045e426-bdc0-4327-8c53-1f3e64d1e3a2-dns-swift-storage-0\") pod \"dnsmasq-dns-557fbb6cc7-qchzg\" (UID: \"9045e426-bdc0-4327-8c53-1f3e64d1e3a2\") " pod="openstack/dnsmasq-dns-557fbb6cc7-qchzg" Feb 28 13:47:26 crc kubenswrapper[4897]: I0228 13:47:26.866647 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9045e426-bdc0-4327-8c53-1f3e64d1e3a2-openstack-edpm-ipam\") pod \"dnsmasq-dns-557fbb6cc7-qchzg\" (UID: \"9045e426-bdc0-4327-8c53-1f3e64d1e3a2\") " pod="openstack/dnsmasq-dns-557fbb6cc7-qchzg" Feb 28 13:47:26 crc kubenswrapper[4897]: I0228 13:47:26.866667 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9045e426-bdc0-4327-8c53-1f3e64d1e3a2-config\") pod \"dnsmasq-dns-557fbb6cc7-qchzg\" (UID: \"9045e426-bdc0-4327-8c53-1f3e64d1e3a2\") " pod="openstack/dnsmasq-dns-557fbb6cc7-qchzg" Feb 28 13:47:26 crc kubenswrapper[4897]: I0228 13:47:26.866692 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9045e426-bdc0-4327-8c53-1f3e64d1e3a2-ovsdbserver-sb\") pod \"dnsmasq-dns-557fbb6cc7-qchzg\" (UID: \"9045e426-bdc0-4327-8c53-1f3e64d1e3a2\") " pod="openstack/dnsmasq-dns-557fbb6cc7-qchzg" Feb 28 13:47:26 crc kubenswrapper[4897]: I0228 13:47:26.866726 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdzvn\" (UniqueName: \"kubernetes.io/projected/9045e426-bdc0-4327-8c53-1f3e64d1e3a2-kube-api-access-hdzvn\") pod \"dnsmasq-dns-557fbb6cc7-qchzg\" (UID: \"9045e426-bdc0-4327-8c53-1f3e64d1e3a2\") " pod="openstack/dnsmasq-dns-557fbb6cc7-qchzg" Feb 28 13:47:26 crc kubenswrapper[4897]: I0228 13:47:26.866792 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9045e426-bdc0-4327-8c53-1f3e64d1e3a2-ovsdbserver-nb\") pod \"dnsmasq-dns-557fbb6cc7-qchzg\" (UID: \"9045e426-bdc0-4327-8c53-1f3e64d1e3a2\") " pod="openstack/dnsmasq-dns-557fbb6cc7-qchzg" Feb 28 13:47:26 crc kubenswrapper[4897]: I0228 13:47:26.867570 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9045e426-bdc0-4327-8c53-1f3e64d1e3a2-ovsdbserver-nb\") pod \"dnsmasq-dns-557fbb6cc7-qchzg\" (UID: \"9045e426-bdc0-4327-8c53-1f3e64d1e3a2\") " pod="openstack/dnsmasq-dns-557fbb6cc7-qchzg" Feb 28 13:47:26 crc kubenswrapper[4897]: I0228 13:47:26.868071 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9045e426-bdc0-4327-8c53-1f3e64d1e3a2-dns-svc\") pod \"dnsmasq-dns-557fbb6cc7-qchzg\" (UID: \"9045e426-bdc0-4327-8c53-1f3e64d1e3a2\") " pod="openstack/dnsmasq-dns-557fbb6cc7-qchzg" Feb 28 13:47:26 crc kubenswrapper[4897]: I0228 13:47:26.869015 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9045e426-bdc0-4327-8c53-1f3e64d1e3a2-config\") pod \"dnsmasq-dns-557fbb6cc7-qchzg\" (UID: \"9045e426-bdc0-4327-8c53-1f3e64d1e3a2\") " pod="openstack/dnsmasq-dns-557fbb6cc7-qchzg" Feb 28 13:47:26 crc kubenswrapper[4897]: I0228 13:47:26.870985 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9045e426-bdc0-4327-8c53-1f3e64d1e3a2-ovsdbserver-sb\") pod \"dnsmasq-dns-557fbb6cc7-qchzg\" (UID: \"9045e426-bdc0-4327-8c53-1f3e64d1e3a2\") " pod="openstack/dnsmasq-dns-557fbb6cc7-qchzg" Feb 28 13:47:26 crc kubenswrapper[4897]: I0228 13:47:26.872730 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9045e426-bdc0-4327-8c53-1f3e64d1e3a2-dns-swift-storage-0\") pod \"dnsmasq-dns-557fbb6cc7-qchzg\" (UID: \"9045e426-bdc0-4327-8c53-1f3e64d1e3a2\") " pod="openstack/dnsmasq-dns-557fbb6cc7-qchzg" Feb 28 13:47:26 crc kubenswrapper[4897]: I0228 13:47:26.873929 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/9045e426-bdc0-4327-8c53-1f3e64d1e3a2-openstack-edpm-ipam\") pod \"dnsmasq-dns-557fbb6cc7-qchzg\" (UID: \"9045e426-bdc0-4327-8c53-1f3e64d1e3a2\") " pod="openstack/dnsmasq-dns-557fbb6cc7-qchzg" Feb 28 13:47:26 crc kubenswrapper[4897]: I0228 13:47:26.893175 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdzvn\" (UniqueName: \"kubernetes.io/projected/9045e426-bdc0-4327-8c53-1f3e64d1e3a2-kube-api-access-hdzvn\") pod \"dnsmasq-dns-557fbb6cc7-qchzg\" (UID: \"9045e426-bdc0-4327-8c53-1f3e64d1e3a2\") " pod="openstack/dnsmasq-dns-557fbb6cc7-qchzg" Feb 28 13:47:27 crc kubenswrapper[4897]: I0228 13:47:27.096363 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-5h5dn"] Feb 28 13:47:27 crc kubenswrapper[4897]: I0228 13:47:27.112710 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-557fbb6cc7-qchzg" Feb 28 13:47:27 crc kubenswrapper[4897]: I0228 13:47:27.124078 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-5h5dn"] Feb 28 13:47:27 crc kubenswrapper[4897]: I0228 13:47:27.198466 4897 generic.go:334] "Generic (PLEG): container finished" podID="83003cdb-d775-4878-97e7-453c0a1f2ae5" containerID="de6f36f06dbf82c91ba641a34e18c8dab09d82cbb4465f7b11b64ce8b3ff2c41" exitCode=0 Feb 28 13:47:27 crc kubenswrapper[4897]: I0228 13:47:27.198505 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-596dcdd889-4frbq" event={"ID":"83003cdb-d775-4878-97e7-453c0a1f2ae5","Type":"ContainerDied","Data":"de6f36f06dbf82c91ba641a34e18c8dab09d82cbb4465f7b11b64ce8b3ff2c41"} Feb 28 13:47:27 crc kubenswrapper[4897]: I0228 13:47:27.198530 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-596dcdd889-4frbq" 
event={"ID":"83003cdb-d775-4878-97e7-453c0a1f2ae5","Type":"ContainerDied","Data":"c573087abf674e24122467d3453d444f224898b935eab83d92f9fc5f3663573c"} Feb 28 13:47:27 crc kubenswrapper[4897]: I0228 13:47:27.198539 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c573087abf674e24122467d3453d444f224898b935eab83d92f9fc5f3663573c" Feb 28 13:47:27 crc kubenswrapper[4897]: I0228 13:47:27.252249 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-596dcdd889-4frbq" Feb 28 13:47:27 crc kubenswrapper[4897]: I0228 13:47:27.382114 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/83003cdb-d775-4878-97e7-453c0a1f2ae5-ovsdbserver-sb\") pod \"83003cdb-d775-4878-97e7-453c0a1f2ae5\" (UID: \"83003cdb-d775-4878-97e7-453c0a1f2ae5\") " Feb 28 13:47:27 crc kubenswrapper[4897]: I0228 13:47:27.382450 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-849kb\" (UniqueName: \"kubernetes.io/projected/83003cdb-d775-4878-97e7-453c0a1f2ae5-kube-api-access-849kb\") pod \"83003cdb-d775-4878-97e7-453c0a1f2ae5\" (UID: \"83003cdb-d775-4878-97e7-453c0a1f2ae5\") " Feb 28 13:47:27 crc kubenswrapper[4897]: I0228 13:47:27.382488 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/83003cdb-d775-4878-97e7-453c0a1f2ae5-ovsdbserver-nb\") pod \"83003cdb-d775-4878-97e7-453c0a1f2ae5\" (UID: \"83003cdb-d775-4878-97e7-453c0a1f2ae5\") " Feb 28 13:47:27 crc kubenswrapper[4897]: I0228 13:47:27.382550 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83003cdb-d775-4878-97e7-453c0a1f2ae5-config\") pod \"83003cdb-d775-4878-97e7-453c0a1f2ae5\" (UID: \"83003cdb-d775-4878-97e7-453c0a1f2ae5\") " Feb 28 13:47:27 crc 
kubenswrapper[4897]: I0228 13:47:27.382584 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83003cdb-d775-4878-97e7-453c0a1f2ae5-dns-svc\") pod \"83003cdb-d775-4878-97e7-453c0a1f2ae5\" (UID: \"83003cdb-d775-4878-97e7-453c0a1f2ae5\") " Feb 28 13:47:27 crc kubenswrapper[4897]: I0228 13:47:27.382687 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/83003cdb-d775-4878-97e7-453c0a1f2ae5-dns-swift-storage-0\") pod \"83003cdb-d775-4878-97e7-453c0a1f2ae5\" (UID: \"83003cdb-d775-4878-97e7-453c0a1f2ae5\") " Feb 28 13:47:27 crc kubenswrapper[4897]: I0228 13:47:27.391042 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83003cdb-d775-4878-97e7-453c0a1f2ae5-kube-api-access-849kb" (OuterVolumeSpecName: "kube-api-access-849kb") pod "83003cdb-d775-4878-97e7-453c0a1f2ae5" (UID: "83003cdb-d775-4878-97e7-453c0a1f2ae5"). InnerVolumeSpecName "kube-api-access-849kb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:47:27 crc kubenswrapper[4897]: I0228 13:47:27.433077 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83003cdb-d775-4878-97e7-453c0a1f2ae5-config" (OuterVolumeSpecName: "config") pod "83003cdb-d775-4878-97e7-453c0a1f2ae5" (UID: "83003cdb-d775-4878-97e7-453c0a1f2ae5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:47:27 crc kubenswrapper[4897]: I0228 13:47:27.454122 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83003cdb-d775-4878-97e7-453c0a1f2ae5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "83003cdb-d775-4878-97e7-453c0a1f2ae5" (UID: "83003cdb-d775-4878-97e7-453c0a1f2ae5"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:47:27 crc kubenswrapper[4897]: I0228 13:47:27.459891 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83003cdb-d775-4878-97e7-453c0a1f2ae5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "83003cdb-d775-4878-97e7-453c0a1f2ae5" (UID: "83003cdb-d775-4878-97e7-453c0a1f2ae5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:47:27 crc kubenswrapper[4897]: I0228 13:47:27.485978 4897 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/83003cdb-d775-4878-97e7-453c0a1f2ae5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:27 crc kubenswrapper[4897]: I0228 13:47:27.486011 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/83003cdb-d775-4878-97e7-453c0a1f2ae5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:27 crc kubenswrapper[4897]: I0228 13:47:27.486020 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-849kb\" (UniqueName: \"kubernetes.io/projected/83003cdb-d775-4878-97e7-453c0a1f2ae5-kube-api-access-849kb\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:27 crc kubenswrapper[4897]: I0228 13:47:27.486032 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83003cdb-d775-4878-97e7-453c0a1f2ae5-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:27 crc kubenswrapper[4897]: I0228 13:47:27.487232 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83003cdb-d775-4878-97e7-453c0a1f2ae5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "83003cdb-d775-4878-97e7-453c0a1f2ae5" (UID: "83003cdb-d775-4878-97e7-453c0a1f2ae5"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:47:27 crc kubenswrapper[4897]: I0228 13:47:27.494809 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83003cdb-d775-4878-97e7-453c0a1f2ae5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "83003cdb-d775-4878-97e7-453c0a1f2ae5" (UID: "83003cdb-d775-4878-97e7-453c0a1f2ae5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:47:27 crc kubenswrapper[4897]: I0228 13:47:27.589017 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/83003cdb-d775-4878-97e7-453c0a1f2ae5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:27 crc kubenswrapper[4897]: I0228 13:47:27.589066 4897 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83003cdb-d775-4878-97e7-453c0a1f2ae5-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:27 crc kubenswrapper[4897]: I0228 13:47:27.660533 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-557fbb6cc7-qchzg"] Feb 28 13:47:27 crc kubenswrapper[4897]: W0228 13:47:27.660580 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9045e426_bdc0_4327_8c53_1f3e64d1e3a2.slice/crio-71218c2fa3cdcddc01d7ecb870eb02b653764a051007665e69b782929e96988a WatchSource:0}: Error finding container 71218c2fa3cdcddc01d7ecb870eb02b653764a051007665e69b782929e96988a: Status 404 returned error can't find the container with id 71218c2fa3cdcddc01d7ecb870eb02b653764a051007665e69b782929e96988a Feb 28 13:47:28 crc kubenswrapper[4897]: I0228 13:47:28.213435 4897 generic.go:334] "Generic (PLEG): container finished" podID="9045e426-bdc0-4327-8c53-1f3e64d1e3a2" containerID="72b02b713eb8f4d5dbd38b8a54a05e2e3d27756cee13efb090a1488ced9350f3" exitCode=0 Feb 28 13:47:28 
crc kubenswrapper[4897]: I0228 13:47:28.213537 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557fbb6cc7-qchzg" event={"ID":"9045e426-bdc0-4327-8c53-1f3e64d1e3a2","Type":"ContainerDied","Data":"72b02b713eb8f4d5dbd38b8a54a05e2e3d27756cee13efb090a1488ced9350f3"} Feb 28 13:47:28 crc kubenswrapper[4897]: I0228 13:47:28.213630 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557fbb6cc7-qchzg" event={"ID":"9045e426-bdc0-4327-8c53-1f3e64d1e3a2","Type":"ContainerStarted","Data":"71218c2fa3cdcddc01d7ecb870eb02b653764a051007665e69b782929e96988a"} Feb 28 13:47:28 crc kubenswrapper[4897]: I0228 13:47:28.213712 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-596dcdd889-4frbq" Feb 28 13:47:28 crc kubenswrapper[4897]: I0228 13:47:28.411824 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-596dcdd889-4frbq"] Feb 28 13:47:28 crc kubenswrapper[4897]: I0228 13:47:28.420677 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-596dcdd889-4frbq"] Feb 28 13:47:28 crc kubenswrapper[4897]: I0228 13:47:28.479507 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83003cdb-d775-4878-97e7-453c0a1f2ae5" path="/var/lib/kubelet/pods/83003cdb-d775-4878-97e7-453c0a1f2ae5/volumes" Feb 28 13:47:28 crc kubenswrapper[4897]: I0228 13:47:28.480981 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9f9d43-6498-42ee-a72c-e88395991277" path="/var/lib/kubelet/pods/9e9f9d43-6498-42ee-a72c-e88395991277/volumes" Feb 28 13:47:29 crc kubenswrapper[4897]: I0228 13:47:29.226128 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557fbb6cc7-qchzg" event={"ID":"9045e426-bdc0-4327-8c53-1f3e64d1e3a2","Type":"ContainerStarted","Data":"824e5ab65014c1bd96ca28ea901033810a75e354a37619f1153b738aed5b51d2"} Feb 28 13:47:29 crc kubenswrapper[4897]: I0228 
13:47:29.226369 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-557fbb6cc7-qchzg" Feb 28 13:47:29 crc kubenswrapper[4897]: I0228 13:47:29.251030 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-557fbb6cc7-qchzg" podStartSLOduration=3.25100969 podStartE2EDuration="3.25100969s" podCreationTimestamp="2026-02-28 13:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:47:29.246736117 +0000 UTC m=+1863.489056794" watchObservedRunningTime="2026-02-28 13:47:29.25100969 +0000 UTC m=+1863.493330347" Feb 28 13:47:31 crc kubenswrapper[4897]: I0228 13:47:31.456669 4897 scope.go:117] "RemoveContainer" containerID="aac243b40f8cd522c66dee8379dbaaf158caf96690e017e4b821fa9af1817d1c" Feb 28 13:47:31 crc kubenswrapper[4897]: E0228 13:47:31.457132 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:47:34 crc kubenswrapper[4897]: I0228 13:47:34.047419 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-5h967"] Feb 28 13:47:34 crc kubenswrapper[4897]: I0228 13:47:34.062596 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-xtnhx"] Feb 28 13:47:34 crc kubenswrapper[4897]: I0228 13:47:34.078216 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-tklpd"] Feb 28 13:47:34 crc kubenswrapper[4897]: I0228 13:47:34.088250 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/cinder-d935-account-create-update-l5hzg"] Feb 28 13:47:34 crc kubenswrapper[4897]: I0228 13:47:34.095796 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-3c48-account-create-update-58hww"] Feb 28 13:47:34 crc kubenswrapper[4897]: I0228 13:47:34.102795 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-5h967"] Feb 28 13:47:34 crc kubenswrapper[4897]: I0228 13:47:34.110124 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-xtnhx"] Feb 28 13:47:34 crc kubenswrapper[4897]: I0228 13:47:34.117113 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-tklpd"] Feb 28 13:47:34 crc kubenswrapper[4897]: I0228 13:47:34.124490 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-d935-account-create-update-l5hzg"] Feb 28 13:47:34 crc kubenswrapper[4897]: I0228 13:47:34.132192 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-3c48-account-create-update-58hww"] Feb 28 13:47:34 crc kubenswrapper[4897]: I0228 13:47:34.474579 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="176faf0b-5f7d-450b-871e-9d5df2595562" path="/var/lib/kubelet/pods/176faf0b-5f7d-450b-871e-9d5df2595562/volumes" Feb 28 13:47:34 crc kubenswrapper[4897]: I0228 13:47:34.477502 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="979b68d5-45f7-4a2d-aae1-0e93d2de732e" path="/var/lib/kubelet/pods/979b68d5-45f7-4a2d-aae1-0e93d2de732e/volumes" Feb 28 13:47:34 crc kubenswrapper[4897]: I0228 13:47:34.478883 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d754bb18-6ebe-445e-8826-53d247030dc7" path="/var/lib/kubelet/pods/d754bb18-6ebe-445e-8826-53d247030dc7/volumes" Feb 28 13:47:34 crc kubenswrapper[4897]: I0228 13:47:34.480443 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3f6c67d-0efe-493c-9e09-781291a958cd" 
path="/var/lib/kubelet/pods/f3f6c67d-0efe-493c-9e09-781291a958cd/volumes" Feb 28 13:47:34 crc kubenswrapper[4897]: I0228 13:47:34.483016 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faa6dea0-3e57-4736-9a58-3885f7a30f18" path="/var/lib/kubelet/pods/faa6dea0-3e57-4736-9a58-3885f7a30f18/volumes" Feb 28 13:47:36 crc kubenswrapper[4897]: I0228 13:47:36.042588 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-gf6hc"] Feb 28 13:47:36 crc kubenswrapper[4897]: I0228 13:47:36.062425 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-aec9-account-create-update-d7fk5"] Feb 28 13:47:36 crc kubenswrapper[4897]: I0228 13:47:36.074620 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-gf6hc"] Feb 28 13:47:36 crc kubenswrapper[4897]: I0228 13:47:36.084569 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-aec9-account-create-update-d7fk5"] Feb 28 13:47:36 crc kubenswrapper[4897]: E0228 13:47:36.471142 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="49ad0c65-4304-477c-8cfa-c344fcf2ab9b" Feb 28 13:47:36 crc kubenswrapper[4897]: I0228 13:47:36.474032 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d8e5b42-44ea-4242-b46b-77efc3fb0826" path="/var/lib/kubelet/pods/9d8e5b42-44ea-4242-b46b-77efc3fb0826/volumes" Feb 28 13:47:36 crc kubenswrapper[4897]: I0228 13:47:36.476274 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1f06e40-0ffb-4bdc-98ea-e6d44c5d8e42" path="/var/lib/kubelet/pods/e1f06e40-0ffb-4bdc-98ea-e6d44c5d8e42/volumes" Feb 28 13:47:37 crc kubenswrapper[4897]: I0228 13:47:37.114606 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-557fbb6cc7-qchzg" Feb 28 13:47:37 crc kubenswrapper[4897]: I0228 13:47:37.216041 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ccb976897-5x5vv"] Feb 28 13:47:37 crc kubenswrapper[4897]: I0228 13:47:37.216356 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6ccb976897-5x5vv" podUID="3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f" containerName="dnsmasq-dns" containerID="cri-o://5ca09233ab3a42b95bb90060f2bc29bfc5128dd79c14d9c9f8e38935b6608ce3" gracePeriod=10 Feb 28 13:47:37 crc kubenswrapper[4897]: I0228 13:47:37.732407 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ccb976897-5x5vv" Feb 28 13:47:37 crc kubenswrapper[4897]: I0228 13:47:37.839788 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f-openstack-edpm-ipam\") pod \"3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f\" (UID: \"3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f\") " Feb 28 13:47:37 crc kubenswrapper[4897]: I0228 13:47:37.839839 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ft6wd\" (UniqueName: \"kubernetes.io/projected/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f-kube-api-access-ft6wd\") pod \"3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f\" (UID: \"3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f\") " Feb 28 13:47:37 crc kubenswrapper[4897]: I0228 13:47:37.839873 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f-ovsdbserver-nb\") pod \"3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f\" (UID: \"3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f\") " Feb 28 13:47:37 crc kubenswrapper[4897]: I0228 13:47:37.839931 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f-ovsdbserver-sb\") pod \"3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f\" (UID: \"3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f\") " Feb 28 13:47:37 crc kubenswrapper[4897]: I0228 13:47:37.840026 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f-dns-svc\") pod \"3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f\" (UID: \"3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f\") " Feb 28 13:47:37 crc kubenswrapper[4897]: I0228 13:47:37.840076 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f-config\") pod \"3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f\" (UID: \"3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f\") " Feb 28 13:47:37 crc kubenswrapper[4897]: I0228 13:47:37.840265 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f-dns-swift-storage-0\") pod \"3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f\" (UID: \"3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f\") " Feb 28 13:47:37 crc kubenswrapper[4897]: I0228 13:47:37.869417 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f-kube-api-access-ft6wd" (OuterVolumeSpecName: "kube-api-access-ft6wd") pod "3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f" (UID: "3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f"). InnerVolumeSpecName "kube-api-access-ft6wd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:47:37 crc kubenswrapper[4897]: I0228 13:47:37.904336 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f" (UID: "3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:47:37 crc kubenswrapper[4897]: I0228 13:47:37.915511 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f" (UID: "3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:47:37 crc kubenswrapper[4897]: I0228 13:47:37.918918 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f" (UID: "3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:47:37 crc kubenswrapper[4897]: I0228 13:47:37.926735 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f-config" (OuterVolumeSpecName: "config") pod "3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f" (UID: "3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:47:37 crc kubenswrapper[4897]: I0228 13:47:37.927603 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f" (UID: "3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:47:37 crc kubenswrapper[4897]: I0228 13:47:37.932728 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f" (UID: "3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:47:37 crc kubenswrapper[4897]: I0228 13:47:37.942562 4897 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:37 crc kubenswrapper[4897]: I0228 13:47:37.942595 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f-config\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:37 crc kubenswrapper[4897]: I0228 13:47:37.942605 4897 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:37 crc kubenswrapper[4897]: I0228 13:47:37.942618 4897 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f-openstack-edpm-ipam\") on node \"crc\" 
DevicePath \"\"" Feb 28 13:47:37 crc kubenswrapper[4897]: I0228 13:47:37.942630 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ft6wd\" (UniqueName: \"kubernetes.io/projected/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f-kube-api-access-ft6wd\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:37 crc kubenswrapper[4897]: I0228 13:47:37.942638 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:37 crc kubenswrapper[4897]: I0228 13:47:37.942646 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 13:47:38 crc kubenswrapper[4897]: I0228 13:47:38.341196 4897 generic.go:334] "Generic (PLEG): container finished" podID="3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f" containerID="5ca09233ab3a42b95bb90060f2bc29bfc5128dd79c14d9c9f8e38935b6608ce3" exitCode=0 Feb 28 13:47:38 crc kubenswrapper[4897]: I0228 13:47:38.341236 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ccb976897-5x5vv" event={"ID":"3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f","Type":"ContainerDied","Data":"5ca09233ab3a42b95bb90060f2bc29bfc5128dd79c14d9c9f8e38935b6608ce3"} Feb 28 13:47:38 crc kubenswrapper[4897]: I0228 13:47:38.341268 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ccb976897-5x5vv" event={"ID":"3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f","Type":"ContainerDied","Data":"19c6bb1920b978d9aae6edc556549a3fec44e0b4f6b71e29ea6d706e8fcaf00e"} Feb 28 13:47:38 crc kubenswrapper[4897]: I0228 13:47:38.341285 4897 scope.go:117] "RemoveContainer" containerID="5ca09233ab3a42b95bb90060f2bc29bfc5128dd79c14d9c9f8e38935b6608ce3" Feb 28 13:47:38 crc kubenswrapper[4897]: I0228 13:47:38.341280 4897 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ccb976897-5x5vv" Feb 28 13:47:38 crc kubenswrapper[4897]: I0228 13:47:38.415140 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ccb976897-5x5vv"] Feb 28 13:47:38 crc kubenswrapper[4897]: I0228 13:47:38.416036 4897 scope.go:117] "RemoveContainer" containerID="6581b1b98fef6b4e6dd2f4b51e3926c0c5e2aa432f5ad51b0d7a353dd20d08ab" Feb 28 13:47:38 crc kubenswrapper[4897]: I0228 13:47:38.424417 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6ccb976897-5x5vv"] Feb 28 13:47:38 crc kubenswrapper[4897]: I0228 13:47:38.447147 4897 scope.go:117] "RemoveContainer" containerID="5ca09233ab3a42b95bb90060f2bc29bfc5128dd79c14d9c9f8e38935b6608ce3" Feb 28 13:47:38 crc kubenswrapper[4897]: E0228 13:47:38.447527 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ca09233ab3a42b95bb90060f2bc29bfc5128dd79c14d9c9f8e38935b6608ce3\": container with ID starting with 5ca09233ab3a42b95bb90060f2bc29bfc5128dd79c14d9c9f8e38935b6608ce3 not found: ID does not exist" containerID="5ca09233ab3a42b95bb90060f2bc29bfc5128dd79c14d9c9f8e38935b6608ce3" Feb 28 13:47:38 crc kubenswrapper[4897]: I0228 13:47:38.447563 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ca09233ab3a42b95bb90060f2bc29bfc5128dd79c14d9c9f8e38935b6608ce3"} err="failed to get container status \"5ca09233ab3a42b95bb90060f2bc29bfc5128dd79c14d9c9f8e38935b6608ce3\": rpc error: code = NotFound desc = could not find container \"5ca09233ab3a42b95bb90060f2bc29bfc5128dd79c14d9c9f8e38935b6608ce3\": container with ID starting with 5ca09233ab3a42b95bb90060f2bc29bfc5128dd79c14d9c9f8e38935b6608ce3 not found: ID does not exist" Feb 28 13:47:38 crc kubenswrapper[4897]: I0228 13:47:38.447587 4897 scope.go:117] "RemoveContainer" 
containerID="6581b1b98fef6b4e6dd2f4b51e3926c0c5e2aa432f5ad51b0d7a353dd20d08ab" Feb 28 13:47:38 crc kubenswrapper[4897]: E0228 13:47:38.447798 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6581b1b98fef6b4e6dd2f4b51e3926c0c5e2aa432f5ad51b0d7a353dd20d08ab\": container with ID starting with 6581b1b98fef6b4e6dd2f4b51e3926c0c5e2aa432f5ad51b0d7a353dd20d08ab not found: ID does not exist" containerID="6581b1b98fef6b4e6dd2f4b51e3926c0c5e2aa432f5ad51b0d7a353dd20d08ab" Feb 28 13:47:38 crc kubenswrapper[4897]: I0228 13:47:38.447824 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6581b1b98fef6b4e6dd2f4b51e3926c0c5e2aa432f5ad51b0d7a353dd20d08ab"} err="failed to get container status \"6581b1b98fef6b4e6dd2f4b51e3926c0c5e2aa432f5ad51b0d7a353dd20d08ab\": rpc error: code = NotFound desc = could not find container \"6581b1b98fef6b4e6dd2f4b51e3926c0c5e2aa432f5ad51b0d7a353dd20d08ab\": container with ID starting with 6581b1b98fef6b4e6dd2f4b51e3926c0c5e2aa432f5ad51b0d7a353dd20d08ab not found: ID does not exist" Feb 28 13:47:38 crc kubenswrapper[4897]: I0228 13:47:38.468128 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f" path="/var/lib/kubelet/pods/3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f/volumes" Feb 28 13:47:41 crc kubenswrapper[4897]: E0228 13:47:41.458289 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:47:42 crc kubenswrapper[4897]: I0228 13:47:42.034914 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-8ddqn"] Feb 28 13:47:42 crc kubenswrapper[4897]: I0228 
13:47:42.048920 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-8ddqn"] Feb 28 13:47:42 crc kubenswrapper[4897]: I0228 13:47:42.479252 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="041df086-096c-4dd1-9e4e-d06a2051084c" path="/var/lib/kubelet/pods/041df086-096c-4dd1-9e4e-d06a2051084c/volumes" Feb 28 13:47:43 crc kubenswrapper[4897]: I0228 13:47:43.229665 4897 scope.go:117] "RemoveContainer" containerID="3d6e1fd1e3aa83214803795990dfb91e73e93414fd0c78f76546a886bffe3650" Feb 28 13:47:43 crc kubenswrapper[4897]: I0228 13:47:43.268813 4897 scope.go:117] "RemoveContainer" containerID="2654289e63af92b0a9404fbcb959747e21fb74bd21d9959dc9b4fd19a52623aa" Feb 28 13:47:43 crc kubenswrapper[4897]: I0228 13:47:43.336415 4897 scope.go:117] "RemoveContainer" containerID="158c1b04885211c76f210e7f30c77ce9e50d95266202b2dedcf44d02ddbea3a7" Feb 28 13:47:43 crc kubenswrapper[4897]: I0228 13:47:43.400570 4897 scope.go:117] "RemoveContainer" containerID="054dce8b30edb292831f98a5cfee5d3dffb5788c55d5f1d717ac7e3b40882bdc" Feb 28 13:47:43 crc kubenswrapper[4897]: I0228 13:47:43.503443 4897 scope.go:117] "RemoveContainer" containerID="274c33b5a9c838c934dc6102615a32c67ab5aacf47b191840e62573a571f7cb9" Feb 28 13:47:43 crc kubenswrapper[4897]: I0228 13:47:43.536345 4897 scope.go:117] "RemoveContainer" containerID="eaf6b36b47c230f9601ab79a463ca9e43223cea6201564669a3581ebfb31f9f2" Feb 28 13:47:43 crc kubenswrapper[4897]: I0228 13:47:43.573617 4897 scope.go:117] "RemoveContainer" containerID="de6f36f06dbf82c91ba641a34e18c8dab09d82cbb4465f7b11b64ce8b3ff2c41" Feb 28 13:47:43 crc kubenswrapper[4897]: I0228 13:47:43.608412 4897 scope.go:117] "RemoveContainer" containerID="432bfebc64b1798bae6ab76386d8d01deaa11e46a4fcad025b4efebfacf11d97" Feb 28 13:47:43 crc kubenswrapper[4897]: I0228 13:47:43.634932 4897 scope.go:117] "RemoveContainer" containerID="6bfed7904117d7ea1ca961d5ee28a4cbd4c6444fce1dfac7b34f745cbde857d6" Feb 28 13:47:43 crc 
kubenswrapper[4897]: I0228 13:47:43.658404 4897 scope.go:117] "RemoveContainer" containerID="1e6d8dcf42007574e0c00f378d7ed248461634e59db3d46aa4f6565e590372f0" Feb 28 13:47:43 crc kubenswrapper[4897]: I0228 13:47:43.677925 4897 scope.go:117] "RemoveContainer" containerID="8e7f47d41ff2ce80d174e90b3f7e1a3208c77732bb9a7483eb927314057697c1" Feb 28 13:47:43 crc kubenswrapper[4897]: I0228 13:47:43.699627 4897 scope.go:117] "RemoveContainer" containerID="47abe7b299bafd64d8090fc91f1637586bf96ff210b53ecb9974158920438cd5" Feb 28 13:47:43 crc kubenswrapper[4897]: I0228 13:47:43.778450 4897 scope.go:117] "RemoveContainer" containerID="92d3fd9ca97ff08c5490f83032c9ea110962c3083b40fbe48cca52f134d24e27" Feb 28 13:47:43 crc kubenswrapper[4897]: I0228 13:47:43.807159 4897 scope.go:117] "RemoveContainer" containerID="8a1c6ca9133cd43b4cd58f386b33a8fd2276468706580142c148e7b3d3b6d5b3" Feb 28 13:47:43 crc kubenswrapper[4897]: I0228 13:47:43.832513 4897 scope.go:117] "RemoveContainer" containerID="a63886b04d33ffa5a7d19c5ac97da96890acb97e7dfaee9e83ca38db369e8e9f" Feb 28 13:47:43 crc kubenswrapper[4897]: I0228 13:47:43.851618 4897 scope.go:117] "RemoveContainer" containerID="b613a3e718788c9ba75b9d22df94a16bbd693c7834445b47da1ff794a1acc177" Feb 28 13:47:43 crc kubenswrapper[4897]: I0228 13:47:43.872119 4897 scope.go:117] "RemoveContainer" containerID="fdcd573f4b85fd76dcd6c1196e79a568976e7ef69af7e559867f25fe6d4e79e9" Feb 28 13:47:43 crc kubenswrapper[4897]: I0228 13:47:43.899914 4897 scope.go:117] "RemoveContainer" containerID="dd1a5b5aae164239eb2bdc0c49cfc377eee131fb7e96723e40a1d5355b820901" Feb 28 13:47:43 crc kubenswrapper[4897]: I0228 13:47:43.918361 4897 scope.go:117] "RemoveContainer" containerID="3ddbbb30e70a2991fd638ecc8a7d5d1ef9c51891a0633f0e5eacc86d16ab1d74" Feb 28 13:47:44 crc kubenswrapper[4897]: I0228 13:47:44.408188 4897 generic.go:334] "Generic (PLEG): container finished" podID="59883b9c-0fbf-4d9e-84ee-f9456a6f13aa" 
containerID="3c1e1eca2e1e637f8be1ac0febf21565be00a60c653b73dc7dbef893c625c7c4" exitCode=0 Feb 28 13:47:44 crc kubenswrapper[4897]: I0228 13:47:44.408262 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"59883b9c-0fbf-4d9e-84ee-f9456a6f13aa","Type":"ContainerDied","Data":"3c1e1eca2e1e637f8be1ac0febf21565be00a60c653b73dc7dbef893c625c7c4"} Feb 28 13:47:44 crc kubenswrapper[4897]: I0228 13:47:44.411989 4897 generic.go:334] "Generic (PLEG): container finished" podID="0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd" containerID="1633921c84deff1591c31d7fb3fbc877d36bb21584568838b8b22f705962a99e" exitCode=0 Feb 28 13:47:44 crc kubenswrapper[4897]: I0228 13:47:44.412051 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd","Type":"ContainerDied","Data":"1633921c84deff1591c31d7fb3fbc877d36bb21584568838b8b22f705962a99e"} Feb 28 13:47:45 crc kubenswrapper[4897]: I0228 13:47:45.424596 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"59883b9c-0fbf-4d9e-84ee-f9456a6f13aa","Type":"ContainerStarted","Data":"6804dafdd4aa8d89e9858133f3ec5d0bb3ef2012cf103afb6057cb861b2e24f3"} Feb 28 13:47:45 crc kubenswrapper[4897]: I0228 13:47:45.425495 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:47:45 crc kubenswrapper[4897]: I0228 13:47:45.426801 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd","Type":"ContainerStarted","Data":"0d74699a67cb9151900bd6255c2dca467edf57ef15b1ebc46ad3005dcd83956e"} Feb 28 13:47:45 crc kubenswrapper[4897]: I0228 13:47:45.427018 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 28 13:47:45 crc kubenswrapper[4897]: I0228 13:47:45.455366 4897 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.45534385 podStartE2EDuration="37.45534385s" podCreationTimestamp="2026-02-28 13:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:47:45.448367929 +0000 UTC m=+1879.690688596" watchObservedRunningTime="2026-02-28 13:47:45.45534385 +0000 UTC m=+1879.697664517" Feb 28 13:47:45 crc kubenswrapper[4897]: I0228 13:47:45.456800 4897 scope.go:117] "RemoveContainer" containerID="aac243b40f8cd522c66dee8379dbaaf158caf96690e017e4b821fa9af1817d1c" Feb 28 13:47:45 crc kubenswrapper[4897]: E0228 13:47:45.457058 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:47:45 crc kubenswrapper[4897]: I0228 13:47:45.490250 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.490229023 podStartE2EDuration="37.490229023s" podCreationTimestamp="2026-02-28 13:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 13:47:45.485392364 +0000 UTC m=+1879.727713031" watchObservedRunningTime="2026-02-28 13:47:45.490229023 +0000 UTC m=+1879.732549680" Feb 28 13:47:55 crc kubenswrapper[4897]: I0228 13:47:55.352556 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr95h"] Feb 28 13:47:55 crc kubenswrapper[4897]: E0228 13:47:55.353390 4897 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83003cdb-d775-4878-97e7-453c0a1f2ae5" containerName="init" Feb 28 13:47:55 crc kubenswrapper[4897]: I0228 13:47:55.353403 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="83003cdb-d775-4878-97e7-453c0a1f2ae5" containerName="init" Feb 28 13:47:55 crc kubenswrapper[4897]: E0228 13:47:55.353411 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f" containerName="dnsmasq-dns" Feb 28 13:47:55 crc kubenswrapper[4897]: I0228 13:47:55.353417 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f" containerName="dnsmasq-dns" Feb 28 13:47:55 crc kubenswrapper[4897]: E0228 13:47:55.353429 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83003cdb-d775-4878-97e7-453c0a1f2ae5" containerName="dnsmasq-dns" Feb 28 13:47:55 crc kubenswrapper[4897]: I0228 13:47:55.353434 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="83003cdb-d775-4878-97e7-453c0a1f2ae5" containerName="dnsmasq-dns" Feb 28 13:47:55 crc kubenswrapper[4897]: E0228 13:47:55.353467 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f" containerName="init" Feb 28 13:47:55 crc kubenswrapper[4897]: I0228 13:47:55.353476 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f" containerName="init" Feb 28 13:47:55 crc kubenswrapper[4897]: I0228 13:47:55.353636 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eb9b6fb-aa7e-4a7a-b7e6-16e14e08341f" containerName="dnsmasq-dns" Feb 28 13:47:55 crc kubenswrapper[4897]: I0228 13:47:55.353664 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="83003cdb-d775-4878-97e7-453c0a1f2ae5" containerName="dnsmasq-dns" Feb 28 13:47:55 crc kubenswrapper[4897]: I0228 13:47:55.354531 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr95h" Feb 28 13:47:55 crc kubenswrapper[4897]: I0228 13:47:55.357834 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 13:47:55 crc kubenswrapper[4897]: I0228 13:47:55.357978 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 13:47:55 crc kubenswrapper[4897]: I0228 13:47:55.358106 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 13:47:55 crc kubenswrapper[4897]: I0228 13:47:55.358140 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhs8l" Feb 28 13:47:55 crc kubenswrapper[4897]: I0228 13:47:55.366291 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr95h"] Feb 28 13:47:55 crc kubenswrapper[4897]: I0228 13:47:55.522502 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ec9b581-f18e-4ae6-b520-c19ecfc75ab3-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lr95h\" (UID: \"3ec9b581-f18e-4ae6-b520-c19ecfc75ab3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr95h" Feb 28 13:47:55 crc kubenswrapper[4897]: I0228 13:47:55.522590 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mddq2\" (UniqueName: \"kubernetes.io/projected/3ec9b581-f18e-4ae6-b520-c19ecfc75ab3-kube-api-access-mddq2\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lr95h\" (UID: \"3ec9b581-f18e-4ae6-b520-c19ecfc75ab3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr95h" Feb 28 13:47:55 crc kubenswrapper[4897]: I0228 13:47:55.523011 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ec9b581-f18e-4ae6-b520-c19ecfc75ab3-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lr95h\" (UID: \"3ec9b581-f18e-4ae6-b520-c19ecfc75ab3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr95h" Feb 28 13:47:55 crc kubenswrapper[4897]: I0228 13:47:55.523142 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ec9b581-f18e-4ae6-b520-c19ecfc75ab3-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lr95h\" (UID: \"3ec9b581-f18e-4ae6-b520-c19ecfc75ab3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr95h" Feb 28 13:47:55 crc kubenswrapper[4897]: I0228 13:47:55.624943 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ec9b581-f18e-4ae6-b520-c19ecfc75ab3-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lr95h\" (UID: \"3ec9b581-f18e-4ae6-b520-c19ecfc75ab3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr95h" Feb 28 13:47:55 crc kubenswrapper[4897]: I0228 13:47:55.625034 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ec9b581-f18e-4ae6-b520-c19ecfc75ab3-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lr95h\" (UID: \"3ec9b581-f18e-4ae6-b520-c19ecfc75ab3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr95h" Feb 28 13:47:55 crc kubenswrapper[4897]: I0228 13:47:55.625105 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mddq2\" (UniqueName: 
\"kubernetes.io/projected/3ec9b581-f18e-4ae6-b520-c19ecfc75ab3-kube-api-access-mddq2\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lr95h\" (UID: \"3ec9b581-f18e-4ae6-b520-c19ecfc75ab3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr95h" Feb 28 13:47:55 crc kubenswrapper[4897]: I0228 13:47:55.625138 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ec9b581-f18e-4ae6-b520-c19ecfc75ab3-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lr95h\" (UID: \"3ec9b581-f18e-4ae6-b520-c19ecfc75ab3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr95h" Feb 28 13:47:55 crc kubenswrapper[4897]: I0228 13:47:55.632648 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ec9b581-f18e-4ae6-b520-c19ecfc75ab3-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lr95h\" (UID: \"3ec9b581-f18e-4ae6-b520-c19ecfc75ab3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr95h" Feb 28 13:47:55 crc kubenswrapper[4897]: I0228 13:47:55.633262 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ec9b581-f18e-4ae6-b520-c19ecfc75ab3-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lr95h\" (UID: \"3ec9b581-f18e-4ae6-b520-c19ecfc75ab3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr95h" Feb 28 13:47:55 crc kubenswrapper[4897]: I0228 13:47:55.633380 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ec9b581-f18e-4ae6-b520-c19ecfc75ab3-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lr95h\" (UID: \"3ec9b581-f18e-4ae6-b520-c19ecfc75ab3\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr95h" Feb 28 13:47:55 crc kubenswrapper[4897]: I0228 13:47:55.658380 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mddq2\" (UniqueName: \"kubernetes.io/projected/3ec9b581-f18e-4ae6-b520-c19ecfc75ab3-kube-api-access-mddq2\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lr95h\" (UID: \"3ec9b581-f18e-4ae6-b520-c19ecfc75ab3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr95h" Feb 28 13:47:55 crc kubenswrapper[4897]: I0228 13:47:55.698233 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr95h" Feb 28 13:47:56 crc kubenswrapper[4897]: E0228 13:47:56.470205 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:47:56 crc kubenswrapper[4897]: I0228 13:47:56.492678 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr95h"] Feb 28 13:47:56 crc kubenswrapper[4897]: I0228 13:47:56.557541 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr95h" event={"ID":"3ec9b581-f18e-4ae6-b520-c19ecfc75ab3","Type":"ContainerStarted","Data":"09cd74b0aa61535d900afbd15ae9ac7dac7bb6c738b1f156902210bc7c389095"} Feb 28 13:47:58 crc kubenswrapper[4897]: I0228 13:47:58.068920 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-ckz4d"] Feb 28 13:47:58 crc kubenswrapper[4897]: I0228 13:47:58.082195 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-ckz4d"] Feb 28 13:47:58 crc kubenswrapper[4897]: 
I0228 13:47:58.471422 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfd3841c-39bf-454c-88de-5156d769cf7e" path="/var/lib/kubelet/pods/bfd3841c-39bf-454c-88de-5156d769cf7e/volumes" Feb 28 13:47:58 crc kubenswrapper[4897]: I0228 13:47:58.691642 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.240:5671: connect: connection refused" Feb 28 13:47:59 crc kubenswrapper[4897]: I0228 13:47:59.348541 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 28 13:47:59 crc kubenswrapper[4897]: I0228 13:47:59.458784 4897 scope.go:117] "RemoveContainer" containerID="aac243b40f8cd522c66dee8379dbaaf158caf96690e017e4b821fa9af1817d1c" Feb 28 13:47:59 crc kubenswrapper[4897]: E0228 13:47:59.459760 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:48:00 crc kubenswrapper[4897]: I0228 13:48:00.152650 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538108-57hh6"] Feb 28 13:48:00 crc kubenswrapper[4897]: I0228 13:48:00.153867 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538108-57hh6" Feb 28 13:48:00 crc kubenswrapper[4897]: I0228 13:48:00.156622 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 13:48:00 crc kubenswrapper[4897]: I0228 13:48:00.157374 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 13:48:00 crc kubenswrapper[4897]: I0228 13:48:00.157802 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 13:48:00 crc kubenswrapper[4897]: I0228 13:48:00.165461 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538108-57hh6"] Feb 28 13:48:00 crc kubenswrapper[4897]: I0228 13:48:00.324800 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltqkq\" (UniqueName: \"kubernetes.io/projected/b29bcedc-1106-4d1a-b5d9-af0aa213a88e-kube-api-access-ltqkq\") pod \"auto-csr-approver-29538108-57hh6\" (UID: \"b29bcedc-1106-4d1a-b5d9-af0aa213a88e\") " pod="openshift-infra/auto-csr-approver-29538108-57hh6" Feb 28 13:48:00 crc kubenswrapper[4897]: I0228 13:48:00.427150 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltqkq\" (UniqueName: \"kubernetes.io/projected/b29bcedc-1106-4d1a-b5d9-af0aa213a88e-kube-api-access-ltqkq\") pod \"auto-csr-approver-29538108-57hh6\" (UID: \"b29bcedc-1106-4d1a-b5d9-af0aa213a88e\") " pod="openshift-infra/auto-csr-approver-29538108-57hh6" Feb 28 13:48:00 crc kubenswrapper[4897]: I0228 13:48:00.453375 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltqkq\" (UniqueName: \"kubernetes.io/projected/b29bcedc-1106-4d1a-b5d9-af0aa213a88e-kube-api-access-ltqkq\") pod \"auto-csr-approver-29538108-57hh6\" (UID: \"b29bcedc-1106-4d1a-b5d9-af0aa213a88e\") " 
pod="openshift-infra/auto-csr-approver-29538108-57hh6" Feb 28 13:48:00 crc kubenswrapper[4897]: I0228 13:48:00.488497 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538108-57hh6" Feb 28 13:48:07 crc kubenswrapper[4897]: E0228 13:48:07.457234 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:48:07 crc kubenswrapper[4897]: I0228 13:48:07.555563 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538108-57hh6"] Feb 28 13:48:07 crc kubenswrapper[4897]: I0228 13:48:07.698840 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr95h" event={"ID":"3ec9b581-f18e-4ae6-b520-c19ecfc75ab3","Type":"ContainerStarted","Data":"b3e49304aa24a32878c1c9c57284fc8c7aa17e3f45bf3c68487be9bb36e95534"} Feb 28 13:48:07 crc kubenswrapper[4897]: I0228 13:48:07.710179 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538108-57hh6" event={"ID":"b29bcedc-1106-4d1a-b5d9-af0aa213a88e","Type":"ContainerStarted","Data":"62324a457896fd56d08247723d60f75413a84e6cbba5dfcaca0958b3e0e19c3f"} Feb 28 13:48:07 crc kubenswrapper[4897]: I0228 13:48:07.724722 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr95h" podStartSLOduration=2.125196268 podStartE2EDuration="12.724704008s" podCreationTimestamp="2026-02-28 13:47:55 +0000 UTC" firstStartedPulling="2026-02-28 13:47:56.509988271 +0000 UTC m=+1890.752308918" lastFinishedPulling="2026-02-28 13:48:07.109496001 +0000 UTC m=+1901.351816658" observedRunningTime="2026-02-28 
13:48:07.718812128 +0000 UTC m=+1901.961132775" watchObservedRunningTime="2026-02-28 13:48:07.724704008 +0000 UTC m=+1901.967024675" Feb 28 13:48:08 crc kubenswrapper[4897]: I0228 13:48:08.693622 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 28 13:48:09 crc kubenswrapper[4897]: I0228 13:48:09.751653 4897 generic.go:334] "Generic (PLEG): container finished" podID="b29bcedc-1106-4d1a-b5d9-af0aa213a88e" containerID="83e64d8b70cfaa6eeb51d930bb1e1ba871717813b303bea4f44c427d0102ad13" exitCode=0 Feb 28 13:48:09 crc kubenswrapper[4897]: I0228 13:48:09.751766 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538108-57hh6" event={"ID":"b29bcedc-1106-4d1a-b5d9-af0aa213a88e","Type":"ContainerDied","Data":"83e64d8b70cfaa6eeb51d930bb1e1ba871717813b303bea4f44c427d0102ad13"} Feb 28 13:48:11 crc kubenswrapper[4897]: I0228 13:48:11.199715 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538108-57hh6" Feb 28 13:48:11 crc kubenswrapper[4897]: I0228 13:48:11.321116 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltqkq\" (UniqueName: \"kubernetes.io/projected/b29bcedc-1106-4d1a-b5d9-af0aa213a88e-kube-api-access-ltqkq\") pod \"b29bcedc-1106-4d1a-b5d9-af0aa213a88e\" (UID: \"b29bcedc-1106-4d1a-b5d9-af0aa213a88e\") " Feb 28 13:48:11 crc kubenswrapper[4897]: I0228 13:48:11.327671 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b29bcedc-1106-4d1a-b5d9-af0aa213a88e-kube-api-access-ltqkq" (OuterVolumeSpecName: "kube-api-access-ltqkq") pod "b29bcedc-1106-4d1a-b5d9-af0aa213a88e" (UID: "b29bcedc-1106-4d1a-b5d9-af0aa213a88e"). InnerVolumeSpecName "kube-api-access-ltqkq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:48:11 crc kubenswrapper[4897]: I0228 13:48:11.423686 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltqkq\" (UniqueName: \"kubernetes.io/projected/b29bcedc-1106-4d1a-b5d9-af0aa213a88e-kube-api-access-ltqkq\") on node \"crc\" DevicePath \"\"" Feb 28 13:48:11 crc kubenswrapper[4897]: I0228 13:48:11.456641 4897 scope.go:117] "RemoveContainer" containerID="aac243b40f8cd522c66dee8379dbaaf158caf96690e017e4b821fa9af1817d1c" Feb 28 13:48:11 crc kubenswrapper[4897]: I0228 13:48:11.770639 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538108-57hh6" event={"ID":"b29bcedc-1106-4d1a-b5d9-af0aa213a88e","Type":"ContainerDied","Data":"62324a457896fd56d08247723d60f75413a84e6cbba5dfcaca0958b3e0e19c3f"} Feb 28 13:48:11 crc kubenswrapper[4897]: I0228 13:48:11.770933 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62324a457896fd56d08247723d60f75413a84e6cbba5dfcaca0958b3e0e19c3f" Feb 28 13:48:11 crc kubenswrapper[4897]: I0228 13:48:11.770667 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538108-57hh6" Feb 28 13:48:11 crc kubenswrapper[4897]: I0228 13:48:11.774727 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerStarted","Data":"68cbc528fc9ee62676935060f8ad57ccdbb15ff6bc6647175367c2eeaa5ffc16"} Feb 28 13:48:12 crc kubenswrapper[4897]: I0228 13:48:12.277711 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538102-l46vx"] Feb 28 13:48:12 crc kubenswrapper[4897]: I0228 13:48:12.290209 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538102-l46vx"] Feb 28 13:48:12 crc kubenswrapper[4897]: I0228 13:48:12.473284 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0fa54bd-caa0-4a38-a45b-a5e6646e3843" path="/var/lib/kubelet/pods/c0fa54bd-caa0-4a38-a45b-a5e6646e3843/volumes" Feb 28 13:48:17 crc kubenswrapper[4897]: I0228 13:48:17.037753 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-qc9bp"] Feb 28 13:48:17 crc kubenswrapper[4897]: I0228 13:48:17.059031 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-qc9bp"] Feb 28 13:48:17 crc kubenswrapper[4897]: I0228 13:48:17.840659 4897 generic.go:334] "Generic (PLEG): container finished" podID="3ec9b581-f18e-4ae6-b520-c19ecfc75ab3" containerID="b3e49304aa24a32878c1c9c57284fc8c7aa17e3f45bf3c68487be9bb36e95534" exitCode=0 Feb 28 13:48:17 crc kubenswrapper[4897]: I0228 13:48:17.840741 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr95h" event={"ID":"3ec9b581-f18e-4ae6-b520-c19ecfc75ab3","Type":"ContainerDied","Data":"b3e49304aa24a32878c1c9c57284fc8c7aa17e3f45bf3c68487be9bb36e95534"} Feb 28 13:48:18 crc kubenswrapper[4897]: I0228 13:48:18.468570 4897 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99d7bd5a-52d0-4a8f-bd1d-542a957d815f" path="/var/lib/kubelet/pods/99d7bd5a-52d0-4a8f-bd1d-542a957d815f/volumes" Feb 28 13:48:19 crc kubenswrapper[4897]: I0228 13:48:19.367887 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr95h" Feb 28 13:48:19 crc kubenswrapper[4897]: E0228 13:48:19.458980 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:48:19 crc kubenswrapper[4897]: I0228 13:48:19.506361 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ec9b581-f18e-4ae6-b520-c19ecfc75ab3-repo-setup-combined-ca-bundle\") pod \"3ec9b581-f18e-4ae6-b520-c19ecfc75ab3\" (UID: \"3ec9b581-f18e-4ae6-b520-c19ecfc75ab3\") " Feb 28 13:48:19 crc kubenswrapper[4897]: I0228 13:48:19.506488 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ec9b581-f18e-4ae6-b520-c19ecfc75ab3-inventory\") pod \"3ec9b581-f18e-4ae6-b520-c19ecfc75ab3\" (UID: \"3ec9b581-f18e-4ae6-b520-c19ecfc75ab3\") " Feb 28 13:48:19 crc kubenswrapper[4897]: I0228 13:48:19.506518 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ec9b581-f18e-4ae6-b520-c19ecfc75ab3-ssh-key-openstack-edpm-ipam\") pod \"3ec9b581-f18e-4ae6-b520-c19ecfc75ab3\" (UID: \"3ec9b581-f18e-4ae6-b520-c19ecfc75ab3\") " Feb 28 13:48:19 crc kubenswrapper[4897]: I0228 13:48:19.506727 4897 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-mddq2\" (UniqueName: \"kubernetes.io/projected/3ec9b581-f18e-4ae6-b520-c19ecfc75ab3-kube-api-access-mddq2\") pod \"3ec9b581-f18e-4ae6-b520-c19ecfc75ab3\" (UID: \"3ec9b581-f18e-4ae6-b520-c19ecfc75ab3\") " Feb 28 13:48:19 crc kubenswrapper[4897]: I0228 13:48:19.511935 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ec9b581-f18e-4ae6-b520-c19ecfc75ab3-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "3ec9b581-f18e-4ae6-b520-c19ecfc75ab3" (UID: "3ec9b581-f18e-4ae6-b520-c19ecfc75ab3"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:48:19 crc kubenswrapper[4897]: I0228 13:48:19.514654 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ec9b581-f18e-4ae6-b520-c19ecfc75ab3-kube-api-access-mddq2" (OuterVolumeSpecName: "kube-api-access-mddq2") pod "3ec9b581-f18e-4ae6-b520-c19ecfc75ab3" (UID: "3ec9b581-f18e-4ae6-b520-c19ecfc75ab3"). InnerVolumeSpecName "kube-api-access-mddq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:48:19 crc kubenswrapper[4897]: I0228 13:48:19.543556 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ec9b581-f18e-4ae6-b520-c19ecfc75ab3-inventory" (OuterVolumeSpecName: "inventory") pod "3ec9b581-f18e-4ae6-b520-c19ecfc75ab3" (UID: "3ec9b581-f18e-4ae6-b520-c19ecfc75ab3"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:48:19 crc kubenswrapper[4897]: I0228 13:48:19.550405 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ec9b581-f18e-4ae6-b520-c19ecfc75ab3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3ec9b581-f18e-4ae6-b520-c19ecfc75ab3" (UID: "3ec9b581-f18e-4ae6-b520-c19ecfc75ab3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:48:19 crc kubenswrapper[4897]: I0228 13:48:19.608720 4897 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ec9b581-f18e-4ae6-b520-c19ecfc75ab3-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:48:19 crc kubenswrapper[4897]: I0228 13:48:19.608751 4897 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ec9b581-f18e-4ae6-b520-c19ecfc75ab3-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 13:48:19 crc kubenswrapper[4897]: I0228 13:48:19.608761 4897 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ec9b581-f18e-4ae6-b520-c19ecfc75ab3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 13:48:19 crc kubenswrapper[4897]: I0228 13:48:19.608772 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mddq2\" (UniqueName: \"kubernetes.io/projected/3ec9b581-f18e-4ae6-b520-c19ecfc75ab3-kube-api-access-mddq2\") on node \"crc\" DevicePath \"\"" Feb 28 13:48:19 crc kubenswrapper[4897]: I0228 13:48:19.868896 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr95h" event={"ID":"3ec9b581-f18e-4ae6-b520-c19ecfc75ab3","Type":"ContainerDied","Data":"09cd74b0aa61535d900afbd15ae9ac7dac7bb6c738b1f156902210bc7c389095"} Feb 28 
13:48:19 crc kubenswrapper[4897]: I0228 13:48:19.868940 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09cd74b0aa61535d900afbd15ae9ac7dac7bb6c738b1f156902210bc7c389095" Feb 28 13:48:19 crc kubenswrapper[4897]: I0228 13:48:19.869660 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr95h" Feb 28 13:48:19 crc kubenswrapper[4897]: I0228 13:48:19.957927 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-t9clw"] Feb 28 13:48:19 crc kubenswrapper[4897]: E0228 13:48:19.958549 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ec9b581-f18e-4ae6-b520-c19ecfc75ab3" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 28 13:48:19 crc kubenswrapper[4897]: I0228 13:48:19.958575 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ec9b581-f18e-4ae6-b520-c19ecfc75ab3" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 28 13:48:19 crc kubenswrapper[4897]: E0228 13:48:19.958598 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b29bcedc-1106-4d1a-b5d9-af0aa213a88e" containerName="oc" Feb 28 13:48:19 crc kubenswrapper[4897]: I0228 13:48:19.958608 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="b29bcedc-1106-4d1a-b5d9-af0aa213a88e" containerName="oc" Feb 28 13:48:19 crc kubenswrapper[4897]: I0228 13:48:19.958879 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="b29bcedc-1106-4d1a-b5d9-af0aa213a88e" containerName="oc" Feb 28 13:48:19 crc kubenswrapper[4897]: I0228 13:48:19.958907 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ec9b581-f18e-4ae6-b520-c19ecfc75ab3" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 28 13:48:19 crc kubenswrapper[4897]: I0228 13:48:19.959878 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t9clw" Feb 28 13:48:19 crc kubenswrapper[4897]: I0228 13:48:19.962838 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 13:48:19 crc kubenswrapper[4897]: I0228 13:48:19.962860 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhs8l" Feb 28 13:48:19 crc kubenswrapper[4897]: I0228 13:48:19.962999 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 13:48:19 crc kubenswrapper[4897]: I0228 13:48:19.963526 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 13:48:19 crc kubenswrapper[4897]: I0228 13:48:19.975080 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-t9clw"] Feb 28 13:48:20 crc kubenswrapper[4897]: I0228 13:48:20.016293 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/368bd0f8-b828-44ed-a605-3aabab81c9c1-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-t9clw\" (UID: \"368bd0f8-b828-44ed-a605-3aabab81c9c1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t9clw" Feb 28 13:48:20 crc kubenswrapper[4897]: I0228 13:48:20.016390 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/368bd0f8-b828-44ed-a605-3aabab81c9c1-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-t9clw\" (UID: \"368bd0f8-b828-44ed-a605-3aabab81c9c1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t9clw" Feb 28 13:48:20 crc kubenswrapper[4897]: I0228 13:48:20.016439 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v72w9\" (UniqueName: \"kubernetes.io/projected/368bd0f8-b828-44ed-a605-3aabab81c9c1-kube-api-access-v72w9\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-t9clw\" (UID: \"368bd0f8-b828-44ed-a605-3aabab81c9c1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t9clw" Feb 28 13:48:20 crc kubenswrapper[4897]: I0228 13:48:20.117650 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/368bd0f8-b828-44ed-a605-3aabab81c9c1-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-t9clw\" (UID: \"368bd0f8-b828-44ed-a605-3aabab81c9c1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t9clw" Feb 28 13:48:20 crc kubenswrapper[4897]: I0228 13:48:20.117721 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/368bd0f8-b828-44ed-a605-3aabab81c9c1-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-t9clw\" (UID: \"368bd0f8-b828-44ed-a605-3aabab81c9c1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t9clw" Feb 28 13:48:20 crc kubenswrapper[4897]: I0228 13:48:20.117784 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v72w9\" (UniqueName: \"kubernetes.io/projected/368bd0f8-b828-44ed-a605-3aabab81c9c1-kube-api-access-v72w9\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-t9clw\" (UID: \"368bd0f8-b828-44ed-a605-3aabab81c9c1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t9clw" Feb 28 13:48:20 crc kubenswrapper[4897]: I0228 13:48:20.121800 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/368bd0f8-b828-44ed-a605-3aabab81c9c1-ssh-key-openstack-edpm-ipam\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-t9clw\" (UID: \"368bd0f8-b828-44ed-a605-3aabab81c9c1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t9clw" Feb 28 13:48:20 crc kubenswrapper[4897]: I0228 13:48:20.122068 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/368bd0f8-b828-44ed-a605-3aabab81c9c1-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-t9clw\" (UID: \"368bd0f8-b828-44ed-a605-3aabab81c9c1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t9clw" Feb 28 13:48:20 crc kubenswrapper[4897]: I0228 13:48:20.137410 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v72w9\" (UniqueName: \"kubernetes.io/projected/368bd0f8-b828-44ed-a605-3aabab81c9c1-kube-api-access-v72w9\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-t9clw\" (UID: \"368bd0f8-b828-44ed-a605-3aabab81c9c1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t9clw" Feb 28 13:48:20 crc kubenswrapper[4897]: I0228 13:48:20.294120 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t9clw" Feb 28 13:48:20 crc kubenswrapper[4897]: W0228 13:48:20.891515 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod368bd0f8_b828_44ed_a605_3aabab81c9c1.slice/crio-d77c2bcba17e239472122b32d0b30643d99f0b3fc2440de5d9f01e5ced403df2 WatchSource:0}: Error finding container d77c2bcba17e239472122b32d0b30643d99f0b3fc2440de5d9f01e5ced403df2: Status 404 returned error can't find the container with id d77c2bcba17e239472122b32d0b30643d99f0b3fc2440de5d9f01e5ced403df2 Feb 28 13:48:20 crc kubenswrapper[4897]: I0228 13:48:20.897557 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-t9clw"] Feb 28 13:48:21 crc kubenswrapper[4897]: I0228 13:48:21.887174 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t9clw" event={"ID":"368bd0f8-b828-44ed-a605-3aabab81c9c1","Type":"ContainerStarted","Data":"e24ddc4064640dd3188df9b322e6f086577574068a126bd0fe16474d36c3499d"} Feb 28 13:48:21 crc kubenswrapper[4897]: I0228 13:48:21.887750 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t9clw" event={"ID":"368bd0f8-b828-44ed-a605-3aabab81c9c1","Type":"ContainerStarted","Data":"d77c2bcba17e239472122b32d0b30643d99f0b3fc2440de5d9f01e5ced403df2"} Feb 28 13:48:21 crc kubenswrapper[4897]: I0228 13:48:21.925067 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t9clw" podStartSLOduration=2.542220512 podStartE2EDuration="2.925040854s" podCreationTimestamp="2026-02-28 13:48:19 +0000 UTC" firstStartedPulling="2026-02-28 13:48:20.894157701 +0000 UTC m=+1915.136478358" lastFinishedPulling="2026-02-28 13:48:21.276978043 +0000 UTC m=+1915.519298700" observedRunningTime="2026-02-28 
13:48:21.899218472 +0000 UTC m=+1916.141539139" watchObservedRunningTime="2026-02-28 13:48:21.925040854 +0000 UTC m=+1916.167361521" Feb 28 13:48:23 crc kubenswrapper[4897]: I0228 13:48:23.925349 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49ad0c65-4304-477c-8cfa-c344fcf2ab9b","Type":"ContainerStarted","Data":"67ac656f9c0db7d22096f6643574a2c1c71ffc4363193383051875b9de188017"} Feb 28 13:48:23 crc kubenswrapper[4897]: I0228 13:48:23.927276 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 28 13:48:23 crc kubenswrapper[4897]: I0228 13:48:23.958152 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.666278501 podStartE2EDuration="1m21.958137845s" podCreationTimestamp="2026-02-28 13:47:02 +0000 UTC" firstStartedPulling="2026-02-28 13:47:03.73069918 +0000 UTC m=+1837.973019837" lastFinishedPulling="2026-02-28 13:48:23.022558514 +0000 UTC m=+1917.264879181" observedRunningTime="2026-02-28 13:48:23.952604826 +0000 UTC m=+1918.194925483" watchObservedRunningTime="2026-02-28 13:48:23.958137845 +0000 UTC m=+1918.200458502" Feb 28 13:48:24 crc kubenswrapper[4897]: I0228 13:48:24.943737 4897 generic.go:334] "Generic (PLEG): container finished" podID="368bd0f8-b828-44ed-a605-3aabab81c9c1" containerID="e24ddc4064640dd3188df9b322e6f086577574068a126bd0fe16474d36c3499d" exitCode=0 Feb 28 13:48:24 crc kubenswrapper[4897]: I0228 13:48:24.943808 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t9clw" event={"ID":"368bd0f8-b828-44ed-a605-3aabab81c9c1","Type":"ContainerDied","Data":"e24ddc4064640dd3188df9b322e6f086577574068a126bd0fe16474d36c3499d"} Feb 28 13:48:26 crc kubenswrapper[4897]: I0228 13:48:26.080250 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-9lkmb"] Feb 28 13:48:26 crc kubenswrapper[4897]: I0228 
13:48:26.090503 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-9lkmb"] Feb 28 13:48:26 crc kubenswrapper[4897]: I0228 13:48:26.435052 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t9clw" Feb 28 13:48:26 crc kubenswrapper[4897]: I0228 13:48:26.463052 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/368bd0f8-b828-44ed-a605-3aabab81c9c1-inventory\") pod \"368bd0f8-b828-44ed-a605-3aabab81c9c1\" (UID: \"368bd0f8-b828-44ed-a605-3aabab81c9c1\") " Feb 28 13:48:26 crc kubenswrapper[4897]: I0228 13:48:26.463570 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v72w9\" (UniqueName: \"kubernetes.io/projected/368bd0f8-b828-44ed-a605-3aabab81c9c1-kube-api-access-v72w9\") pod \"368bd0f8-b828-44ed-a605-3aabab81c9c1\" (UID: \"368bd0f8-b828-44ed-a605-3aabab81c9c1\") " Feb 28 13:48:26 crc kubenswrapper[4897]: I0228 13:48:26.463619 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/368bd0f8-b828-44ed-a605-3aabab81c9c1-ssh-key-openstack-edpm-ipam\") pod \"368bd0f8-b828-44ed-a605-3aabab81c9c1\" (UID: \"368bd0f8-b828-44ed-a605-3aabab81c9c1\") " Feb 28 13:48:26 crc kubenswrapper[4897]: I0228 13:48:26.474136 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/368bd0f8-b828-44ed-a605-3aabab81c9c1-kube-api-access-v72w9" (OuterVolumeSpecName: "kube-api-access-v72w9") pod "368bd0f8-b828-44ed-a605-3aabab81c9c1" (UID: "368bd0f8-b828-44ed-a605-3aabab81c9c1"). InnerVolumeSpecName "kube-api-access-v72w9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:48:26 crc kubenswrapper[4897]: I0228 13:48:26.514522 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/368bd0f8-b828-44ed-a605-3aabab81c9c1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "368bd0f8-b828-44ed-a605-3aabab81c9c1" (UID: "368bd0f8-b828-44ed-a605-3aabab81c9c1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:48:26 crc kubenswrapper[4897]: I0228 13:48:26.527949 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8eb2bcb4-6f6f-4a44-813d-d5e2e2597265" path="/var/lib/kubelet/pods/8eb2bcb4-6f6f-4a44-813d-d5e2e2597265/volumes" Feb 28 13:48:26 crc kubenswrapper[4897]: I0228 13:48:26.535396 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/368bd0f8-b828-44ed-a605-3aabab81c9c1-inventory" (OuterVolumeSpecName: "inventory") pod "368bd0f8-b828-44ed-a605-3aabab81c9c1" (UID: "368bd0f8-b828-44ed-a605-3aabab81c9c1"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:48:26 crc kubenswrapper[4897]: I0228 13:48:26.566025 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v72w9\" (UniqueName: \"kubernetes.io/projected/368bd0f8-b828-44ed-a605-3aabab81c9c1-kube-api-access-v72w9\") on node \"crc\" DevicePath \"\"" Feb 28 13:48:26 crc kubenswrapper[4897]: I0228 13:48:26.566206 4897 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/368bd0f8-b828-44ed-a605-3aabab81c9c1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 13:48:26 crc kubenswrapper[4897]: I0228 13:48:26.566265 4897 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/368bd0f8-b828-44ed-a605-3aabab81c9c1-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 13:48:26 crc kubenswrapper[4897]: I0228 13:48:26.967444 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t9clw" event={"ID":"368bd0f8-b828-44ed-a605-3aabab81c9c1","Type":"ContainerDied","Data":"d77c2bcba17e239472122b32d0b30643d99f0b3fc2440de5d9f01e5ced403df2"} Feb 28 13:48:26 crc kubenswrapper[4897]: I0228 13:48:26.967487 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d77c2bcba17e239472122b32d0b30643d99f0b3fc2440de5d9f01e5ced403df2" Feb 28 13:48:26 crc kubenswrapper[4897]: I0228 13:48:26.967505 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t9clw" Feb 28 13:48:27 crc kubenswrapper[4897]: I0228 13:48:27.130253 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-62dsb"] Feb 28 13:48:27 crc kubenswrapper[4897]: E0228 13:48:27.131052 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="368bd0f8-b828-44ed-a605-3aabab81c9c1" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 28 13:48:27 crc kubenswrapper[4897]: I0228 13:48:27.131071 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="368bd0f8-b828-44ed-a605-3aabab81c9c1" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 28 13:48:27 crc kubenswrapper[4897]: I0228 13:48:27.131302 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="368bd0f8-b828-44ed-a605-3aabab81c9c1" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 28 13:48:27 crc kubenswrapper[4897]: I0228 13:48:27.131999 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-62dsb" Feb 28 13:48:27 crc kubenswrapper[4897]: I0228 13:48:27.137883 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhs8l" Feb 28 13:48:27 crc kubenswrapper[4897]: I0228 13:48:27.147015 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-62dsb"] Feb 28 13:48:27 crc kubenswrapper[4897]: I0228 13:48:27.147214 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 13:48:27 crc kubenswrapper[4897]: I0228 13:48:27.147977 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 13:48:27 crc kubenswrapper[4897]: I0228 13:48:27.148064 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 13:48:27 crc kubenswrapper[4897]: I0228 13:48:27.186933 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/efd25e11-574a-4504-94fc-509e4f367939-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-62dsb\" (UID: \"efd25e11-574a-4504-94fc-509e4f367939\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-62dsb" Feb 28 13:48:27 crc kubenswrapper[4897]: I0228 13:48:27.187009 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g77c\" (UniqueName: \"kubernetes.io/projected/efd25e11-574a-4504-94fc-509e4f367939-kube-api-access-5g77c\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-62dsb\" (UID: \"efd25e11-574a-4504-94fc-509e4f367939\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-62dsb" Feb 28 13:48:27 crc kubenswrapper[4897]: I0228 13:48:27.187200 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd25e11-574a-4504-94fc-509e4f367939-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-62dsb\" (UID: \"efd25e11-574a-4504-94fc-509e4f367939\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-62dsb" Feb 28 13:48:27 crc kubenswrapper[4897]: I0228 13:48:27.187225 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/efd25e11-574a-4504-94fc-509e4f367939-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-62dsb\" (UID: \"efd25e11-574a-4504-94fc-509e4f367939\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-62dsb" Feb 28 13:48:27 crc kubenswrapper[4897]: I0228 13:48:27.288985 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/efd25e11-574a-4504-94fc-509e4f367939-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-62dsb\" (UID: \"efd25e11-574a-4504-94fc-509e4f367939\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-62dsb" Feb 28 13:48:27 crc kubenswrapper[4897]: I0228 13:48:27.289041 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g77c\" (UniqueName: \"kubernetes.io/projected/efd25e11-574a-4504-94fc-509e4f367939-kube-api-access-5g77c\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-62dsb\" (UID: \"efd25e11-574a-4504-94fc-509e4f367939\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-62dsb" Feb 28 13:48:27 crc kubenswrapper[4897]: I0228 13:48:27.289136 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/efd25e11-574a-4504-94fc-509e4f367939-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-62dsb\" (UID: \"efd25e11-574a-4504-94fc-509e4f367939\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-62dsb" Feb 28 13:48:27 crc kubenswrapper[4897]: I0228 13:48:27.289158 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd25e11-574a-4504-94fc-509e4f367939-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-62dsb\" (UID: \"efd25e11-574a-4504-94fc-509e4f367939\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-62dsb" Feb 28 13:48:27 crc kubenswrapper[4897]: I0228 13:48:27.294773 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/efd25e11-574a-4504-94fc-509e4f367939-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-62dsb\" (UID: \"efd25e11-574a-4504-94fc-509e4f367939\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-62dsb" Feb 28 13:48:27 crc kubenswrapper[4897]: I0228 13:48:27.295055 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd25e11-574a-4504-94fc-509e4f367939-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-62dsb\" (UID: \"efd25e11-574a-4504-94fc-509e4f367939\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-62dsb" Feb 28 13:48:27 crc kubenswrapper[4897]: I0228 13:48:27.295254 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/efd25e11-574a-4504-94fc-509e4f367939-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-62dsb\" (UID: \"efd25e11-574a-4504-94fc-509e4f367939\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-62dsb" Feb 28 13:48:27 crc kubenswrapper[4897]: I0228 13:48:27.309049 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g77c\" (UniqueName: \"kubernetes.io/projected/efd25e11-574a-4504-94fc-509e4f367939-kube-api-access-5g77c\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-62dsb\" (UID: \"efd25e11-574a-4504-94fc-509e4f367939\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-62dsb" Feb 28 13:48:27 crc kubenswrapper[4897]: I0228 13:48:27.476812 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-62dsb" Feb 28 13:48:28 crc kubenswrapper[4897]: I0228 13:48:28.030722 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-62dsb"] Feb 28 13:48:28 crc kubenswrapper[4897]: W0228 13:48:28.048658 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podefd25e11_574a_4504_94fc_509e4f367939.slice/crio-e1ff669f5f3f5e0746764a9d6cec59a63070836bd00dcf75467fb71f8be3d2b8 WatchSource:0}: Error finding container e1ff669f5f3f5e0746764a9d6cec59a63070836bd00dcf75467fb71f8be3d2b8: Status 404 returned error can't find the container with id e1ff669f5f3f5e0746764a9d6cec59a63070836bd00dcf75467fb71f8be3d2b8 Feb 28 13:48:28 crc kubenswrapper[4897]: I0228 13:48:28.996602 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-62dsb" event={"ID":"efd25e11-574a-4504-94fc-509e4f367939","Type":"ContainerStarted","Data":"f1fbd660f9fe89575c8eb821bc8d2a272d6cf6a64ceb1f8fc52e5a9708e3c505"} Feb 28 13:48:28 crc kubenswrapper[4897]: I0228 13:48:28.997287 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-62dsb" 
event={"ID":"efd25e11-574a-4504-94fc-509e4f367939","Type":"ContainerStarted","Data":"e1ff669f5f3f5e0746764a9d6cec59a63070836bd00dcf75467fb71f8be3d2b8"} Feb 28 13:48:29 crc kubenswrapper[4897]: I0228 13:48:29.019321 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-62dsb" podStartSLOduration=1.485845243 podStartE2EDuration="2.019287297s" podCreationTimestamp="2026-02-28 13:48:27 +0000 UTC" firstStartedPulling="2026-02-28 13:48:28.053231579 +0000 UTC m=+1922.295552286" lastFinishedPulling="2026-02-28 13:48:28.586673673 +0000 UTC m=+1922.828994340" observedRunningTime="2026-02-28 13:48:29.016086735 +0000 UTC m=+1923.258407402" watchObservedRunningTime="2026-02-28 13:48:29.019287297 +0000 UTC m=+1923.261607954" Feb 28 13:48:30 crc kubenswrapper[4897]: I0228 13:48:30.028764 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-7vjm5"] Feb 28 13:48:30 crc kubenswrapper[4897]: I0228 13:48:30.037640 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-7vjm5"] Feb 28 13:48:30 crc kubenswrapper[4897]: E0228 13:48:30.465105 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:48:30 crc kubenswrapper[4897]: I0228 13:48:30.473694 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fc315f1-a65d-4ba7-aa89-69ffe04b53a6" path="/var/lib/kubelet/pods/5fc315f1-a65d-4ba7-aa89-69ffe04b53a6/volumes" Feb 28 13:48:33 crc kubenswrapper[4897]: I0228 13:48:33.075419 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-fgtj6"] Feb 28 13:48:33 crc kubenswrapper[4897]: I0228 13:48:33.091029 4897 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-fgtj6"] Feb 28 13:48:33 crc kubenswrapper[4897]: I0228 13:48:33.217037 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 28 13:48:34 crc kubenswrapper[4897]: I0228 13:48:34.466917 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="661850a9-a877-476b-b3ae-a6c6f3b3676a" path="/var/lib/kubelet/pods/661850a9-a877-476b-b3ae-a6c6f3b3676a/volumes" Feb 28 13:48:41 crc kubenswrapper[4897]: E0228 13:48:41.459537 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:48:43 crc kubenswrapper[4897]: I0228 13:48:43.040769 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-h59fj"] Feb 28 13:48:43 crc kubenswrapper[4897]: I0228 13:48:43.056655 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-h59fj"] Feb 28 13:48:44 crc kubenswrapper[4897]: I0228 13:48:44.409194 4897 scope.go:117] "RemoveContainer" containerID="1f349f8a5bfb5ee92815ca4f6ca7875636abfa533848da899aafd240971f601a" Feb 28 13:48:44 crc kubenswrapper[4897]: I0228 13:48:44.471654 4897 scope.go:117] "RemoveContainer" containerID="acd013fc13fa55135de1a45d2aa5c536b91a97bb3bd8e14bb174f4c0bebf8c6e" Feb 28 13:48:44 crc kubenswrapper[4897]: I0228 13:48:44.473450 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd9edcf1-516a-46a6-a77b-5061505a58d7" path="/var/lib/kubelet/pods/bd9edcf1-516a-46a6-a77b-5061505a58d7/volumes" Feb 28 13:48:44 crc kubenswrapper[4897]: I0228 13:48:44.532715 4897 scope.go:117] "RemoveContainer" containerID="35a3970f5e13a727a746265694f0710ca8239257e87518d2964beb9c3efddce0" 
Feb 28 13:48:44 crc kubenswrapper[4897]: I0228 13:48:44.574829 4897 scope.go:117] "RemoveContainer" containerID="9bc5205a83a60702942ea03fd3eb5c1cbeb80fe2a535067733da00a4a5792087" Feb 28 13:48:44 crc kubenswrapper[4897]: I0228 13:48:44.640860 4897 scope.go:117] "RemoveContainer" containerID="055a592e8d99d6446218e3f7bed61affb829a0dca995b9cbcfc03dbe444b4339" Feb 28 13:48:44 crc kubenswrapper[4897]: I0228 13:48:44.672343 4897 scope.go:117] "RemoveContainer" containerID="316f4ff8d86c10b2247ca63060e60068ea860f8805bf6b8a025f41581a628fb2" Feb 28 13:48:52 crc kubenswrapper[4897]: E0228 13:48:52.459933 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:49:07 crc kubenswrapper[4897]: E0228 13:49:07.459986 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:49:18 crc kubenswrapper[4897]: I0228 13:49:18.086975 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-f8445"] Feb 28 13:49:18 crc kubenswrapper[4897]: I0228 13:49:18.104512 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-lfprr"] Feb 28 13:49:18 crc kubenswrapper[4897]: I0228 13:49:18.114730 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-f8445"] Feb 28 13:49:18 crc kubenswrapper[4897]: I0228 13:49:18.125176 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-d87b-account-create-update-znc9q"] Feb 28 13:49:18 
crc kubenswrapper[4897]: I0228 13:49:18.138173 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-lfprr"] Feb 28 13:49:18 crc kubenswrapper[4897]: I0228 13:49:18.146877 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-c071-account-create-update-4cxfk"] Feb 28 13:49:18 crc kubenswrapper[4897]: I0228 13:49:18.155754 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-297f-account-create-update-2g97b"] Feb 28 13:49:18 crc kubenswrapper[4897]: I0228 13:49:18.164604 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-4v8r2"] Feb 28 13:49:18 crc kubenswrapper[4897]: I0228 13:49:18.172868 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-d87b-account-create-update-znc9q"] Feb 28 13:49:18 crc kubenswrapper[4897]: I0228 13:49:18.181333 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-c071-account-create-update-4cxfk"] Feb 28 13:49:18 crc kubenswrapper[4897]: I0228 13:49:18.190089 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-297f-account-create-update-2g97b"] Feb 28 13:49:18 crc kubenswrapper[4897]: I0228 13:49:18.199909 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-4v8r2"] Feb 28 13:49:18 crc kubenswrapper[4897]: I0228 13:49:18.466289 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e3334c1-0b6c-4ea0-8c8c-1c5652d64af9" path="/var/lib/kubelet/pods/3e3334c1-0b6c-4ea0-8c8c-1c5652d64af9/volumes" Feb 28 13:49:18 crc kubenswrapper[4897]: I0228 13:49:18.467074 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50f458e2-efe0-49ba-8fa3-135d3673b9a7" path="/var/lib/kubelet/pods/50f458e2-efe0-49ba-8fa3-135d3673b9a7/volumes" Feb 28 13:49:18 crc kubenswrapper[4897]: I0228 13:49:18.467941 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="550d213b-7a35-4053-8364-e78d03f794ca" path="/var/lib/kubelet/pods/550d213b-7a35-4053-8364-e78d03f794ca/volumes" Feb 28 13:49:18 crc kubenswrapper[4897]: I0228 13:49:18.468563 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c4a08db-9a69-41de-b77c-5ebeb255cd29" path="/var/lib/kubelet/pods/8c4a08db-9a69-41de-b77c-5ebeb255cd29/volumes" Feb 28 13:49:18 crc kubenswrapper[4897]: I0228 13:49:18.469616 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f36fdddb-f718-4f7a-bc78-1a5a543fdefe" path="/var/lib/kubelet/pods/f36fdddb-f718-4f7a-bc78-1a5a543fdefe/volumes" Feb 28 13:49:18 crc kubenswrapper[4897]: I0228 13:49:18.470191 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff4a0d0d-f8d7-42b2-983a-44af7086a43d" path="/var/lib/kubelet/pods/ff4a0d0d-f8d7-42b2-983a-44af7086a43d/volumes" Feb 28 13:49:20 crc kubenswrapper[4897]: E0228 13:49:20.461333 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:49:34 crc kubenswrapper[4897]: E0228 13:49:34.460007 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:49:44 crc kubenswrapper[4897]: I0228 13:49:44.869836 4897 scope.go:117] "RemoveContainer" containerID="69662651062e32e83408d45b84ef1fa8d88485b2f7df9019a18bc7b50c700294" Feb 28 13:49:44 crc kubenswrapper[4897]: I0228 13:49:44.917727 4897 scope.go:117] "RemoveContainer" 
containerID="5bb15e400aa68a527c5bf52ef1421e2ffa7853c527c7de11d4cec78fd9e45ec3" Feb 28 13:49:44 crc kubenswrapper[4897]: I0228 13:49:44.966808 4897 scope.go:117] "RemoveContainer" containerID="1e48acf4c11a6fa472f80eba7c2a0adcb921649925da1164b44cf6f65b3216a4" Feb 28 13:49:45 crc kubenswrapper[4897]: I0228 13:49:45.013045 4897 scope.go:117] "RemoveContainer" containerID="75da0ea2dbd7506580b4f9c5e9a6b64ad7c1ce1a1d240ce592f6071082da0c6b" Feb 28 13:49:45 crc kubenswrapper[4897]: I0228 13:49:45.058529 4897 scope.go:117] "RemoveContainer" containerID="eb63a77e85e634d0f90885421e8ef12e03a0c23d35c1ba10768f6c38592cb5c9" Feb 28 13:49:45 crc kubenswrapper[4897]: I0228 13:49:45.100501 4897 scope.go:117] "RemoveContainer" containerID="1a548167e1018e7b52260eaf95f474b68e5f3775b49c5ca818a40bc79eefc798" Feb 28 13:49:47 crc kubenswrapper[4897]: E0228 13:49:47.461818 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:49:52 crc kubenswrapper[4897]: I0228 13:49:52.061690 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mmkk5"] Feb 28 13:49:52 crc kubenswrapper[4897]: I0228 13:49:52.076041 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mmkk5"] Feb 28 13:49:52 crc kubenswrapper[4897]: I0228 13:49:52.467385 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07b3f8ac-cae7-400a-bdf5-5768e5b74f79" path="/var/lib/kubelet/pods/07b3f8ac-cae7-400a-bdf5-5768e5b74f79/volumes" Feb 28 13:49:59 crc kubenswrapper[4897]: E0228 13:49:59.458424 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling 
image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:50:00 crc kubenswrapper[4897]: I0228 13:50:00.150907 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538110-4xsfk"] Feb 28 13:50:00 crc kubenswrapper[4897]: I0228 13:50:00.152527 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538110-4xsfk" Feb 28 13:50:00 crc kubenswrapper[4897]: I0228 13:50:00.155522 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 13:50:00 crc kubenswrapper[4897]: I0228 13:50:00.156394 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 13:50:00 crc kubenswrapper[4897]: I0228 13:50:00.156493 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 13:50:00 crc kubenswrapper[4897]: I0228 13:50:00.169163 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538110-4xsfk"] Feb 28 13:50:00 crc kubenswrapper[4897]: I0228 13:50:00.294600 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltzdr\" (UniqueName: \"kubernetes.io/projected/e4ea4b91-46bf-4b38-a2e1-7370d12072ca-kube-api-access-ltzdr\") pod \"auto-csr-approver-29538110-4xsfk\" (UID: \"e4ea4b91-46bf-4b38-a2e1-7370d12072ca\") " pod="openshift-infra/auto-csr-approver-29538110-4xsfk" Feb 28 13:50:00 crc kubenswrapper[4897]: I0228 13:50:00.397600 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltzdr\" (UniqueName: \"kubernetes.io/projected/e4ea4b91-46bf-4b38-a2e1-7370d12072ca-kube-api-access-ltzdr\") pod \"auto-csr-approver-29538110-4xsfk\" (UID: 
\"e4ea4b91-46bf-4b38-a2e1-7370d12072ca\") " pod="openshift-infra/auto-csr-approver-29538110-4xsfk" Feb 28 13:50:00 crc kubenswrapper[4897]: I0228 13:50:00.459293 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltzdr\" (UniqueName: \"kubernetes.io/projected/e4ea4b91-46bf-4b38-a2e1-7370d12072ca-kube-api-access-ltzdr\") pod \"auto-csr-approver-29538110-4xsfk\" (UID: \"e4ea4b91-46bf-4b38-a2e1-7370d12072ca\") " pod="openshift-infra/auto-csr-approver-29538110-4xsfk" Feb 28 13:50:00 crc kubenswrapper[4897]: I0228 13:50:00.486862 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538110-4xsfk" Feb 28 13:50:00 crc kubenswrapper[4897]: I0228 13:50:00.991370 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538110-4xsfk"] Feb 28 13:50:01 crc kubenswrapper[4897]: I0228 13:50:01.386464 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538110-4xsfk" event={"ID":"e4ea4b91-46bf-4b38-a2e1-7370d12072ca","Type":"ContainerStarted","Data":"10e2697cad354395056338817ddf46c43b11e1b8dc0916c9e1531b03c84edc02"} Feb 28 13:50:02 crc kubenswrapper[4897]: I0228 13:50:02.399922 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538110-4xsfk" event={"ID":"e4ea4b91-46bf-4b38-a2e1-7370d12072ca","Type":"ContainerStarted","Data":"bf5deaf29ca942d71c86128e4091cb6b0a1aadcce2842f4c1233af575b9b2323"} Feb 28 13:50:02 crc kubenswrapper[4897]: I0228 13:50:02.420204 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29538110-4xsfk" podStartSLOduration=1.483302707 podStartE2EDuration="2.420183048s" podCreationTimestamp="2026-02-28 13:50:00 +0000 UTC" firstStartedPulling="2026-02-28 13:50:00.996649583 +0000 UTC m=+2015.238970240" lastFinishedPulling="2026-02-28 13:50:01.933529924 +0000 UTC m=+2016.175850581" 
observedRunningTime="2026-02-28 13:50:02.417563172 +0000 UTC m=+2016.659883839" watchObservedRunningTime="2026-02-28 13:50:02.420183048 +0000 UTC m=+2016.662503725" Feb 28 13:50:02 crc kubenswrapper[4897]: I0228 13:50:02.520383 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qclb8"] Feb 28 13:50:02 crc kubenswrapper[4897]: I0228 13:50:02.524160 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qclb8" Feb 28 13:50:02 crc kubenswrapper[4897]: I0228 13:50:02.532254 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qclb8"] Feb 28 13:50:02 crc kubenswrapper[4897]: I0228 13:50:02.645418 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bt9c\" (UniqueName: \"kubernetes.io/projected/b48bbd3f-759f-4178-89c6-60d4f427e265-kube-api-access-6bt9c\") pod \"redhat-operators-qclb8\" (UID: \"b48bbd3f-759f-4178-89c6-60d4f427e265\") " pod="openshift-marketplace/redhat-operators-qclb8" Feb 28 13:50:02 crc kubenswrapper[4897]: I0228 13:50:02.645474 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b48bbd3f-759f-4178-89c6-60d4f427e265-catalog-content\") pod \"redhat-operators-qclb8\" (UID: \"b48bbd3f-759f-4178-89c6-60d4f427e265\") " pod="openshift-marketplace/redhat-operators-qclb8" Feb 28 13:50:02 crc kubenswrapper[4897]: I0228 13:50:02.645593 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b48bbd3f-759f-4178-89c6-60d4f427e265-utilities\") pod \"redhat-operators-qclb8\" (UID: \"b48bbd3f-759f-4178-89c6-60d4f427e265\") " pod="openshift-marketplace/redhat-operators-qclb8" Feb 28 13:50:02 crc kubenswrapper[4897]: I0228 13:50:02.747288 4897 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bt9c\" (UniqueName: \"kubernetes.io/projected/b48bbd3f-759f-4178-89c6-60d4f427e265-kube-api-access-6bt9c\") pod \"redhat-operators-qclb8\" (UID: \"b48bbd3f-759f-4178-89c6-60d4f427e265\") " pod="openshift-marketplace/redhat-operators-qclb8" Feb 28 13:50:02 crc kubenswrapper[4897]: I0228 13:50:02.747384 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b48bbd3f-759f-4178-89c6-60d4f427e265-catalog-content\") pod \"redhat-operators-qclb8\" (UID: \"b48bbd3f-759f-4178-89c6-60d4f427e265\") " pod="openshift-marketplace/redhat-operators-qclb8" Feb 28 13:50:02 crc kubenswrapper[4897]: I0228 13:50:02.747497 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b48bbd3f-759f-4178-89c6-60d4f427e265-utilities\") pod \"redhat-operators-qclb8\" (UID: \"b48bbd3f-759f-4178-89c6-60d4f427e265\") " pod="openshift-marketplace/redhat-operators-qclb8" Feb 28 13:50:02 crc kubenswrapper[4897]: I0228 13:50:02.748149 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b48bbd3f-759f-4178-89c6-60d4f427e265-utilities\") pod \"redhat-operators-qclb8\" (UID: \"b48bbd3f-759f-4178-89c6-60d4f427e265\") " pod="openshift-marketplace/redhat-operators-qclb8" Feb 28 13:50:02 crc kubenswrapper[4897]: I0228 13:50:02.748276 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b48bbd3f-759f-4178-89c6-60d4f427e265-catalog-content\") pod \"redhat-operators-qclb8\" (UID: \"b48bbd3f-759f-4178-89c6-60d4f427e265\") " pod="openshift-marketplace/redhat-operators-qclb8" Feb 28 13:50:02 crc kubenswrapper[4897]: I0228 13:50:02.767807 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6bt9c\" (UniqueName: \"kubernetes.io/projected/b48bbd3f-759f-4178-89c6-60d4f427e265-kube-api-access-6bt9c\") pod \"redhat-operators-qclb8\" (UID: \"b48bbd3f-759f-4178-89c6-60d4f427e265\") " pod="openshift-marketplace/redhat-operators-qclb8" Feb 28 13:50:02 crc kubenswrapper[4897]: I0228 13:50:02.846608 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qclb8" Feb 28 13:50:03 crc kubenswrapper[4897]: I0228 13:50:03.342704 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qclb8"] Feb 28 13:50:03 crc kubenswrapper[4897]: I0228 13:50:03.422815 4897 generic.go:334] "Generic (PLEG): container finished" podID="e4ea4b91-46bf-4b38-a2e1-7370d12072ca" containerID="bf5deaf29ca942d71c86128e4091cb6b0a1aadcce2842f4c1233af575b9b2323" exitCode=0 Feb 28 13:50:03 crc kubenswrapper[4897]: I0228 13:50:03.422915 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538110-4xsfk" event={"ID":"e4ea4b91-46bf-4b38-a2e1-7370d12072ca","Type":"ContainerDied","Data":"bf5deaf29ca942d71c86128e4091cb6b0a1aadcce2842f4c1233af575b9b2323"} Feb 28 13:50:03 crc kubenswrapper[4897]: I0228 13:50:03.425170 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qclb8" event={"ID":"b48bbd3f-759f-4178-89c6-60d4f427e265","Type":"ContainerStarted","Data":"86b28722177110458ee5ec921564e14c1ca54472feedda072e1cd076eaecaa73"} Feb 28 13:50:04 crc kubenswrapper[4897]: I0228 13:50:04.440531 4897 generic.go:334] "Generic (PLEG): container finished" podID="b48bbd3f-759f-4178-89c6-60d4f427e265" containerID="3f5867cbb779678a47e854325d91bc08f52b47f20ae23adac7a625974de01974" exitCode=0 Feb 28 13:50:04 crc kubenswrapper[4897]: I0228 13:50:04.440596 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qclb8" 
event={"ID":"b48bbd3f-759f-4178-89c6-60d4f427e265","Type":"ContainerDied","Data":"3f5867cbb779678a47e854325d91bc08f52b47f20ae23adac7a625974de01974"} Feb 28 13:50:04 crc kubenswrapper[4897]: I0228 13:50:04.820364 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538110-4xsfk" Feb 28 13:50:05 crc kubenswrapper[4897]: I0228 13:50:05.020270 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltzdr\" (UniqueName: \"kubernetes.io/projected/e4ea4b91-46bf-4b38-a2e1-7370d12072ca-kube-api-access-ltzdr\") pod \"e4ea4b91-46bf-4b38-a2e1-7370d12072ca\" (UID: \"e4ea4b91-46bf-4b38-a2e1-7370d12072ca\") " Feb 28 13:50:05 crc kubenswrapper[4897]: I0228 13:50:05.028555 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4ea4b91-46bf-4b38-a2e1-7370d12072ca-kube-api-access-ltzdr" (OuterVolumeSpecName: "kube-api-access-ltzdr") pod "e4ea4b91-46bf-4b38-a2e1-7370d12072ca" (UID: "e4ea4b91-46bf-4b38-a2e1-7370d12072ca"). InnerVolumeSpecName "kube-api-access-ltzdr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:50:05 crc kubenswrapper[4897]: I0228 13:50:05.122952 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltzdr\" (UniqueName: \"kubernetes.io/projected/e4ea4b91-46bf-4b38-a2e1-7370d12072ca-kube-api-access-ltzdr\") on node \"crc\" DevicePath \"\"" Feb 28 13:50:05 crc kubenswrapper[4897]: I0228 13:50:05.453477 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538110-4xsfk" event={"ID":"e4ea4b91-46bf-4b38-a2e1-7370d12072ca","Type":"ContainerDied","Data":"10e2697cad354395056338817ddf46c43b11e1b8dc0916c9e1531b03c84edc02"} Feb 28 13:50:05 crc kubenswrapper[4897]: I0228 13:50:05.453750 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10e2697cad354395056338817ddf46c43b11e1b8dc0916c9e1531b03c84edc02" Feb 28 13:50:05 crc kubenswrapper[4897]: I0228 13:50:05.453759 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538110-4xsfk" Feb 28 13:50:05 crc kubenswrapper[4897]: I0228 13:50:05.478326 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538104-knc25"] Feb 28 13:50:05 crc kubenswrapper[4897]: I0228 13:50:05.487924 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538104-knc25"] Feb 28 13:50:06 crc kubenswrapper[4897]: I0228 13:50:06.486536 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68beed07-352c-43c3-9750-f5e63fcab99a" path="/var/lib/kubelet/pods/68beed07-352c-43c3-9750-f5e63fcab99a/volumes" Feb 28 13:50:06 crc kubenswrapper[4897]: I0228 13:50:06.487699 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qclb8" event={"ID":"b48bbd3f-759f-4178-89c6-60d4f427e265","Type":"ContainerStarted","Data":"4d394efb3817e337da4fb0ec247df8d1d4a5a70f751604a4221288d6a9497931"} 
Feb 28 13:50:10 crc kubenswrapper[4897]: E0228 13:50:10.458925 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" Feb 28 13:50:11 crc kubenswrapper[4897]: I0228 13:50:11.526634 4897 generic.go:334] "Generic (PLEG): container finished" podID="b48bbd3f-759f-4178-89c6-60d4f427e265" containerID="4d394efb3817e337da4fb0ec247df8d1d4a5a70f751604a4221288d6a9497931" exitCode=0 Feb 28 13:50:11 crc kubenswrapper[4897]: I0228 13:50:11.526697 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qclb8" event={"ID":"b48bbd3f-759f-4178-89c6-60d4f427e265","Type":"ContainerDied","Data":"4d394efb3817e337da4fb0ec247df8d1d4a5a70f751604a4221288d6a9497931"} Feb 28 13:50:12 crc kubenswrapper[4897]: I0228 13:50:12.545975 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qclb8" event={"ID":"b48bbd3f-759f-4178-89c6-60d4f427e265","Type":"ContainerStarted","Data":"66ea4c44240ef2009f09579ea0a32c073186daeb16644a3a8869e2690a172c8c"} Feb 28 13:50:12 crc kubenswrapper[4897]: I0228 13:50:12.585881 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qclb8" podStartSLOduration=3.071890427 podStartE2EDuration="10.585849894s" podCreationTimestamp="2026-02-28 13:50:02 +0000 UTC" firstStartedPulling="2026-02-28 13:50:04.44357503 +0000 UTC m=+2018.685895727" lastFinishedPulling="2026-02-28 13:50:11.957534537 +0000 UTC m=+2026.199855194" observedRunningTime="2026-02-28 13:50:12.580103119 +0000 UTC m=+2026.822423806" watchObservedRunningTime="2026-02-28 13:50:12.585849894 +0000 UTC m=+2026.828170591" Feb 28 13:50:12 crc kubenswrapper[4897]: I0228 13:50:12.847709 4897 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qclb8" Feb 28 13:50:12 crc kubenswrapper[4897]: I0228 13:50:12.847789 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qclb8" Feb 28 13:50:13 crc kubenswrapper[4897]: I0228 13:50:13.919675 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qclb8" podUID="b48bbd3f-759f-4178-89c6-60d4f427e265" containerName="registry-server" probeResult="failure" output=< Feb 28 13:50:13 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 28 13:50:13 crc kubenswrapper[4897]: > Feb 28 13:50:16 crc kubenswrapper[4897]: I0228 13:50:16.055191 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-gbcft"] Feb 28 13:50:16 crc kubenswrapper[4897]: I0228 13:50:16.067938 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-gbcft"] Feb 28 13:50:16 crc kubenswrapper[4897]: I0228 13:50:16.479034 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="983b1a77-ab11-41df-b954-a8726742f9e5" path="/var/lib/kubelet/pods/983b1a77-ab11-41df-b954-a8726742f9e5/volumes" Feb 28 13:50:19 crc kubenswrapper[4897]: I0228 13:50:19.051591 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rctbp"] Feb 28 13:50:19 crc kubenswrapper[4897]: I0228 13:50:19.076458 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rctbp"] Feb 28 13:50:20 crc kubenswrapper[4897]: I0228 13:50:20.475237 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c685d21-3cda-45f7-8486-5bb236b5eb43" path="/var/lib/kubelet/pods/8c685d21-3cda-45f7-8486-5bb236b5eb43/volumes" Feb 28 13:50:22 crc kubenswrapper[4897]: E0228 13:50:22.459793 4897 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c"
Feb 28 13:50:23 crc kubenswrapper[4897]: I0228 13:50:23.901782 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qclb8" podUID="b48bbd3f-759f-4178-89c6-60d4f427e265" containerName="registry-server" probeResult="failure" output=<
Feb 28 13:50:23 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s
Feb 28 13:50:23 crc kubenswrapper[4897]: >
Feb 28 13:50:32 crc kubenswrapper[4897]: I0228 13:50:32.898780 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qclb8"
Feb 28 13:50:32 crc kubenswrapper[4897]: I0228 13:50:32.959053 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qclb8"
Feb 28 13:50:33 crc kubenswrapper[4897]: I0228 13:50:33.142639 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qclb8"]
Feb 28 13:50:33 crc kubenswrapper[4897]: I0228 13:50:33.371068 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 28 13:50:33 crc kubenswrapper[4897]: I0228 13:50:33.371172 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 28 13:50:34 crc kubenswrapper[4897]: I0228 13:50:34.786004 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qclb8" podUID="b48bbd3f-759f-4178-89c6-60d4f427e265" containerName="registry-server" containerID="cri-o://66ea4c44240ef2009f09579ea0a32c073186daeb16644a3a8869e2690a172c8c" gracePeriod=2
Feb 28 13:50:35 crc kubenswrapper[4897]: I0228 13:50:35.287427 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qclb8"
Feb 28 13:50:35 crc kubenswrapper[4897]: I0228 13:50:35.435611 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bt9c\" (UniqueName: \"kubernetes.io/projected/b48bbd3f-759f-4178-89c6-60d4f427e265-kube-api-access-6bt9c\") pod \"b48bbd3f-759f-4178-89c6-60d4f427e265\" (UID: \"b48bbd3f-759f-4178-89c6-60d4f427e265\") "
Feb 28 13:50:35 crc kubenswrapper[4897]: I0228 13:50:35.435789 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b48bbd3f-759f-4178-89c6-60d4f427e265-utilities\") pod \"b48bbd3f-759f-4178-89c6-60d4f427e265\" (UID: \"b48bbd3f-759f-4178-89c6-60d4f427e265\") "
Feb 28 13:50:35 crc kubenswrapper[4897]: I0228 13:50:35.436256 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b48bbd3f-759f-4178-89c6-60d4f427e265-catalog-content\") pod \"b48bbd3f-759f-4178-89c6-60d4f427e265\" (UID: \"b48bbd3f-759f-4178-89c6-60d4f427e265\") "
Feb 28 13:50:35 crc kubenswrapper[4897]: I0228 13:50:35.437125 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b48bbd3f-759f-4178-89c6-60d4f427e265-utilities" (OuterVolumeSpecName: "utilities") pod "b48bbd3f-759f-4178-89c6-60d4f427e265" (UID: "b48bbd3f-759f-4178-89c6-60d4f427e265"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 28 13:50:35 crc kubenswrapper[4897]: I0228 13:50:35.437557 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b48bbd3f-759f-4178-89c6-60d4f427e265-utilities\") on node \"crc\" DevicePath \"\""
Feb 28 13:50:35 crc kubenswrapper[4897]: I0228 13:50:35.444039 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b48bbd3f-759f-4178-89c6-60d4f427e265-kube-api-access-6bt9c" (OuterVolumeSpecName: "kube-api-access-6bt9c") pod "b48bbd3f-759f-4178-89c6-60d4f427e265" (UID: "b48bbd3f-759f-4178-89c6-60d4f427e265"). InnerVolumeSpecName "kube-api-access-6bt9c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 13:50:35 crc kubenswrapper[4897]: I0228 13:50:35.540291 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bt9c\" (UniqueName: \"kubernetes.io/projected/b48bbd3f-759f-4178-89c6-60d4f427e265-kube-api-access-6bt9c\") on node \"crc\" DevicePath \"\""
Feb 28 13:50:35 crc kubenswrapper[4897]: I0228 13:50:35.616911 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b48bbd3f-759f-4178-89c6-60d4f427e265-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b48bbd3f-759f-4178-89c6-60d4f427e265" (UID: "b48bbd3f-759f-4178-89c6-60d4f427e265"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 28 13:50:35 crc kubenswrapper[4897]: I0228 13:50:35.643617 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b48bbd3f-759f-4178-89c6-60d4f427e265-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 28 13:50:35 crc kubenswrapper[4897]: I0228 13:50:35.800256 4897 generic.go:334] "Generic (PLEG): container finished" podID="b48bbd3f-759f-4178-89c6-60d4f427e265" containerID="66ea4c44240ef2009f09579ea0a32c073186daeb16644a3a8869e2690a172c8c" exitCode=0
Feb 28 13:50:35 crc kubenswrapper[4897]: I0228 13:50:35.800365 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qclb8" event={"ID":"b48bbd3f-759f-4178-89c6-60d4f427e265","Type":"ContainerDied","Data":"66ea4c44240ef2009f09579ea0a32c073186daeb16644a3a8869e2690a172c8c"}
Feb 28 13:50:35 crc kubenswrapper[4897]: I0228 13:50:35.800403 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qclb8" event={"ID":"b48bbd3f-759f-4178-89c6-60d4f427e265","Type":"ContainerDied","Data":"86b28722177110458ee5ec921564e14c1ca54472feedda072e1cd076eaecaa73"}
Feb 28 13:50:35 crc kubenswrapper[4897]: I0228 13:50:35.800430 4897 scope.go:117] "RemoveContainer" containerID="66ea4c44240ef2009f09579ea0a32c073186daeb16644a3a8869e2690a172c8c"
Feb 28 13:50:35 crc kubenswrapper[4897]: I0228 13:50:35.800371 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qclb8"
Feb 28 13:50:35 crc kubenswrapper[4897]: I0228 13:50:35.828645 4897 scope.go:117] "RemoveContainer" containerID="4d394efb3817e337da4fb0ec247df8d1d4a5a70f751604a4221288d6a9497931"
Feb 28 13:50:35 crc kubenswrapper[4897]: I0228 13:50:35.837293 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qclb8"]
Feb 28 13:50:35 crc kubenswrapper[4897]: I0228 13:50:35.848164 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qclb8"]
Feb 28 13:50:35 crc kubenswrapper[4897]: I0228 13:50:35.871074 4897 scope.go:117] "RemoveContainer" containerID="3f5867cbb779678a47e854325d91bc08f52b47f20ae23adac7a625974de01974"
Feb 28 13:50:35 crc kubenswrapper[4897]: I0228 13:50:35.914296 4897 scope.go:117] "RemoveContainer" containerID="66ea4c44240ef2009f09579ea0a32c073186daeb16644a3a8869e2690a172c8c"
Feb 28 13:50:35 crc kubenswrapper[4897]: E0228 13:50:35.914841 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66ea4c44240ef2009f09579ea0a32c073186daeb16644a3a8869e2690a172c8c\": container with ID starting with 66ea4c44240ef2009f09579ea0a32c073186daeb16644a3a8869e2690a172c8c not found: ID does not exist" containerID="66ea4c44240ef2009f09579ea0a32c073186daeb16644a3a8869e2690a172c8c"
Feb 28 13:50:35 crc kubenswrapper[4897]: I0228 13:50:35.914900 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66ea4c44240ef2009f09579ea0a32c073186daeb16644a3a8869e2690a172c8c"} err="failed to get container status \"66ea4c44240ef2009f09579ea0a32c073186daeb16644a3a8869e2690a172c8c\": rpc error: code = NotFound desc = could not find container \"66ea4c44240ef2009f09579ea0a32c073186daeb16644a3a8869e2690a172c8c\": container with ID starting with 66ea4c44240ef2009f09579ea0a32c073186daeb16644a3a8869e2690a172c8c not found: ID does not exist"
Feb 28 13:50:35 crc kubenswrapper[4897]: I0228 13:50:35.914939 4897 scope.go:117] "RemoveContainer" containerID="4d394efb3817e337da4fb0ec247df8d1d4a5a70f751604a4221288d6a9497931"
Feb 28 13:50:35 crc kubenswrapper[4897]: E0228 13:50:35.915415 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d394efb3817e337da4fb0ec247df8d1d4a5a70f751604a4221288d6a9497931\": container with ID starting with 4d394efb3817e337da4fb0ec247df8d1d4a5a70f751604a4221288d6a9497931 not found: ID does not exist" containerID="4d394efb3817e337da4fb0ec247df8d1d4a5a70f751604a4221288d6a9497931"
Feb 28 13:50:35 crc kubenswrapper[4897]: I0228 13:50:35.915447 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d394efb3817e337da4fb0ec247df8d1d4a5a70f751604a4221288d6a9497931"} err="failed to get container status \"4d394efb3817e337da4fb0ec247df8d1d4a5a70f751604a4221288d6a9497931\": rpc error: code = NotFound desc = could not find container \"4d394efb3817e337da4fb0ec247df8d1d4a5a70f751604a4221288d6a9497931\": container with ID starting with 4d394efb3817e337da4fb0ec247df8d1d4a5a70f751604a4221288d6a9497931 not found: ID does not exist"
Feb 28 13:50:35 crc kubenswrapper[4897]: I0228 13:50:35.915467 4897 scope.go:117] "RemoveContainer" containerID="3f5867cbb779678a47e854325d91bc08f52b47f20ae23adac7a625974de01974"
Feb 28 13:50:35 crc kubenswrapper[4897]: E0228 13:50:35.915886 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f5867cbb779678a47e854325d91bc08f52b47f20ae23adac7a625974de01974\": container with ID starting with 3f5867cbb779678a47e854325d91bc08f52b47f20ae23adac7a625974de01974 not found: ID does not exist" containerID="3f5867cbb779678a47e854325d91bc08f52b47f20ae23adac7a625974de01974"
Feb 28 13:50:35 crc kubenswrapper[4897]: I0228 13:50:35.915929 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f5867cbb779678a47e854325d91bc08f52b47f20ae23adac7a625974de01974"} err="failed to get container status \"3f5867cbb779678a47e854325d91bc08f52b47f20ae23adac7a625974de01974\": rpc error: code = NotFound desc = could not find container \"3f5867cbb779678a47e854325d91bc08f52b47f20ae23adac7a625974de01974\": container with ID starting with 3f5867cbb779678a47e854325d91bc08f52b47f20ae23adac7a625974de01974 not found: ID does not exist"
Feb 28 13:50:36 crc kubenswrapper[4897]: E0228 13:50:36.463234 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c"
Feb 28 13:50:36 crc kubenswrapper[4897]: I0228 13:50:36.477386 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b48bbd3f-759f-4178-89c6-60d4f427e265" path="/var/lib/kubelet/pods/b48bbd3f-759f-4178-89c6-60d4f427e265/volumes"
Feb 28 13:50:45 crc kubenswrapper[4897]: I0228 13:50:45.284343 4897 scope.go:117] "RemoveContainer" containerID="3470dcefe4ba195eb4155138877e8e98329abed179bb5261587f606a44b8755f"
Feb 28 13:50:45 crc kubenswrapper[4897]: I0228 13:50:45.369328 4897 scope.go:117] "RemoveContainer" containerID="e4619fbec221700042098cfae05b995c1e6b171efee5d910f73d2a991a6b2e2f"
Feb 28 13:50:45 crc kubenswrapper[4897]: I0228 13:50:45.467829 4897 scope.go:117] "RemoveContainer" containerID="9184d269b372b09e0171b692c1bf6fcf54eaec01988d7d16e77be0f91d908227"
Feb 28 13:50:45 crc kubenswrapper[4897]: I0228 13:50:45.520475 4897 scope.go:117] "RemoveContainer" containerID="9065bcb0878b4ce3fb9d42c6f4b45270a042dbbc06d0b14c1406475a734ab4ba"
Feb 28 13:50:48 crc kubenswrapper[4897]: E0228 13:50:48.459359 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c"
Feb 28 13:51:01 crc kubenswrapper[4897]: I0228 13:51:01.057113 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-g6qqk"]
Feb 28 13:51:01 crc kubenswrapper[4897]: I0228 13:51:01.067964 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-g6qqk"]
Feb 28 13:51:01 crc kubenswrapper[4897]: E0228 13:51:01.458868 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c"
Feb 28 13:51:02 crc kubenswrapper[4897]: I0228 13:51:02.478721 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ccafbfb-14c3-4f61-8fb4-adf29f725d61" path="/var/lib/kubelet/pods/0ccafbfb-14c3-4f61-8fb4-adf29f725d61/volumes"
Feb 28 13:51:03 crc kubenswrapper[4897]: I0228 13:51:03.370719 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 28 13:51:03 crc kubenswrapper[4897]: I0228 13:51:03.371119 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 28 13:51:13 crc kubenswrapper[4897]: E0228 13:51:13.458899 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c"
Feb 28 13:51:26 crc kubenswrapper[4897]: I0228 13:51:26.472366 4897 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 28 13:51:27 crc kubenswrapper[4897]: I0228 13:51:27.433849 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-29kqk" event={"ID":"dbe86f80-68e4-4170-8801-cea07c362d5c","Type":"ContainerStarted","Data":"c43b6e3abb9daaff42940f3ed8ffc513bc8252a54cb7169e663e5917a037032c"}
Feb 28 13:51:28 crc kubenswrapper[4897]: I0228 13:51:28.444283 4897 generic.go:334] "Generic (PLEG): container finished" podID="efd25e11-574a-4504-94fc-509e4f367939" containerID="f1fbd660f9fe89575c8eb821bc8d2a272d6cf6a64ceb1f8fc52e5a9708e3c505" exitCode=0
Feb 28 13:51:28 crc kubenswrapper[4897]: I0228 13:51:28.444364 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-62dsb" event={"ID":"efd25e11-574a-4504-94fc-509e4f367939","Type":"ContainerDied","Data":"f1fbd660f9fe89575c8eb821bc8d2a272d6cf6a64ceb1f8fc52e5a9708e3c505"}
Feb 28 13:51:29 crc kubenswrapper[4897]: I0228 13:51:29.459225 4897 generic.go:334] "Generic (PLEG): container finished" podID="dbe86f80-68e4-4170-8801-cea07c362d5c" containerID="c43b6e3abb9daaff42940f3ed8ffc513bc8252a54cb7169e663e5917a037032c" exitCode=0
Feb 28 13:51:29 crc kubenswrapper[4897]: I0228 13:51:29.459324 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-29kqk" event={"ID":"dbe86f80-68e4-4170-8801-cea07c362d5c","Type":"ContainerDied","Data":"c43b6e3abb9daaff42940f3ed8ffc513bc8252a54cb7169e663e5917a037032c"}
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.039423 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-62dsb"
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.128361 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd25e11-574a-4504-94fc-509e4f367939-bootstrap-combined-ca-bundle\") pod \"efd25e11-574a-4504-94fc-509e4f367939\" (UID: \"efd25e11-574a-4504-94fc-509e4f367939\") "
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.128545 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/efd25e11-574a-4504-94fc-509e4f367939-inventory\") pod \"efd25e11-574a-4504-94fc-509e4f367939\" (UID: \"efd25e11-574a-4504-94fc-509e4f367939\") "
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.128615 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/efd25e11-574a-4504-94fc-509e4f367939-ssh-key-openstack-edpm-ipam\") pod \"efd25e11-574a-4504-94fc-509e4f367939\" (UID: \"efd25e11-574a-4504-94fc-509e4f367939\") "
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.128680 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5g77c\" (UniqueName: \"kubernetes.io/projected/efd25e11-574a-4504-94fc-509e4f367939-kube-api-access-5g77c\") pod \"efd25e11-574a-4504-94fc-509e4f367939\" (UID: \"efd25e11-574a-4504-94fc-509e4f367939\") "
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.136503 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efd25e11-574a-4504-94fc-509e4f367939-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "efd25e11-574a-4504-94fc-509e4f367939" (UID: "efd25e11-574a-4504-94fc-509e4f367939"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.149518 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efd25e11-574a-4504-94fc-509e4f367939-kube-api-access-5g77c" (OuterVolumeSpecName: "kube-api-access-5g77c") pod "efd25e11-574a-4504-94fc-509e4f367939" (UID: "efd25e11-574a-4504-94fc-509e4f367939"). InnerVolumeSpecName "kube-api-access-5g77c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.175242 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efd25e11-574a-4504-94fc-509e4f367939-inventory" (OuterVolumeSpecName: "inventory") pod "efd25e11-574a-4504-94fc-509e4f367939" (UID: "efd25e11-574a-4504-94fc-509e4f367939"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.197727 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efd25e11-574a-4504-94fc-509e4f367939-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "efd25e11-574a-4504-94fc-509e4f367939" (UID: "efd25e11-574a-4504-94fc-509e4f367939"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.231240 4897 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd25e11-574a-4504-94fc-509e4f367939-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.231285 4897 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/efd25e11-574a-4504-94fc-509e4f367939-inventory\") on node \"crc\" DevicePath \"\""
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.231301 4897 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/efd25e11-574a-4504-94fc-509e4f367939-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.231335 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5g77c\" (UniqueName: \"kubernetes.io/projected/efd25e11-574a-4504-94fc-509e4f367939-kube-api-access-5g77c\") on node \"crc\" DevicePath \"\""
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.472352 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-62dsb"
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.472367 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-62dsb" event={"ID":"efd25e11-574a-4504-94fc-509e4f367939","Type":"ContainerDied","Data":"e1ff669f5f3f5e0746764a9d6cec59a63070836bd00dcf75467fb71f8be3d2b8"}
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.472457 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1ff669f5f3f5e0746764a9d6cec59a63070836bd00dcf75467fb71f8be3d2b8"
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.475288 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-29kqk" event={"ID":"dbe86f80-68e4-4170-8801-cea07c362d5c","Type":"ContainerStarted","Data":"d714952a47dc287b9ad6697e4c722ebe7987b00f10b905e1157c4aa38bfe9ef2"}
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.530573 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-29kqk" podStartSLOduration=2.890254717 podStartE2EDuration="18m6.530546794s" podCreationTimestamp="2026-02-28 13:33:24 +0000 UTC" firstStartedPulling="2026-02-28 13:33:26.223981658 +0000 UTC m=+1020.466302325" lastFinishedPulling="2026-02-28 13:51:29.864273745 +0000 UTC m=+2104.106594402" observedRunningTime="2026-02-28 13:51:30.509476268 +0000 UTC m=+2104.751796955" watchObservedRunningTime="2026-02-28 13:51:30.530546794 +0000 UTC m=+2104.772867461"
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.583378 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-x2blf"]
Feb 28 13:51:30 crc kubenswrapper[4897]: E0228 13:51:30.583792 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efd25e11-574a-4504-94fc-509e4f367939" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.583815 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="efd25e11-574a-4504-94fc-509e4f367939" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 28 13:51:30 crc kubenswrapper[4897]: E0228 13:51:30.583850 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b48bbd3f-759f-4178-89c6-60d4f427e265" containerName="extract-content"
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.583859 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="b48bbd3f-759f-4178-89c6-60d4f427e265" containerName="extract-content"
Feb 28 13:51:30 crc kubenswrapper[4897]: E0228 13:51:30.583871 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b48bbd3f-759f-4178-89c6-60d4f427e265" containerName="extract-utilities"
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.583878 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="b48bbd3f-759f-4178-89c6-60d4f427e265" containerName="extract-utilities"
Feb 28 13:51:30 crc kubenswrapper[4897]: E0228 13:51:30.583898 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4ea4b91-46bf-4b38-a2e1-7370d12072ca" containerName="oc"
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.583906 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4ea4b91-46bf-4b38-a2e1-7370d12072ca" containerName="oc"
Feb 28 13:51:30 crc kubenswrapper[4897]: E0228 13:51:30.583922 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b48bbd3f-759f-4178-89c6-60d4f427e265" containerName="registry-server"
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.583931 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="b48bbd3f-759f-4178-89c6-60d4f427e265" containerName="registry-server"
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.584153 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="efd25e11-574a-4504-94fc-509e4f367939" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.584183 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="b48bbd3f-759f-4178-89c6-60d4f427e265" containerName="registry-server"
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.584202 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4ea4b91-46bf-4b38-a2e1-7370d12072ca" containerName="oc"
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.587211 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-x2blf"
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.593411 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.593621 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.593774 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhs8l"
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.593839 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.607805 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-x2blf"]
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.639421 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-x2blf\" (UID: \"9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-x2blf"
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.639474 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-x2blf\" (UID: \"9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-x2blf"
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.639676 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcdpb\" (UniqueName: \"kubernetes.io/projected/9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa-kube-api-access-tcdpb\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-x2blf\" (UID: \"9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-x2blf"
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.741709 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcdpb\" (UniqueName: \"kubernetes.io/projected/9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa-kube-api-access-tcdpb\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-x2blf\" (UID: \"9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-x2blf"
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.741833 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-x2blf\" (UID: \"9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-x2blf"
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.741866 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-x2blf\" (UID: \"9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-x2blf"
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.748252 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-x2blf\" (UID: \"9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-x2blf"
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.748452 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-x2blf\" (UID: \"9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-x2blf"
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.765553 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcdpb\" (UniqueName: \"kubernetes.io/projected/9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa-kube-api-access-tcdpb\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-x2blf\" (UID: \"9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-x2blf"
Feb 28 13:51:30 crc kubenswrapper[4897]: I0228 13:51:30.934002 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-x2blf"
Feb 28 13:51:31 crc kubenswrapper[4897]: I0228 13:51:31.535391 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-x2blf"]
Feb 28 13:51:32 crc kubenswrapper[4897]: I0228 13:51:32.495472 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-x2blf" event={"ID":"9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa","Type":"ContainerStarted","Data":"0b65f09d8ff6e0b9288e657c847b06d4f79ff37b69981cf870bca6795c08521d"}
Feb 28 13:51:32 crc kubenswrapper[4897]: I0228 13:51:32.496106 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-x2blf" event={"ID":"9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa","Type":"ContainerStarted","Data":"14ad6181f029891f64ee01550d33fd0f70c96aeb8bf408cda5a61b527e444ecd"}
Feb 28 13:51:32 crc kubenswrapper[4897]: I0228 13:51:32.527382 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-x2blf" podStartSLOduration=1.9027921330000002 podStartE2EDuration="2.52726064s" podCreationTimestamp="2026-02-28 13:51:30 +0000 UTC" firstStartedPulling="2026-02-28 13:51:31.537764657 +0000 UTC m=+2105.780085334" lastFinishedPulling="2026-02-28 13:51:32.162233174 +0000 UTC m=+2106.404553841" observedRunningTime="2026-02-28 13:51:32.517739136 +0000 UTC m=+2106.760059803" watchObservedRunningTime="2026-02-28 13:51:32.52726064 +0000 UTC m=+2106.769581347"
Feb 28 13:51:33 crc kubenswrapper[4897]: I0228 13:51:33.370574 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 28 13:51:33 crc kubenswrapper[4897]: I0228 13:51:33.370641 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 28 13:51:33 crc kubenswrapper[4897]: I0228 13:51:33.370684 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brq22"
Feb 28 13:51:33 crc kubenswrapper[4897]: I0228 13:51:33.371431 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"68cbc528fc9ee62676935060f8ad57ccdbb15ff6bc6647175367c2eeaa5ffc16"} pod="openshift-machine-config-operator/machine-config-daemon-brq22" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 28 13:51:33 crc kubenswrapper[4897]: I0228 13:51:33.371486 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" containerID="cri-o://68cbc528fc9ee62676935060f8ad57ccdbb15ff6bc6647175367c2eeaa5ffc16" gracePeriod=600
Feb 28 13:51:33 crc kubenswrapper[4897]: I0228 13:51:33.510988 4897 generic.go:334] "Generic (PLEG): container finished" podID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerID="68cbc528fc9ee62676935060f8ad57ccdbb15ff6bc6647175367c2eeaa5ffc16" exitCode=0
Feb 28 13:51:33 crc kubenswrapper[4897]: I0228 13:51:33.511053 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerDied","Data":"68cbc528fc9ee62676935060f8ad57ccdbb15ff6bc6647175367c2eeaa5ffc16"}
Feb 28 13:51:33 crc kubenswrapper[4897]: I0228 13:51:33.511623 4897 scope.go:117] "RemoveContainer" containerID="aac243b40f8cd522c66dee8379dbaaf158caf96690e017e4b821fa9af1817d1c"
Feb 28 13:51:34 crc kubenswrapper[4897]: I0228 13:51:34.123534 4897 kubelet.go:1505] "Image garbage collection succeeded"
Feb 28 13:51:34 crc kubenswrapper[4897]: I0228 13:51:34.524693 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerStarted","Data":"1d7d52cd4a1b910a0cd15b040805df0f538d171dff5f4ea1ec66df8363a6047e"}
Feb 28 13:51:34 crc kubenswrapper[4897]: I0228 13:51:34.927503 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-29kqk"
Feb 28 13:51:34 crc kubenswrapper[4897]: I0228 13:51:34.928025 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-29kqk"
Feb 28 13:51:34 crc kubenswrapper[4897]: I0228 13:51:34.995382 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-29kqk"
Feb 28 13:51:35 crc kubenswrapper[4897]: I0228 13:51:35.589002 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-29kqk"
Feb 28 13:51:36 crc kubenswrapper[4897]: I0228 13:51:36.178034 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-29kqk"]
Feb 28 13:51:37 crc kubenswrapper[4897]: I0228 13:51:37.558917 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-29kqk" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" containerName="registry-server" containerID="cri-o://d714952a47dc287b9ad6697e4c722ebe7987b00f10b905e1157c4aa38bfe9ef2" gracePeriod=2
Feb 28 13:51:38 crc kubenswrapper[4897]: I0228 13:51:38.075003 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-29kqk"
Feb 28 13:51:38 crc kubenswrapper[4897]: I0228 13:51:38.209154 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wpnn\" (UniqueName: \"kubernetes.io/projected/dbe86f80-68e4-4170-8801-cea07c362d5c-kube-api-access-7wpnn\") pod \"dbe86f80-68e4-4170-8801-cea07c362d5c\" (UID: \"dbe86f80-68e4-4170-8801-cea07c362d5c\") "
Feb 28 13:51:38 crc kubenswrapper[4897]: I0228 13:51:38.209256 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbe86f80-68e4-4170-8801-cea07c362d5c-catalog-content\") pod \"dbe86f80-68e4-4170-8801-cea07c362d5c\" (UID: \"dbe86f80-68e4-4170-8801-cea07c362d5c\") "
Feb 28 13:51:38 crc kubenswrapper[4897]: I0228 13:51:38.209496 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbe86f80-68e4-4170-8801-cea07c362d5c-utilities\") pod \"dbe86f80-68e4-4170-8801-cea07c362d5c\" (UID: \"dbe86f80-68e4-4170-8801-cea07c362d5c\") "
Feb 28 13:51:38 crc kubenswrapper[4897]: I0228 13:51:38.215558 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbe86f80-68e4-4170-8801-cea07c362d5c-utilities" (OuterVolumeSpecName: "utilities") pod "dbe86f80-68e4-4170-8801-cea07c362d5c" (UID: "dbe86f80-68e4-4170-8801-cea07c362d5c"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:51:38 crc kubenswrapper[4897]: I0228 13:51:38.220594 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbe86f80-68e4-4170-8801-cea07c362d5c-kube-api-access-7wpnn" (OuterVolumeSpecName: "kube-api-access-7wpnn") pod "dbe86f80-68e4-4170-8801-cea07c362d5c" (UID: "dbe86f80-68e4-4170-8801-cea07c362d5c"). InnerVolumeSpecName "kube-api-access-7wpnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:51:38 crc kubenswrapper[4897]: I0228 13:51:38.266865 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbe86f80-68e4-4170-8801-cea07c362d5c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dbe86f80-68e4-4170-8801-cea07c362d5c" (UID: "dbe86f80-68e4-4170-8801-cea07c362d5c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:51:38 crc kubenswrapper[4897]: I0228 13:51:38.317376 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wpnn\" (UniqueName: \"kubernetes.io/projected/dbe86f80-68e4-4170-8801-cea07c362d5c-kube-api-access-7wpnn\") on node \"crc\" DevicePath \"\"" Feb 28 13:51:38 crc kubenswrapper[4897]: I0228 13:51:38.317912 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbe86f80-68e4-4170-8801-cea07c362d5c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 13:51:38 crc kubenswrapper[4897]: I0228 13:51:38.317994 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbe86f80-68e4-4170-8801-cea07c362d5c-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 13:51:38 crc kubenswrapper[4897]: I0228 13:51:38.573755 4897 generic.go:334] "Generic (PLEG): container finished" podID="dbe86f80-68e4-4170-8801-cea07c362d5c" 
containerID="d714952a47dc287b9ad6697e4c722ebe7987b00f10b905e1157c4aa38bfe9ef2" exitCode=0 Feb 28 13:51:38 crc kubenswrapper[4897]: I0228 13:51:38.573948 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-29kqk" event={"ID":"dbe86f80-68e4-4170-8801-cea07c362d5c","Type":"ContainerDied","Data":"d714952a47dc287b9ad6697e4c722ebe7987b00f10b905e1157c4aa38bfe9ef2"} Feb 28 13:51:38 crc kubenswrapper[4897]: I0228 13:51:38.575079 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-29kqk" event={"ID":"dbe86f80-68e4-4170-8801-cea07c362d5c","Type":"ContainerDied","Data":"8f12ae6bf42d83a06397d395a482a5a883eb7ee12482efa42dde02011514402e"} Feb 28 13:51:38 crc kubenswrapper[4897]: I0228 13:51:38.573995 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-29kqk" Feb 28 13:51:38 crc kubenswrapper[4897]: I0228 13:51:38.575146 4897 scope.go:117] "RemoveContainer" containerID="d714952a47dc287b9ad6697e4c722ebe7987b00f10b905e1157c4aa38bfe9ef2" Feb 28 13:51:38 crc kubenswrapper[4897]: I0228 13:51:38.604886 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-29kqk"] Feb 28 13:51:38 crc kubenswrapper[4897]: I0228 13:51:38.609139 4897 scope.go:117] "RemoveContainer" containerID="c43b6e3abb9daaff42940f3ed8ffc513bc8252a54cb7169e663e5917a037032c" Feb 28 13:51:38 crc kubenswrapper[4897]: I0228 13:51:38.622561 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-29kqk"] Feb 28 13:51:38 crc kubenswrapper[4897]: I0228 13:51:38.638072 4897 scope.go:117] "RemoveContainer" containerID="62590016f7dad910e0b1afff46839a4fef9bbc12baa44ecd069d26cdda823404" Feb 28 13:51:38 crc kubenswrapper[4897]: I0228 13:51:38.712006 4897 scope.go:117] "RemoveContainer" containerID="d714952a47dc287b9ad6697e4c722ebe7987b00f10b905e1157c4aa38bfe9ef2" Feb 28 
13:51:38 crc kubenswrapper[4897]: E0228 13:51:38.712762 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d714952a47dc287b9ad6697e4c722ebe7987b00f10b905e1157c4aa38bfe9ef2\": container with ID starting with d714952a47dc287b9ad6697e4c722ebe7987b00f10b905e1157c4aa38bfe9ef2 not found: ID does not exist" containerID="d714952a47dc287b9ad6697e4c722ebe7987b00f10b905e1157c4aa38bfe9ef2" Feb 28 13:51:38 crc kubenswrapper[4897]: I0228 13:51:38.712980 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d714952a47dc287b9ad6697e4c722ebe7987b00f10b905e1157c4aa38bfe9ef2"} err="failed to get container status \"d714952a47dc287b9ad6697e4c722ebe7987b00f10b905e1157c4aa38bfe9ef2\": rpc error: code = NotFound desc = could not find container \"d714952a47dc287b9ad6697e4c722ebe7987b00f10b905e1157c4aa38bfe9ef2\": container with ID starting with d714952a47dc287b9ad6697e4c722ebe7987b00f10b905e1157c4aa38bfe9ef2 not found: ID does not exist" Feb 28 13:51:38 crc kubenswrapper[4897]: I0228 13:51:38.713168 4897 scope.go:117] "RemoveContainer" containerID="c43b6e3abb9daaff42940f3ed8ffc513bc8252a54cb7169e663e5917a037032c" Feb 28 13:51:38 crc kubenswrapper[4897]: E0228 13:51:38.713817 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c43b6e3abb9daaff42940f3ed8ffc513bc8252a54cb7169e663e5917a037032c\": container with ID starting with c43b6e3abb9daaff42940f3ed8ffc513bc8252a54cb7169e663e5917a037032c not found: ID does not exist" containerID="c43b6e3abb9daaff42940f3ed8ffc513bc8252a54cb7169e663e5917a037032c" Feb 28 13:51:38 crc kubenswrapper[4897]: I0228 13:51:38.713868 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c43b6e3abb9daaff42940f3ed8ffc513bc8252a54cb7169e663e5917a037032c"} err="failed to get container status 
\"c43b6e3abb9daaff42940f3ed8ffc513bc8252a54cb7169e663e5917a037032c\": rpc error: code = NotFound desc = could not find container \"c43b6e3abb9daaff42940f3ed8ffc513bc8252a54cb7169e663e5917a037032c\": container with ID starting with c43b6e3abb9daaff42940f3ed8ffc513bc8252a54cb7169e663e5917a037032c not found: ID does not exist" Feb 28 13:51:38 crc kubenswrapper[4897]: I0228 13:51:38.713902 4897 scope.go:117] "RemoveContainer" containerID="62590016f7dad910e0b1afff46839a4fef9bbc12baa44ecd069d26cdda823404" Feb 28 13:51:38 crc kubenswrapper[4897]: E0228 13:51:38.714478 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62590016f7dad910e0b1afff46839a4fef9bbc12baa44ecd069d26cdda823404\": container with ID starting with 62590016f7dad910e0b1afff46839a4fef9bbc12baa44ecd069d26cdda823404 not found: ID does not exist" containerID="62590016f7dad910e0b1afff46839a4fef9bbc12baa44ecd069d26cdda823404" Feb 28 13:51:38 crc kubenswrapper[4897]: I0228 13:51:38.714672 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62590016f7dad910e0b1afff46839a4fef9bbc12baa44ecd069d26cdda823404"} err="failed to get container status \"62590016f7dad910e0b1afff46839a4fef9bbc12baa44ecd069d26cdda823404\": rpc error: code = NotFound desc = could not find container \"62590016f7dad910e0b1afff46839a4fef9bbc12baa44ecd069d26cdda823404\": container with ID starting with 62590016f7dad910e0b1afff46839a4fef9bbc12baa44ecd069d26cdda823404 not found: ID does not exist" Feb 28 13:51:39 crc kubenswrapper[4897]: I0228 13:51:39.591085 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7csvk"] Feb 28 13:51:39 crc kubenswrapper[4897]: E0228 13:51:39.591568 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" containerName="extract-utilities" Feb 28 13:51:39 crc kubenswrapper[4897]: I0228 
13:51:39.591582 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" containerName="extract-utilities" Feb 28 13:51:39 crc kubenswrapper[4897]: E0228 13:51:39.591597 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" containerName="registry-server" Feb 28 13:51:39 crc kubenswrapper[4897]: I0228 13:51:39.591603 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" containerName="registry-server" Feb 28 13:51:39 crc kubenswrapper[4897]: E0228 13:51:39.591634 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" containerName="extract-content" Feb 28 13:51:39 crc kubenswrapper[4897]: I0228 13:51:39.591640 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" containerName="extract-content" Feb 28 13:51:39 crc kubenswrapper[4897]: I0228 13:51:39.591864 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" containerName="registry-server" Feb 28 13:51:39 crc kubenswrapper[4897]: I0228 13:51:39.593970 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7csvk" Feb 28 13:51:39 crc kubenswrapper[4897]: I0228 13:51:39.599892 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7csvk"] Feb 28 13:51:39 crc kubenswrapper[4897]: I0228 13:51:39.653073 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf90135f-f275-4f0c-bd20-edf7f13b6989-utilities\") pod \"community-operators-7csvk\" (UID: \"cf90135f-f275-4f0c-bd20-edf7f13b6989\") " pod="openshift-marketplace/community-operators-7csvk" Feb 28 13:51:39 crc kubenswrapper[4897]: I0228 13:51:39.653234 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf90135f-f275-4f0c-bd20-edf7f13b6989-catalog-content\") pod \"community-operators-7csvk\" (UID: \"cf90135f-f275-4f0c-bd20-edf7f13b6989\") " pod="openshift-marketplace/community-operators-7csvk" Feb 28 13:51:39 crc kubenswrapper[4897]: I0228 13:51:39.653300 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9cwl\" (UniqueName: \"kubernetes.io/projected/cf90135f-f275-4f0c-bd20-edf7f13b6989-kube-api-access-x9cwl\") pod \"community-operators-7csvk\" (UID: \"cf90135f-f275-4f0c-bd20-edf7f13b6989\") " pod="openshift-marketplace/community-operators-7csvk" Feb 28 13:51:39 crc kubenswrapper[4897]: I0228 13:51:39.755446 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf90135f-f275-4f0c-bd20-edf7f13b6989-utilities\") pod \"community-operators-7csvk\" (UID: \"cf90135f-f275-4f0c-bd20-edf7f13b6989\") " pod="openshift-marketplace/community-operators-7csvk" Feb 28 13:51:39 crc kubenswrapper[4897]: I0228 13:51:39.755767 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf90135f-f275-4f0c-bd20-edf7f13b6989-catalog-content\") pod \"community-operators-7csvk\" (UID: \"cf90135f-f275-4f0c-bd20-edf7f13b6989\") " pod="openshift-marketplace/community-operators-7csvk" Feb 28 13:51:39 crc kubenswrapper[4897]: I0228 13:51:39.755808 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9cwl\" (UniqueName: \"kubernetes.io/projected/cf90135f-f275-4f0c-bd20-edf7f13b6989-kube-api-access-x9cwl\") pod \"community-operators-7csvk\" (UID: \"cf90135f-f275-4f0c-bd20-edf7f13b6989\") " pod="openshift-marketplace/community-operators-7csvk" Feb 28 13:51:39 crc kubenswrapper[4897]: I0228 13:51:39.756599 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf90135f-f275-4f0c-bd20-edf7f13b6989-utilities\") pod \"community-operators-7csvk\" (UID: \"cf90135f-f275-4f0c-bd20-edf7f13b6989\") " pod="openshift-marketplace/community-operators-7csvk" Feb 28 13:51:39 crc kubenswrapper[4897]: I0228 13:51:39.756665 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf90135f-f275-4f0c-bd20-edf7f13b6989-catalog-content\") pod \"community-operators-7csvk\" (UID: \"cf90135f-f275-4f0c-bd20-edf7f13b6989\") " pod="openshift-marketplace/community-operators-7csvk" Feb 28 13:51:39 crc kubenswrapper[4897]: I0228 13:51:39.784092 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9cwl\" (UniqueName: \"kubernetes.io/projected/cf90135f-f275-4f0c-bd20-edf7f13b6989-kube-api-access-x9cwl\") pod \"community-operators-7csvk\" (UID: \"cf90135f-f275-4f0c-bd20-edf7f13b6989\") " pod="openshift-marketplace/community-operators-7csvk" Feb 28 13:51:39 crc kubenswrapper[4897]: I0228 13:51:39.915807 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7csvk" Feb 28 13:51:40 crc kubenswrapper[4897]: I0228 13:51:40.444832 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7csvk"] Feb 28 13:51:40 crc kubenswrapper[4897]: I0228 13:51:40.482606 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbe86f80-68e4-4170-8801-cea07c362d5c" path="/var/lib/kubelet/pods/dbe86f80-68e4-4170-8801-cea07c362d5c/volumes" Feb 28 13:51:40 crc kubenswrapper[4897]: I0228 13:51:40.606345 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7csvk" event={"ID":"cf90135f-f275-4f0c-bd20-edf7f13b6989","Type":"ContainerStarted","Data":"7a95e1ff7eb050289ad53b15701077ce16b6ac4a7040878e25249763743e8718"} Feb 28 13:51:41 crc kubenswrapper[4897]: I0228 13:51:41.620807 4897 generic.go:334] "Generic (PLEG): container finished" podID="cf90135f-f275-4f0c-bd20-edf7f13b6989" containerID="e039fe5c4f58b4adeb9db56c9011274f85aa24d38be450500e6dccd409f01114" exitCode=0 Feb 28 13:51:41 crc kubenswrapper[4897]: I0228 13:51:41.620912 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7csvk" event={"ID":"cf90135f-f275-4f0c-bd20-edf7f13b6989","Type":"ContainerDied","Data":"e039fe5c4f58b4adeb9db56c9011274f85aa24d38be450500e6dccd409f01114"} Feb 28 13:51:42 crc kubenswrapper[4897]: I0228 13:51:42.632699 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7csvk" event={"ID":"cf90135f-f275-4f0c-bd20-edf7f13b6989","Type":"ContainerStarted","Data":"8fe3a0b30f5b63f075e165651c92fe0574f351dac4f81d377d8c3f28f0543309"} Feb 28 13:51:43 crc kubenswrapper[4897]: I0228 13:51:43.645888 4897 generic.go:334] "Generic (PLEG): container finished" podID="cf90135f-f275-4f0c-bd20-edf7f13b6989" containerID="8fe3a0b30f5b63f075e165651c92fe0574f351dac4f81d377d8c3f28f0543309" exitCode=0 Feb 28 13:51:43 
crc kubenswrapper[4897]: I0228 13:51:43.645946 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7csvk" event={"ID":"cf90135f-f275-4f0c-bd20-edf7f13b6989","Type":"ContainerDied","Data":"8fe3a0b30f5b63f075e165651c92fe0574f351dac4f81d377d8c3f28f0543309"} Feb 28 13:51:44 crc kubenswrapper[4897]: I0228 13:51:44.658028 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7csvk" event={"ID":"cf90135f-f275-4f0c-bd20-edf7f13b6989","Type":"ContainerStarted","Data":"23112d1d47b88e04d3d4e5f221072232fcc297a0bd5b0b9a4ef6d883224ef50b"} Feb 28 13:51:44 crc kubenswrapper[4897]: I0228 13:51:44.685405 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7csvk" podStartSLOduration=3.278308054 podStartE2EDuration="5.685332459s" podCreationTimestamp="2026-02-28 13:51:39 +0000 UTC" firstStartedPulling="2026-02-28 13:51:41.626527112 +0000 UTC m=+2115.868847799" lastFinishedPulling="2026-02-28 13:51:44.033551517 +0000 UTC m=+2118.275872204" observedRunningTime="2026-02-28 13:51:44.675103655 +0000 UTC m=+2118.917424312" watchObservedRunningTime="2026-02-28 13:51:44.685332459 +0000 UTC m=+2118.927653116" Feb 28 13:51:45 crc kubenswrapper[4897]: I0228 13:51:45.695863 4897 scope.go:117] "RemoveContainer" containerID="bfff181b7f363e376980d3482eba267679c8835d63343d7617ad5185eb52f007" Feb 28 13:51:45 crc kubenswrapper[4897]: I0228 13:51:45.756455 4897 scope.go:117] "RemoveContainer" containerID="832f64825502bec11dc1b6cff6d5ee2817b062b1ed1a6c29b27578868a79bca7" Feb 28 13:51:45 crc kubenswrapper[4897]: I0228 13:51:45.800727 4897 scope.go:117] "RemoveContainer" containerID="24d994ad890b0ea5394ef01045d5ac591c2f7a280794ec71bbf87a48590534a5" Feb 28 13:51:49 crc kubenswrapper[4897]: I0228 13:51:49.916865 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7csvk" Feb 
28 13:51:49 crc kubenswrapper[4897]: I0228 13:51:49.917970 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7csvk" Feb 28 13:51:49 crc kubenswrapper[4897]: I0228 13:51:49.975955 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7csvk" Feb 28 13:51:50 crc kubenswrapper[4897]: I0228 13:51:50.778143 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7csvk" Feb 28 13:51:50 crc kubenswrapper[4897]: I0228 13:51:50.856278 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7csvk"] Feb 28 13:51:52 crc kubenswrapper[4897]: I0228 13:51:52.733492 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7csvk" podUID="cf90135f-f275-4f0c-bd20-edf7f13b6989" containerName="registry-server" containerID="cri-o://23112d1d47b88e04d3d4e5f221072232fcc297a0bd5b0b9a4ef6d883224ef50b" gracePeriod=2 Feb 28 13:51:53 crc kubenswrapper[4897]: I0228 13:51:53.234341 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7csvk" Feb 28 13:51:53 crc kubenswrapper[4897]: I0228 13:51:53.345819 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9cwl\" (UniqueName: \"kubernetes.io/projected/cf90135f-f275-4f0c-bd20-edf7f13b6989-kube-api-access-x9cwl\") pod \"cf90135f-f275-4f0c-bd20-edf7f13b6989\" (UID: \"cf90135f-f275-4f0c-bd20-edf7f13b6989\") " Feb 28 13:51:53 crc kubenswrapper[4897]: I0228 13:51:53.346123 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf90135f-f275-4f0c-bd20-edf7f13b6989-catalog-content\") pod \"cf90135f-f275-4f0c-bd20-edf7f13b6989\" (UID: \"cf90135f-f275-4f0c-bd20-edf7f13b6989\") " Feb 28 13:51:53 crc kubenswrapper[4897]: I0228 13:51:53.346214 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf90135f-f275-4f0c-bd20-edf7f13b6989-utilities\") pod \"cf90135f-f275-4f0c-bd20-edf7f13b6989\" (UID: \"cf90135f-f275-4f0c-bd20-edf7f13b6989\") " Feb 28 13:51:53 crc kubenswrapper[4897]: I0228 13:51:53.347661 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf90135f-f275-4f0c-bd20-edf7f13b6989-utilities" (OuterVolumeSpecName: "utilities") pod "cf90135f-f275-4f0c-bd20-edf7f13b6989" (UID: "cf90135f-f275-4f0c-bd20-edf7f13b6989"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:51:53 crc kubenswrapper[4897]: I0228 13:51:53.353024 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf90135f-f275-4f0c-bd20-edf7f13b6989-kube-api-access-x9cwl" (OuterVolumeSpecName: "kube-api-access-x9cwl") pod "cf90135f-f275-4f0c-bd20-edf7f13b6989" (UID: "cf90135f-f275-4f0c-bd20-edf7f13b6989"). InnerVolumeSpecName "kube-api-access-x9cwl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:51:53 crc kubenswrapper[4897]: I0228 13:51:53.428645 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf90135f-f275-4f0c-bd20-edf7f13b6989-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf90135f-f275-4f0c-bd20-edf7f13b6989" (UID: "cf90135f-f275-4f0c-bd20-edf7f13b6989"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:51:53 crc kubenswrapper[4897]: I0228 13:51:53.451016 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9cwl\" (UniqueName: \"kubernetes.io/projected/cf90135f-f275-4f0c-bd20-edf7f13b6989-kube-api-access-x9cwl\") on node \"crc\" DevicePath \"\"" Feb 28 13:51:53 crc kubenswrapper[4897]: I0228 13:51:53.451083 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf90135f-f275-4f0c-bd20-edf7f13b6989-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 13:51:53 crc kubenswrapper[4897]: I0228 13:51:53.451097 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf90135f-f275-4f0c-bd20-edf7f13b6989-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 13:51:53 crc kubenswrapper[4897]: I0228 13:51:53.751199 4897 generic.go:334] "Generic (PLEG): container finished" podID="cf90135f-f275-4f0c-bd20-edf7f13b6989" containerID="23112d1d47b88e04d3d4e5f221072232fcc297a0bd5b0b9a4ef6d883224ef50b" exitCode=0 Feb 28 13:51:53 crc kubenswrapper[4897]: I0228 13:51:53.751250 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7csvk" event={"ID":"cf90135f-f275-4f0c-bd20-edf7f13b6989","Type":"ContainerDied","Data":"23112d1d47b88e04d3d4e5f221072232fcc297a0bd5b0b9a4ef6d883224ef50b"} Feb 28 13:51:53 crc kubenswrapper[4897]: I0228 13:51:53.751290 4897 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-7csvk" event={"ID":"cf90135f-f275-4f0c-bd20-edf7f13b6989","Type":"ContainerDied","Data":"7a95e1ff7eb050289ad53b15701077ce16b6ac4a7040878e25249763743e8718"} Feb 28 13:51:53 crc kubenswrapper[4897]: I0228 13:51:53.751289 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7csvk" Feb 28 13:51:53 crc kubenswrapper[4897]: I0228 13:51:53.751331 4897 scope.go:117] "RemoveContainer" containerID="23112d1d47b88e04d3d4e5f221072232fcc297a0bd5b0b9a4ef6d883224ef50b" Feb 28 13:51:53 crc kubenswrapper[4897]: I0228 13:51:53.798095 4897 scope.go:117] "RemoveContainer" containerID="8fe3a0b30f5b63f075e165651c92fe0574f351dac4f81d377d8c3f28f0543309" Feb 28 13:51:53 crc kubenswrapper[4897]: I0228 13:51:53.807652 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7csvk"] Feb 28 13:51:53 crc kubenswrapper[4897]: I0228 13:51:53.819442 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7csvk"] Feb 28 13:51:53 crc kubenswrapper[4897]: I0228 13:51:53.842991 4897 scope.go:117] "RemoveContainer" containerID="e039fe5c4f58b4adeb9db56c9011274f85aa24d38be450500e6dccd409f01114" Feb 28 13:51:53 crc kubenswrapper[4897]: I0228 13:51:53.890189 4897 scope.go:117] "RemoveContainer" containerID="23112d1d47b88e04d3d4e5f221072232fcc297a0bd5b0b9a4ef6d883224ef50b" Feb 28 13:51:53 crc kubenswrapper[4897]: E0228 13:51:53.890687 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23112d1d47b88e04d3d4e5f221072232fcc297a0bd5b0b9a4ef6d883224ef50b\": container with ID starting with 23112d1d47b88e04d3d4e5f221072232fcc297a0bd5b0b9a4ef6d883224ef50b not found: ID does not exist" containerID="23112d1d47b88e04d3d4e5f221072232fcc297a0bd5b0b9a4ef6d883224ef50b" Feb 28 13:51:53 crc kubenswrapper[4897]: I0228 
13:51:53.890724 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23112d1d47b88e04d3d4e5f221072232fcc297a0bd5b0b9a4ef6d883224ef50b"} err="failed to get container status \"23112d1d47b88e04d3d4e5f221072232fcc297a0bd5b0b9a4ef6d883224ef50b\": rpc error: code = NotFound desc = could not find container \"23112d1d47b88e04d3d4e5f221072232fcc297a0bd5b0b9a4ef6d883224ef50b\": container with ID starting with 23112d1d47b88e04d3d4e5f221072232fcc297a0bd5b0b9a4ef6d883224ef50b not found: ID does not exist" Feb 28 13:51:53 crc kubenswrapper[4897]: I0228 13:51:53.890751 4897 scope.go:117] "RemoveContainer" containerID="8fe3a0b30f5b63f075e165651c92fe0574f351dac4f81d377d8c3f28f0543309" Feb 28 13:51:53 crc kubenswrapper[4897]: E0228 13:51:53.891041 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fe3a0b30f5b63f075e165651c92fe0574f351dac4f81d377d8c3f28f0543309\": container with ID starting with 8fe3a0b30f5b63f075e165651c92fe0574f351dac4f81d377d8c3f28f0543309 not found: ID does not exist" containerID="8fe3a0b30f5b63f075e165651c92fe0574f351dac4f81d377d8c3f28f0543309" Feb 28 13:51:53 crc kubenswrapper[4897]: I0228 13:51:53.891093 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fe3a0b30f5b63f075e165651c92fe0574f351dac4f81d377d8c3f28f0543309"} err="failed to get container status \"8fe3a0b30f5b63f075e165651c92fe0574f351dac4f81d377d8c3f28f0543309\": rpc error: code = NotFound desc = could not find container \"8fe3a0b30f5b63f075e165651c92fe0574f351dac4f81d377d8c3f28f0543309\": container with ID starting with 8fe3a0b30f5b63f075e165651c92fe0574f351dac4f81d377d8c3f28f0543309 not found: ID does not exist" Feb 28 13:51:53 crc kubenswrapper[4897]: I0228 13:51:53.891130 4897 scope.go:117] "RemoveContainer" containerID="e039fe5c4f58b4adeb9db56c9011274f85aa24d38be450500e6dccd409f01114" Feb 28 13:51:53 crc 
kubenswrapper[4897]: E0228 13:51:53.891757 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e039fe5c4f58b4adeb9db56c9011274f85aa24d38be450500e6dccd409f01114\": container with ID starting with e039fe5c4f58b4adeb9db56c9011274f85aa24d38be450500e6dccd409f01114 not found: ID does not exist" containerID="e039fe5c4f58b4adeb9db56c9011274f85aa24d38be450500e6dccd409f01114" Feb 28 13:51:53 crc kubenswrapper[4897]: I0228 13:51:53.891791 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e039fe5c4f58b4adeb9db56c9011274f85aa24d38be450500e6dccd409f01114"} err="failed to get container status \"e039fe5c4f58b4adeb9db56c9011274f85aa24d38be450500e6dccd409f01114\": rpc error: code = NotFound desc = could not find container \"e039fe5c4f58b4adeb9db56c9011274f85aa24d38be450500e6dccd409f01114\": container with ID starting with e039fe5c4f58b4adeb9db56c9011274f85aa24d38be450500e6dccd409f01114 not found: ID does not exist" Feb 28 13:51:54 crc kubenswrapper[4897]: I0228 13:51:54.471480 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf90135f-f275-4f0c-bd20-edf7f13b6989" path="/var/lib/kubelet/pods/cf90135f-f275-4f0c-bd20-edf7f13b6989/volumes" Feb 28 13:52:00 crc kubenswrapper[4897]: I0228 13:52:00.146516 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538112-f95dp"] Feb 28 13:52:00 crc kubenswrapper[4897]: E0228 13:52:00.148626 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf90135f-f275-4f0c-bd20-edf7f13b6989" containerName="extract-content" Feb 28 13:52:00 crc kubenswrapper[4897]: I0228 13:52:00.148800 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf90135f-f275-4f0c-bd20-edf7f13b6989" containerName="extract-content" Feb 28 13:52:00 crc kubenswrapper[4897]: E0228 13:52:00.148894 4897 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="cf90135f-f275-4f0c-bd20-edf7f13b6989" containerName="extract-utilities" Feb 28 13:52:00 crc kubenswrapper[4897]: I0228 13:52:00.148977 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf90135f-f275-4f0c-bd20-edf7f13b6989" containerName="extract-utilities" Feb 28 13:52:00 crc kubenswrapper[4897]: E0228 13:52:00.149090 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf90135f-f275-4f0c-bd20-edf7f13b6989" containerName="registry-server" Feb 28 13:52:00 crc kubenswrapper[4897]: I0228 13:52:00.149213 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf90135f-f275-4f0c-bd20-edf7f13b6989" containerName="registry-server" Feb 28 13:52:00 crc kubenswrapper[4897]: I0228 13:52:00.149550 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf90135f-f275-4f0c-bd20-edf7f13b6989" containerName="registry-server" Feb 28 13:52:00 crc kubenswrapper[4897]: I0228 13:52:00.150576 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538112-f95dp" Feb 28 13:52:00 crc kubenswrapper[4897]: I0228 13:52:00.154090 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 13:52:00 crc kubenswrapper[4897]: I0228 13:52:00.154207 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 13:52:00 crc kubenswrapper[4897]: I0228 13:52:00.154224 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 13:52:00 crc kubenswrapper[4897]: I0228 13:52:00.157252 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538112-f95dp"] Feb 28 13:52:00 crc kubenswrapper[4897]: I0228 13:52:00.303366 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvdqz\" (UniqueName: 
\"kubernetes.io/projected/22a79300-52db-4f33-b565-125005a95021-kube-api-access-mvdqz\") pod \"auto-csr-approver-29538112-f95dp\" (UID: \"22a79300-52db-4f33-b565-125005a95021\") " pod="openshift-infra/auto-csr-approver-29538112-f95dp" Feb 28 13:52:00 crc kubenswrapper[4897]: I0228 13:52:00.405652 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvdqz\" (UniqueName: \"kubernetes.io/projected/22a79300-52db-4f33-b565-125005a95021-kube-api-access-mvdqz\") pod \"auto-csr-approver-29538112-f95dp\" (UID: \"22a79300-52db-4f33-b565-125005a95021\") " pod="openshift-infra/auto-csr-approver-29538112-f95dp" Feb 28 13:52:00 crc kubenswrapper[4897]: I0228 13:52:00.441254 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvdqz\" (UniqueName: \"kubernetes.io/projected/22a79300-52db-4f33-b565-125005a95021-kube-api-access-mvdqz\") pod \"auto-csr-approver-29538112-f95dp\" (UID: \"22a79300-52db-4f33-b565-125005a95021\") " pod="openshift-infra/auto-csr-approver-29538112-f95dp" Feb 28 13:52:00 crc kubenswrapper[4897]: I0228 13:52:00.477760 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538112-f95dp" Feb 28 13:52:00 crc kubenswrapper[4897]: I0228 13:52:00.962742 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538112-f95dp"] Feb 28 13:52:01 crc kubenswrapper[4897]: I0228 13:52:01.826870 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538112-f95dp" event={"ID":"22a79300-52db-4f33-b565-125005a95021","Type":"ContainerStarted","Data":"c8e41b6e4151e897b100fe8954fa7951481a2d200210cfb9b065270258efb0eb"} Feb 28 13:52:02 crc kubenswrapper[4897]: I0228 13:52:02.846244 4897 generic.go:334] "Generic (PLEG): container finished" podID="22a79300-52db-4f33-b565-125005a95021" containerID="0d878fc6eb4f3e478512721e3560bf7a2bd1a288cfe668810dd17fb860df10f2" exitCode=0 Feb 28 13:52:02 crc kubenswrapper[4897]: I0228 13:52:02.846341 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538112-f95dp" event={"ID":"22a79300-52db-4f33-b565-125005a95021","Type":"ContainerDied","Data":"0d878fc6eb4f3e478512721e3560bf7a2bd1a288cfe668810dd17fb860df10f2"} Feb 28 13:52:04 crc kubenswrapper[4897]: I0228 13:52:04.232165 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538112-f95dp" Feb 28 13:52:04 crc kubenswrapper[4897]: I0228 13:52:04.384863 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvdqz\" (UniqueName: \"kubernetes.io/projected/22a79300-52db-4f33-b565-125005a95021-kube-api-access-mvdqz\") pod \"22a79300-52db-4f33-b565-125005a95021\" (UID: \"22a79300-52db-4f33-b565-125005a95021\") " Feb 28 13:52:04 crc kubenswrapper[4897]: I0228 13:52:04.392877 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22a79300-52db-4f33-b565-125005a95021-kube-api-access-mvdqz" (OuterVolumeSpecName: "kube-api-access-mvdqz") pod "22a79300-52db-4f33-b565-125005a95021" (UID: "22a79300-52db-4f33-b565-125005a95021"). InnerVolumeSpecName "kube-api-access-mvdqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:52:04 crc kubenswrapper[4897]: I0228 13:52:04.491248 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvdqz\" (UniqueName: \"kubernetes.io/projected/22a79300-52db-4f33-b565-125005a95021-kube-api-access-mvdqz\") on node \"crc\" DevicePath \"\"" Feb 28 13:52:04 crc kubenswrapper[4897]: I0228 13:52:04.870104 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538112-f95dp" event={"ID":"22a79300-52db-4f33-b565-125005a95021","Type":"ContainerDied","Data":"c8e41b6e4151e897b100fe8954fa7951481a2d200210cfb9b065270258efb0eb"} Feb 28 13:52:04 crc kubenswrapper[4897]: I0228 13:52:04.870182 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8e41b6e4151e897b100fe8954fa7951481a2d200210cfb9b065270258efb0eb" Feb 28 13:52:04 crc kubenswrapper[4897]: I0228 13:52:04.870184 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538112-f95dp" Feb 28 13:52:05 crc kubenswrapper[4897]: I0228 13:52:05.318685 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538106-j6xmh"] Feb 28 13:52:05 crc kubenswrapper[4897]: I0228 13:52:05.326545 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538106-j6xmh"] Feb 28 13:52:06 crc kubenswrapper[4897]: I0228 13:52:06.471735 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64df0e32-0a86-4721-bf82-f6629e2268d8" path="/var/lib/kubelet/pods/64df0e32-0a86-4721-bf82-f6629e2268d8/volumes" Feb 28 13:52:45 crc kubenswrapper[4897]: I0228 13:52:45.977253 4897 scope.go:117] "RemoveContainer" containerID="45905544f1c45db0d58c7fbe4a464cb80d70c376a540d7ec631109337f1bcd4c" Feb 28 13:53:00 crc kubenswrapper[4897]: I0228 13:53:00.456841 4897 generic.go:334] "Generic (PLEG): container finished" podID="9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa" containerID="0b65f09d8ff6e0b9288e657c847b06d4f79ff37b69981cf870bca6795c08521d" exitCode=0 Feb 28 13:53:00 crc kubenswrapper[4897]: I0228 13:53:00.474957 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-x2blf" event={"ID":"9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa","Type":"ContainerDied","Data":"0b65f09d8ff6e0b9288e657c847b06d4f79ff37b69981cf870bca6795c08521d"} Feb 28 13:53:02 crc kubenswrapper[4897]: I0228 13:53:02.004893 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-x2blf" Feb 28 13:53:02 crc kubenswrapper[4897]: I0228 13:53:02.089031 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa-ssh-key-openstack-edpm-ipam\") pod \"9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa\" (UID: \"9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa\") " Feb 28 13:53:02 crc kubenswrapper[4897]: I0228 13:53:02.089082 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcdpb\" (UniqueName: \"kubernetes.io/projected/9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa-kube-api-access-tcdpb\") pod \"9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa\" (UID: \"9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa\") " Feb 28 13:53:02 crc kubenswrapper[4897]: I0228 13:53:02.089174 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa-inventory\") pod \"9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa\" (UID: \"9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa\") " Feb 28 13:53:02 crc kubenswrapper[4897]: I0228 13:53:02.130509 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa-kube-api-access-tcdpb" (OuterVolumeSpecName: "kube-api-access-tcdpb") pod "9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa" (UID: "9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa"). InnerVolumeSpecName "kube-api-access-tcdpb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:53:02 crc kubenswrapper[4897]: I0228 13:53:02.158978 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa" (UID: "9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:53:02 crc kubenswrapper[4897]: I0228 13:53:02.164467 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa-inventory" (OuterVolumeSpecName: "inventory") pod "9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa" (UID: "9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:53:02 crc kubenswrapper[4897]: I0228 13:53:02.192213 4897 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 13:53:02 crc kubenswrapper[4897]: I0228 13:53:02.192262 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcdpb\" (UniqueName: \"kubernetes.io/projected/9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa-kube-api-access-tcdpb\") on node \"crc\" DevicePath \"\"" Feb 28 13:53:02 crc kubenswrapper[4897]: I0228 13:53:02.192276 4897 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 13:53:02 crc kubenswrapper[4897]: I0228 13:53:02.478844 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-x2blf" 
event={"ID":"9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa","Type":"ContainerDied","Data":"14ad6181f029891f64ee01550d33fd0f70c96aeb8bf408cda5a61b527e444ecd"} Feb 28 13:53:02 crc kubenswrapper[4897]: I0228 13:53:02.478880 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14ad6181f029891f64ee01550d33fd0f70c96aeb8bf408cda5a61b527e444ecd" Feb 28 13:53:02 crc kubenswrapper[4897]: I0228 13:53:02.478944 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-x2blf" Feb 28 13:53:02 crc kubenswrapper[4897]: I0228 13:53:02.655863 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9245w"] Feb 28 13:53:02 crc kubenswrapper[4897]: E0228 13:53:02.656240 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22a79300-52db-4f33-b565-125005a95021" containerName="oc" Feb 28 13:53:02 crc kubenswrapper[4897]: I0228 13:53:02.656258 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="22a79300-52db-4f33-b565-125005a95021" containerName="oc" Feb 28 13:53:02 crc kubenswrapper[4897]: E0228 13:53:02.656268 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 28 13:53:02 crc kubenswrapper[4897]: I0228 13:53:02.656275 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 28 13:53:02 crc kubenswrapper[4897]: I0228 13:53:02.656472 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="22a79300-52db-4f33-b565-125005a95021" containerName="oc" Feb 28 13:53:02 crc kubenswrapper[4897]: I0228 13:53:02.656491 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa" 
containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 28 13:53:02 crc kubenswrapper[4897]: I0228 13:53:02.657148 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9245w" Feb 28 13:53:02 crc kubenswrapper[4897]: I0228 13:53:02.658993 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 13:53:02 crc kubenswrapper[4897]: I0228 13:53:02.659572 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 13:53:02 crc kubenswrapper[4897]: I0228 13:53:02.659726 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 13:53:02 crc kubenswrapper[4897]: I0228 13:53:02.660677 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhs8l" Feb 28 13:53:02 crc kubenswrapper[4897]: I0228 13:53:02.677454 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9245w"] Feb 28 13:53:02 crc kubenswrapper[4897]: I0228 13:53:02.804231 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/474a32f3-7317-40c6-80cb-6e36415a2d5d-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9245w\" (UID: \"474a32f3-7317-40c6-80cb-6e36415a2d5d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9245w" Feb 28 13:53:02 crc kubenswrapper[4897]: I0228 13:53:02.804359 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brppc\" (UniqueName: \"kubernetes.io/projected/474a32f3-7317-40c6-80cb-6e36415a2d5d-kube-api-access-brppc\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-9245w\" (UID: \"474a32f3-7317-40c6-80cb-6e36415a2d5d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9245w" Feb 28 13:53:02 crc kubenswrapper[4897]: I0228 13:53:02.804771 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/474a32f3-7317-40c6-80cb-6e36415a2d5d-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9245w\" (UID: \"474a32f3-7317-40c6-80cb-6e36415a2d5d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9245w" Feb 28 13:53:02 crc kubenswrapper[4897]: I0228 13:53:02.906733 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/474a32f3-7317-40c6-80cb-6e36415a2d5d-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9245w\" (UID: \"474a32f3-7317-40c6-80cb-6e36415a2d5d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9245w" Feb 28 13:53:02 crc kubenswrapper[4897]: I0228 13:53:02.906839 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/474a32f3-7317-40c6-80cb-6e36415a2d5d-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9245w\" (UID: \"474a32f3-7317-40c6-80cb-6e36415a2d5d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9245w" Feb 28 13:53:02 crc kubenswrapper[4897]: I0228 13:53:02.906866 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brppc\" (UniqueName: \"kubernetes.io/projected/474a32f3-7317-40c6-80cb-6e36415a2d5d-kube-api-access-brppc\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9245w\" (UID: \"474a32f3-7317-40c6-80cb-6e36415a2d5d\") " 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9245w" Feb 28 13:53:02 crc kubenswrapper[4897]: I0228 13:53:02.921564 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/474a32f3-7317-40c6-80cb-6e36415a2d5d-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9245w\" (UID: \"474a32f3-7317-40c6-80cb-6e36415a2d5d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9245w" Feb 28 13:53:02 crc kubenswrapper[4897]: I0228 13:53:02.921680 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/474a32f3-7317-40c6-80cb-6e36415a2d5d-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9245w\" (UID: \"474a32f3-7317-40c6-80cb-6e36415a2d5d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9245w" Feb 28 13:53:02 crc kubenswrapper[4897]: I0228 13:53:02.943356 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brppc\" (UniqueName: \"kubernetes.io/projected/474a32f3-7317-40c6-80cb-6e36415a2d5d-kube-api-access-brppc\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9245w\" (UID: \"474a32f3-7317-40c6-80cb-6e36415a2d5d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9245w" Feb 28 13:53:02 crc kubenswrapper[4897]: I0228 13:53:02.972584 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9245w" Feb 28 13:53:03 crc kubenswrapper[4897]: I0228 13:53:03.594370 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9245w"] Feb 28 13:53:04 crc kubenswrapper[4897]: I0228 13:53:04.498291 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9245w" event={"ID":"474a32f3-7317-40c6-80cb-6e36415a2d5d","Type":"ContainerStarted","Data":"93353228aba8dfb3e0db6e217706834ad2bc247b2532760fa39dc45087d542fd"} Feb 28 13:53:04 crc kubenswrapper[4897]: I0228 13:53:04.498378 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9245w" event={"ID":"474a32f3-7317-40c6-80cb-6e36415a2d5d","Type":"ContainerStarted","Data":"9d9614ec124d25bc1f86892df18b5f8c7e5812fd92bb8f464c726a5ddf7c45f8"} Feb 28 13:53:04 crc kubenswrapper[4897]: I0228 13:53:04.534736 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9245w" podStartSLOduration=1.966961379 podStartE2EDuration="2.534709095s" podCreationTimestamp="2026-02-28 13:53:02 +0000 UTC" firstStartedPulling="2026-02-28 13:53:03.601047787 +0000 UTC m=+2197.843368444" lastFinishedPulling="2026-02-28 13:53:04.168795483 +0000 UTC m=+2198.411116160" observedRunningTime="2026-02-28 13:53:04.518023505 +0000 UTC m=+2198.760344162" watchObservedRunningTime="2026-02-28 13:53:04.534709095 +0000 UTC m=+2198.777029802" Feb 28 13:53:07 crc kubenswrapper[4897]: I0228 13:53:07.989071 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4z57n"] Feb 28 13:53:07 crc kubenswrapper[4897]: I0228 13:53:07.991546 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4z57n" Feb 28 13:53:08 crc kubenswrapper[4897]: I0228 13:53:08.002359 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4z57n"] Feb 28 13:53:08 crc kubenswrapper[4897]: I0228 13:53:08.113717 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba8c2a41-ddb6-4848-87e0-92b38f11bafa-catalog-content\") pod \"redhat-marketplace-4z57n\" (UID: \"ba8c2a41-ddb6-4848-87e0-92b38f11bafa\") " pod="openshift-marketplace/redhat-marketplace-4z57n" Feb 28 13:53:08 crc kubenswrapper[4897]: I0228 13:53:08.113865 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba8c2a41-ddb6-4848-87e0-92b38f11bafa-utilities\") pod \"redhat-marketplace-4z57n\" (UID: \"ba8c2a41-ddb6-4848-87e0-92b38f11bafa\") " pod="openshift-marketplace/redhat-marketplace-4z57n" Feb 28 13:53:08 crc kubenswrapper[4897]: I0228 13:53:08.113906 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4sx2\" (UniqueName: \"kubernetes.io/projected/ba8c2a41-ddb6-4848-87e0-92b38f11bafa-kube-api-access-k4sx2\") pod \"redhat-marketplace-4z57n\" (UID: \"ba8c2a41-ddb6-4848-87e0-92b38f11bafa\") " pod="openshift-marketplace/redhat-marketplace-4z57n" Feb 28 13:53:08 crc kubenswrapper[4897]: I0228 13:53:08.215350 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba8c2a41-ddb6-4848-87e0-92b38f11bafa-catalog-content\") pod \"redhat-marketplace-4z57n\" (UID: \"ba8c2a41-ddb6-4848-87e0-92b38f11bafa\") " pod="openshift-marketplace/redhat-marketplace-4z57n" Feb 28 13:53:08 crc kubenswrapper[4897]: I0228 13:53:08.215476 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba8c2a41-ddb6-4848-87e0-92b38f11bafa-utilities\") pod \"redhat-marketplace-4z57n\" (UID: \"ba8c2a41-ddb6-4848-87e0-92b38f11bafa\") " pod="openshift-marketplace/redhat-marketplace-4z57n" Feb 28 13:53:08 crc kubenswrapper[4897]: I0228 13:53:08.215524 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4sx2\" (UniqueName: \"kubernetes.io/projected/ba8c2a41-ddb6-4848-87e0-92b38f11bafa-kube-api-access-k4sx2\") pod \"redhat-marketplace-4z57n\" (UID: \"ba8c2a41-ddb6-4848-87e0-92b38f11bafa\") " pod="openshift-marketplace/redhat-marketplace-4z57n" Feb 28 13:53:08 crc kubenswrapper[4897]: I0228 13:53:08.215829 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba8c2a41-ddb6-4848-87e0-92b38f11bafa-catalog-content\") pod \"redhat-marketplace-4z57n\" (UID: \"ba8c2a41-ddb6-4848-87e0-92b38f11bafa\") " pod="openshift-marketplace/redhat-marketplace-4z57n" Feb 28 13:53:08 crc kubenswrapper[4897]: I0228 13:53:08.215995 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba8c2a41-ddb6-4848-87e0-92b38f11bafa-utilities\") pod \"redhat-marketplace-4z57n\" (UID: \"ba8c2a41-ddb6-4848-87e0-92b38f11bafa\") " pod="openshift-marketplace/redhat-marketplace-4z57n" Feb 28 13:53:08 crc kubenswrapper[4897]: I0228 13:53:08.237932 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4sx2\" (UniqueName: \"kubernetes.io/projected/ba8c2a41-ddb6-4848-87e0-92b38f11bafa-kube-api-access-k4sx2\") pod \"redhat-marketplace-4z57n\" (UID: \"ba8c2a41-ddb6-4848-87e0-92b38f11bafa\") " pod="openshift-marketplace/redhat-marketplace-4z57n" Feb 28 13:53:08 crc kubenswrapper[4897]: I0228 13:53:08.315648 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4z57n" Feb 28 13:53:08 crc kubenswrapper[4897]: W0228 13:53:08.805852 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba8c2a41_ddb6_4848_87e0_92b38f11bafa.slice/crio-79cd6b83ce702260b44efd27dc2fba8ca2edc6577a01684b41362f5e21fae59a WatchSource:0}: Error finding container 79cd6b83ce702260b44efd27dc2fba8ca2edc6577a01684b41362f5e21fae59a: Status 404 returned error can't find the container with id 79cd6b83ce702260b44efd27dc2fba8ca2edc6577a01684b41362f5e21fae59a Feb 28 13:53:08 crc kubenswrapper[4897]: I0228 13:53:08.813271 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4z57n"] Feb 28 13:53:09 crc kubenswrapper[4897]: I0228 13:53:09.545108 4897 generic.go:334] "Generic (PLEG): container finished" podID="ba8c2a41-ddb6-4848-87e0-92b38f11bafa" containerID="8101e3b47b666cdcdbcbd750dea42de8bc8fdcdfac3efbd51dd4abebcafde19e" exitCode=0 Feb 28 13:53:09 crc kubenswrapper[4897]: I0228 13:53:09.545409 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4z57n" event={"ID":"ba8c2a41-ddb6-4848-87e0-92b38f11bafa","Type":"ContainerDied","Data":"8101e3b47b666cdcdbcbd750dea42de8bc8fdcdfac3efbd51dd4abebcafde19e"} Feb 28 13:53:09 crc kubenswrapper[4897]: I0228 13:53:09.545434 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4z57n" event={"ID":"ba8c2a41-ddb6-4848-87e0-92b38f11bafa","Type":"ContainerStarted","Data":"79cd6b83ce702260b44efd27dc2fba8ca2edc6577a01684b41362f5e21fae59a"} Feb 28 13:53:10 crc kubenswrapper[4897]: I0228 13:53:10.560010 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4z57n" 
event={"ID":"ba8c2a41-ddb6-4848-87e0-92b38f11bafa","Type":"ContainerStarted","Data":"63e18d1b5beee4f002c147b53b0dbb05ecbe1b7a2cf5bf94f602ca4bf5cf95ae"} Feb 28 13:53:11 crc kubenswrapper[4897]: I0228 13:53:11.571147 4897 generic.go:334] "Generic (PLEG): container finished" podID="ba8c2a41-ddb6-4848-87e0-92b38f11bafa" containerID="63e18d1b5beee4f002c147b53b0dbb05ecbe1b7a2cf5bf94f602ca4bf5cf95ae" exitCode=0 Feb 28 13:53:11 crc kubenswrapper[4897]: I0228 13:53:11.571235 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4z57n" event={"ID":"ba8c2a41-ddb6-4848-87e0-92b38f11bafa","Type":"ContainerDied","Data":"63e18d1b5beee4f002c147b53b0dbb05ecbe1b7a2cf5bf94f602ca4bf5cf95ae"} Feb 28 13:53:12 crc kubenswrapper[4897]: I0228 13:53:12.605586 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4z57n" event={"ID":"ba8c2a41-ddb6-4848-87e0-92b38f11bafa","Type":"ContainerStarted","Data":"0037406ac508aba2c5ab34e29e0b260fc0e286e21afcf49f7742bf8e5a0bcd7e"} Feb 28 13:53:12 crc kubenswrapper[4897]: I0228 13:53:12.638286 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4z57n" podStartSLOduration=3.242068453 podStartE2EDuration="5.638265231s" podCreationTimestamp="2026-02-28 13:53:07 +0000 UTC" firstStartedPulling="2026-02-28 13:53:09.546905109 +0000 UTC m=+2203.789225766" lastFinishedPulling="2026-02-28 13:53:11.943101887 +0000 UTC m=+2206.185422544" observedRunningTime="2026-02-28 13:53:12.633276593 +0000 UTC m=+2206.875597250" watchObservedRunningTime="2026-02-28 13:53:12.638265231 +0000 UTC m=+2206.880585898" Feb 28 13:53:18 crc kubenswrapper[4897]: I0228 13:53:18.316514 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4z57n" Feb 28 13:53:18 crc kubenswrapper[4897]: I0228 13:53:18.317093 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-marketplace-4z57n" Feb 28 13:53:18 crc kubenswrapper[4897]: I0228 13:53:18.376470 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4z57n" Feb 28 13:53:18 crc kubenswrapper[4897]: I0228 13:53:18.723627 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4z57n" Feb 28 13:53:21 crc kubenswrapper[4897]: I0228 13:53:21.980392 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4z57n"] Feb 28 13:53:21 crc kubenswrapper[4897]: I0228 13:53:21.981307 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4z57n" podUID="ba8c2a41-ddb6-4848-87e0-92b38f11bafa" containerName="registry-server" containerID="cri-o://0037406ac508aba2c5ab34e29e0b260fc0e286e21afcf49f7742bf8e5a0bcd7e" gracePeriod=2 Feb 28 13:53:22 crc kubenswrapper[4897]: I0228 13:53:22.547484 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4z57n" Feb 28 13:53:22 crc kubenswrapper[4897]: I0228 13:53:22.656421 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4sx2\" (UniqueName: \"kubernetes.io/projected/ba8c2a41-ddb6-4848-87e0-92b38f11bafa-kube-api-access-k4sx2\") pod \"ba8c2a41-ddb6-4848-87e0-92b38f11bafa\" (UID: \"ba8c2a41-ddb6-4848-87e0-92b38f11bafa\") " Feb 28 13:53:22 crc kubenswrapper[4897]: I0228 13:53:22.657416 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba8c2a41-ddb6-4848-87e0-92b38f11bafa-catalog-content\") pod \"ba8c2a41-ddb6-4848-87e0-92b38f11bafa\" (UID: \"ba8c2a41-ddb6-4848-87e0-92b38f11bafa\") " Feb 28 13:53:22 crc kubenswrapper[4897]: I0228 13:53:22.657561 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba8c2a41-ddb6-4848-87e0-92b38f11bafa-utilities\") pod \"ba8c2a41-ddb6-4848-87e0-92b38f11bafa\" (UID: \"ba8c2a41-ddb6-4848-87e0-92b38f11bafa\") " Feb 28 13:53:22 crc kubenswrapper[4897]: I0228 13:53:22.658406 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba8c2a41-ddb6-4848-87e0-92b38f11bafa-utilities" (OuterVolumeSpecName: "utilities") pod "ba8c2a41-ddb6-4848-87e0-92b38f11bafa" (UID: "ba8c2a41-ddb6-4848-87e0-92b38f11bafa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:53:22 crc kubenswrapper[4897]: I0228 13:53:22.663303 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba8c2a41-ddb6-4848-87e0-92b38f11bafa-kube-api-access-k4sx2" (OuterVolumeSpecName: "kube-api-access-k4sx2") pod "ba8c2a41-ddb6-4848-87e0-92b38f11bafa" (UID: "ba8c2a41-ddb6-4848-87e0-92b38f11bafa"). InnerVolumeSpecName "kube-api-access-k4sx2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:53:22 crc kubenswrapper[4897]: I0228 13:53:22.691860 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba8c2a41-ddb6-4848-87e0-92b38f11bafa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba8c2a41-ddb6-4848-87e0-92b38f11bafa" (UID: "ba8c2a41-ddb6-4848-87e0-92b38f11bafa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:53:22 crc kubenswrapper[4897]: I0228 13:53:22.711549 4897 generic.go:334] "Generic (PLEG): container finished" podID="ba8c2a41-ddb6-4848-87e0-92b38f11bafa" containerID="0037406ac508aba2c5ab34e29e0b260fc0e286e21afcf49f7742bf8e5a0bcd7e" exitCode=0 Feb 28 13:53:22 crc kubenswrapper[4897]: I0228 13:53:22.711600 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4z57n" event={"ID":"ba8c2a41-ddb6-4848-87e0-92b38f11bafa","Type":"ContainerDied","Data":"0037406ac508aba2c5ab34e29e0b260fc0e286e21afcf49f7742bf8e5a0bcd7e"} Feb 28 13:53:22 crc kubenswrapper[4897]: I0228 13:53:22.711635 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4z57n" event={"ID":"ba8c2a41-ddb6-4848-87e0-92b38f11bafa","Type":"ContainerDied","Data":"79cd6b83ce702260b44efd27dc2fba8ca2edc6577a01684b41362f5e21fae59a"} Feb 28 13:53:22 crc kubenswrapper[4897]: I0228 13:53:22.711655 4897 scope.go:117] "RemoveContainer" containerID="0037406ac508aba2c5ab34e29e0b260fc0e286e21afcf49f7742bf8e5a0bcd7e" Feb 28 13:53:22 crc kubenswrapper[4897]: I0228 13:53:22.711971 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4z57n" Feb 28 13:53:22 crc kubenswrapper[4897]: I0228 13:53:22.738217 4897 scope.go:117] "RemoveContainer" containerID="63e18d1b5beee4f002c147b53b0dbb05ecbe1b7a2cf5bf94f602ca4bf5cf95ae" Feb 28 13:53:22 crc kubenswrapper[4897]: I0228 13:53:22.756835 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4z57n"] Feb 28 13:53:22 crc kubenswrapper[4897]: I0228 13:53:22.759597 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba8c2a41-ddb6-4848-87e0-92b38f11bafa-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 13:53:22 crc kubenswrapper[4897]: I0228 13:53:22.759615 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba8c2a41-ddb6-4848-87e0-92b38f11bafa-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 13:53:22 crc kubenswrapper[4897]: I0228 13:53:22.759626 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4sx2\" (UniqueName: \"kubernetes.io/projected/ba8c2a41-ddb6-4848-87e0-92b38f11bafa-kube-api-access-k4sx2\") on node \"crc\" DevicePath \"\"" Feb 28 13:53:22 crc kubenswrapper[4897]: I0228 13:53:22.766879 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4z57n"] Feb 28 13:53:22 crc kubenswrapper[4897]: I0228 13:53:22.785725 4897 scope.go:117] "RemoveContainer" containerID="8101e3b47b666cdcdbcbd750dea42de8bc8fdcdfac3efbd51dd4abebcafde19e" Feb 28 13:53:22 crc kubenswrapper[4897]: I0228 13:53:22.807192 4897 scope.go:117] "RemoveContainer" containerID="0037406ac508aba2c5ab34e29e0b260fc0e286e21afcf49f7742bf8e5a0bcd7e" Feb 28 13:53:22 crc kubenswrapper[4897]: E0228 13:53:22.807677 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"0037406ac508aba2c5ab34e29e0b260fc0e286e21afcf49f7742bf8e5a0bcd7e\": container with ID starting with 0037406ac508aba2c5ab34e29e0b260fc0e286e21afcf49f7742bf8e5a0bcd7e not found: ID does not exist" containerID="0037406ac508aba2c5ab34e29e0b260fc0e286e21afcf49f7742bf8e5a0bcd7e" Feb 28 13:53:22 crc kubenswrapper[4897]: I0228 13:53:22.807715 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0037406ac508aba2c5ab34e29e0b260fc0e286e21afcf49f7742bf8e5a0bcd7e"} err="failed to get container status \"0037406ac508aba2c5ab34e29e0b260fc0e286e21afcf49f7742bf8e5a0bcd7e\": rpc error: code = NotFound desc = could not find container \"0037406ac508aba2c5ab34e29e0b260fc0e286e21afcf49f7742bf8e5a0bcd7e\": container with ID starting with 0037406ac508aba2c5ab34e29e0b260fc0e286e21afcf49f7742bf8e5a0bcd7e not found: ID does not exist" Feb 28 13:53:22 crc kubenswrapper[4897]: I0228 13:53:22.807737 4897 scope.go:117] "RemoveContainer" containerID="63e18d1b5beee4f002c147b53b0dbb05ecbe1b7a2cf5bf94f602ca4bf5cf95ae" Feb 28 13:53:22 crc kubenswrapper[4897]: E0228 13:53:22.808054 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63e18d1b5beee4f002c147b53b0dbb05ecbe1b7a2cf5bf94f602ca4bf5cf95ae\": container with ID starting with 63e18d1b5beee4f002c147b53b0dbb05ecbe1b7a2cf5bf94f602ca4bf5cf95ae not found: ID does not exist" containerID="63e18d1b5beee4f002c147b53b0dbb05ecbe1b7a2cf5bf94f602ca4bf5cf95ae" Feb 28 13:53:22 crc kubenswrapper[4897]: I0228 13:53:22.808119 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63e18d1b5beee4f002c147b53b0dbb05ecbe1b7a2cf5bf94f602ca4bf5cf95ae"} err="failed to get container status \"63e18d1b5beee4f002c147b53b0dbb05ecbe1b7a2cf5bf94f602ca4bf5cf95ae\": rpc error: code = NotFound desc = could not find container \"63e18d1b5beee4f002c147b53b0dbb05ecbe1b7a2cf5bf94f602ca4bf5cf95ae\": container with ID 
starting with 63e18d1b5beee4f002c147b53b0dbb05ecbe1b7a2cf5bf94f602ca4bf5cf95ae not found: ID does not exist" Feb 28 13:53:22 crc kubenswrapper[4897]: I0228 13:53:22.808152 4897 scope.go:117] "RemoveContainer" containerID="8101e3b47b666cdcdbcbd750dea42de8bc8fdcdfac3efbd51dd4abebcafde19e" Feb 28 13:53:22 crc kubenswrapper[4897]: E0228 13:53:22.808667 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8101e3b47b666cdcdbcbd750dea42de8bc8fdcdfac3efbd51dd4abebcafde19e\": container with ID starting with 8101e3b47b666cdcdbcbd750dea42de8bc8fdcdfac3efbd51dd4abebcafde19e not found: ID does not exist" containerID="8101e3b47b666cdcdbcbd750dea42de8bc8fdcdfac3efbd51dd4abebcafde19e" Feb 28 13:53:22 crc kubenswrapper[4897]: I0228 13:53:22.808694 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8101e3b47b666cdcdbcbd750dea42de8bc8fdcdfac3efbd51dd4abebcafde19e"} err="failed to get container status \"8101e3b47b666cdcdbcbd750dea42de8bc8fdcdfac3efbd51dd4abebcafde19e\": rpc error: code = NotFound desc = could not find container \"8101e3b47b666cdcdbcbd750dea42de8bc8fdcdfac3efbd51dd4abebcafde19e\": container with ID starting with 8101e3b47b666cdcdbcbd750dea42de8bc8fdcdfac3efbd51dd4abebcafde19e not found: ID does not exist" Feb 28 13:53:24 crc kubenswrapper[4897]: I0228 13:53:24.469010 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba8c2a41-ddb6-4848-87e0-92b38f11bafa" path="/var/lib/kubelet/pods/ba8c2a41-ddb6-4848-87e0-92b38f11bafa/volumes" Feb 28 13:53:33 crc kubenswrapper[4897]: I0228 13:53:33.370891 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 13:53:33 crc kubenswrapper[4897]: I0228 
13:53:33.371383 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 13:54:00 crc kubenswrapper[4897]: I0228 13:54:00.180452 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538114-fhwzf"] Feb 28 13:54:00 crc kubenswrapper[4897]: E0228 13:54:00.181584 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba8c2a41-ddb6-4848-87e0-92b38f11bafa" containerName="extract-utilities" Feb 28 13:54:00 crc kubenswrapper[4897]: I0228 13:54:00.181604 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba8c2a41-ddb6-4848-87e0-92b38f11bafa" containerName="extract-utilities" Feb 28 13:54:00 crc kubenswrapper[4897]: E0228 13:54:00.181632 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba8c2a41-ddb6-4848-87e0-92b38f11bafa" containerName="extract-content" Feb 28 13:54:00 crc kubenswrapper[4897]: I0228 13:54:00.181640 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba8c2a41-ddb6-4848-87e0-92b38f11bafa" containerName="extract-content" Feb 28 13:54:00 crc kubenswrapper[4897]: E0228 13:54:00.181674 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba8c2a41-ddb6-4848-87e0-92b38f11bafa" containerName="registry-server" Feb 28 13:54:00 crc kubenswrapper[4897]: I0228 13:54:00.181683 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba8c2a41-ddb6-4848-87e0-92b38f11bafa" containerName="registry-server" Feb 28 13:54:00 crc kubenswrapper[4897]: I0228 13:54:00.181923 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba8c2a41-ddb6-4848-87e0-92b38f11bafa" containerName="registry-server" Feb 28 13:54:00 crc kubenswrapper[4897]: I0228 13:54:00.182912 4897 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538114-fhwzf" Feb 28 13:54:00 crc kubenswrapper[4897]: I0228 13:54:00.186335 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 13:54:00 crc kubenswrapper[4897]: I0228 13:54:00.187116 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 13:54:00 crc kubenswrapper[4897]: I0228 13:54:00.187124 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 13:54:00 crc kubenswrapper[4897]: I0228 13:54:00.191963 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538114-fhwzf"] Feb 28 13:54:00 crc kubenswrapper[4897]: I0228 13:54:00.206495 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8d2c\" (UniqueName: \"kubernetes.io/projected/9b8ed1ff-c04b-483a-b3ac-21e862d6b5f7-kube-api-access-t8d2c\") pod \"auto-csr-approver-29538114-fhwzf\" (UID: \"9b8ed1ff-c04b-483a-b3ac-21e862d6b5f7\") " pod="openshift-infra/auto-csr-approver-29538114-fhwzf" Feb 28 13:54:00 crc kubenswrapper[4897]: I0228 13:54:00.308157 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8d2c\" (UniqueName: \"kubernetes.io/projected/9b8ed1ff-c04b-483a-b3ac-21e862d6b5f7-kube-api-access-t8d2c\") pod \"auto-csr-approver-29538114-fhwzf\" (UID: \"9b8ed1ff-c04b-483a-b3ac-21e862d6b5f7\") " pod="openshift-infra/auto-csr-approver-29538114-fhwzf" Feb 28 13:54:00 crc kubenswrapper[4897]: I0228 13:54:00.340343 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8d2c\" (UniqueName: \"kubernetes.io/projected/9b8ed1ff-c04b-483a-b3ac-21e862d6b5f7-kube-api-access-t8d2c\") pod \"auto-csr-approver-29538114-fhwzf\" (UID: 
\"9b8ed1ff-c04b-483a-b3ac-21e862d6b5f7\") " pod="openshift-infra/auto-csr-approver-29538114-fhwzf" Feb 28 13:54:00 crc kubenswrapper[4897]: I0228 13:54:00.517050 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538114-fhwzf" Feb 28 13:54:01 crc kubenswrapper[4897]: I0228 13:54:01.012256 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538114-fhwzf"] Feb 28 13:54:01 crc kubenswrapper[4897]: I0228 13:54:01.149122 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538114-fhwzf" event={"ID":"9b8ed1ff-c04b-483a-b3ac-21e862d6b5f7","Type":"ContainerStarted","Data":"d7c86da2d3e57238d1ebbbc584d8c2c106bc003565bff2d44a8710ec028e9e1f"} Feb 28 13:54:02 crc kubenswrapper[4897]: I0228 13:54:02.160296 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538114-fhwzf" event={"ID":"9b8ed1ff-c04b-483a-b3ac-21e862d6b5f7","Type":"ContainerStarted","Data":"6ad5828a8bebd288c060af553f20e731d13d0338bf7dfb913456e608ea62a8d1"} Feb 28 13:54:02 crc kubenswrapper[4897]: I0228 13:54:02.179412 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29538114-fhwzf" podStartSLOduration=1.389407852 podStartE2EDuration="2.17939287s" podCreationTimestamp="2026-02-28 13:54:00 +0000 UTC" firstStartedPulling="2026-02-28 13:54:01.02046088 +0000 UTC m=+2255.262781537" lastFinishedPulling="2026-02-28 13:54:01.810445858 +0000 UTC m=+2256.052766555" observedRunningTime="2026-02-28 13:54:02.177250001 +0000 UTC m=+2256.419570658" watchObservedRunningTime="2026-02-28 13:54:02.17939287 +0000 UTC m=+2256.421713527" Feb 28 13:54:03 crc kubenswrapper[4897]: I0228 13:54:03.179961 4897 generic.go:334] "Generic (PLEG): container finished" podID="9b8ed1ff-c04b-483a-b3ac-21e862d6b5f7" containerID="6ad5828a8bebd288c060af553f20e731d13d0338bf7dfb913456e608ea62a8d1" 
exitCode=0 Feb 28 13:54:03 crc kubenswrapper[4897]: I0228 13:54:03.180078 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538114-fhwzf" event={"ID":"9b8ed1ff-c04b-483a-b3ac-21e862d6b5f7","Type":"ContainerDied","Data":"6ad5828a8bebd288c060af553f20e731d13d0338bf7dfb913456e608ea62a8d1"} Feb 28 13:54:03 crc kubenswrapper[4897]: I0228 13:54:03.371793 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 13:54:03 crc kubenswrapper[4897]: I0228 13:54:03.372440 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 13:54:04 crc kubenswrapper[4897]: I0228 13:54:04.686644 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538114-fhwzf" Feb 28 13:54:04 crc kubenswrapper[4897]: I0228 13:54:04.700995 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8d2c\" (UniqueName: \"kubernetes.io/projected/9b8ed1ff-c04b-483a-b3ac-21e862d6b5f7-kube-api-access-t8d2c\") pod \"9b8ed1ff-c04b-483a-b3ac-21e862d6b5f7\" (UID: \"9b8ed1ff-c04b-483a-b3ac-21e862d6b5f7\") " Feb 28 13:54:04 crc kubenswrapper[4897]: I0228 13:54:04.713580 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b8ed1ff-c04b-483a-b3ac-21e862d6b5f7-kube-api-access-t8d2c" (OuterVolumeSpecName: "kube-api-access-t8d2c") pod "9b8ed1ff-c04b-483a-b3ac-21e862d6b5f7" (UID: "9b8ed1ff-c04b-483a-b3ac-21e862d6b5f7"). InnerVolumeSpecName "kube-api-access-t8d2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:54:04 crc kubenswrapper[4897]: I0228 13:54:04.803345 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8d2c\" (UniqueName: \"kubernetes.io/projected/9b8ed1ff-c04b-483a-b3ac-21e862d6b5f7-kube-api-access-t8d2c\") on node \"crc\" DevicePath \"\"" Feb 28 13:54:05 crc kubenswrapper[4897]: I0228 13:54:05.207833 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538114-fhwzf" event={"ID":"9b8ed1ff-c04b-483a-b3ac-21e862d6b5f7","Type":"ContainerDied","Data":"d7c86da2d3e57238d1ebbbc584d8c2c106bc003565bff2d44a8710ec028e9e1f"} Feb 28 13:54:05 crc kubenswrapper[4897]: I0228 13:54:05.207892 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7c86da2d3e57238d1ebbbc584d8c2c106bc003565bff2d44a8710ec028e9e1f" Feb 28 13:54:05 crc kubenswrapper[4897]: I0228 13:54:05.207958 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538114-fhwzf" Feb 28 13:54:05 crc kubenswrapper[4897]: I0228 13:54:05.295206 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538108-57hh6"] Feb 28 13:54:05 crc kubenswrapper[4897]: I0228 13:54:05.317399 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538108-57hh6"] Feb 28 13:54:06 crc kubenswrapper[4897]: I0228 13:54:06.476783 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b29bcedc-1106-4d1a-b5d9-af0aa213a88e" path="/var/lib/kubelet/pods/b29bcedc-1106-4d1a-b5d9-af0aa213a88e/volumes" Feb 28 13:54:13 crc kubenswrapper[4897]: I0228 13:54:13.310610 4897 generic.go:334] "Generic (PLEG): container finished" podID="474a32f3-7317-40c6-80cb-6e36415a2d5d" containerID="93353228aba8dfb3e0db6e217706834ad2bc247b2532760fa39dc45087d542fd" exitCode=0 Feb 28 13:54:13 crc kubenswrapper[4897]: I0228 13:54:13.310783 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9245w" event={"ID":"474a32f3-7317-40c6-80cb-6e36415a2d5d","Type":"ContainerDied","Data":"93353228aba8dfb3e0db6e217706834ad2bc247b2532760fa39dc45087d542fd"} Feb 28 13:54:14 crc kubenswrapper[4897]: I0228 13:54:14.945016 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9245w" Feb 28 13:54:15 crc kubenswrapper[4897]: I0228 13:54:15.143065 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/474a32f3-7317-40c6-80cb-6e36415a2d5d-ssh-key-openstack-edpm-ipam\") pod \"474a32f3-7317-40c6-80cb-6e36415a2d5d\" (UID: \"474a32f3-7317-40c6-80cb-6e36415a2d5d\") " Feb 28 13:54:15 crc kubenswrapper[4897]: I0228 13:54:15.143964 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/474a32f3-7317-40c6-80cb-6e36415a2d5d-inventory\") pod \"474a32f3-7317-40c6-80cb-6e36415a2d5d\" (UID: \"474a32f3-7317-40c6-80cb-6e36415a2d5d\") " Feb 28 13:54:15 crc kubenswrapper[4897]: I0228 13:54:15.144226 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brppc\" (UniqueName: \"kubernetes.io/projected/474a32f3-7317-40c6-80cb-6e36415a2d5d-kube-api-access-brppc\") pod \"474a32f3-7317-40c6-80cb-6e36415a2d5d\" (UID: \"474a32f3-7317-40c6-80cb-6e36415a2d5d\") " Feb 28 13:54:15 crc kubenswrapper[4897]: I0228 13:54:15.151206 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/474a32f3-7317-40c6-80cb-6e36415a2d5d-kube-api-access-brppc" (OuterVolumeSpecName: "kube-api-access-brppc") pod "474a32f3-7317-40c6-80cb-6e36415a2d5d" (UID: "474a32f3-7317-40c6-80cb-6e36415a2d5d"). InnerVolumeSpecName "kube-api-access-brppc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:54:15 crc kubenswrapper[4897]: I0228 13:54:15.177837 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/474a32f3-7317-40c6-80cb-6e36415a2d5d-inventory" (OuterVolumeSpecName: "inventory") pod "474a32f3-7317-40c6-80cb-6e36415a2d5d" (UID: "474a32f3-7317-40c6-80cb-6e36415a2d5d"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:54:15 crc kubenswrapper[4897]: I0228 13:54:15.197891 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/474a32f3-7317-40c6-80cb-6e36415a2d5d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "474a32f3-7317-40c6-80cb-6e36415a2d5d" (UID: "474a32f3-7317-40c6-80cb-6e36415a2d5d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:54:15 crc kubenswrapper[4897]: I0228 13:54:15.247825 4897 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/474a32f3-7317-40c6-80cb-6e36415a2d5d-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 13:54:15 crc kubenswrapper[4897]: I0228 13:54:15.247895 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brppc\" (UniqueName: \"kubernetes.io/projected/474a32f3-7317-40c6-80cb-6e36415a2d5d-kube-api-access-brppc\") on node \"crc\" DevicePath \"\"" Feb 28 13:54:15 crc kubenswrapper[4897]: I0228 13:54:15.247924 4897 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/474a32f3-7317-40c6-80cb-6e36415a2d5d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 13:54:15 crc kubenswrapper[4897]: I0228 13:54:15.338803 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9245w" event={"ID":"474a32f3-7317-40c6-80cb-6e36415a2d5d","Type":"ContainerDied","Data":"9d9614ec124d25bc1f86892df18b5f8c7e5812fd92bb8f464c726a5ddf7c45f8"} Feb 28 13:54:15 crc kubenswrapper[4897]: I0228 13:54:15.338846 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d9614ec124d25bc1f86892df18b5f8c7e5812fd92bb8f464c726a5ddf7c45f8" Feb 28 13:54:15 crc kubenswrapper[4897]: I0228 
13:54:15.338903 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9245w" Feb 28 13:54:15 crc kubenswrapper[4897]: I0228 13:54:15.448289 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9j6ds"] Feb 28 13:54:15 crc kubenswrapper[4897]: E0228 13:54:15.448965 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="474a32f3-7317-40c6-80cb-6e36415a2d5d" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 28 13:54:15 crc kubenswrapper[4897]: I0228 13:54:15.448993 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="474a32f3-7317-40c6-80cb-6e36415a2d5d" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 28 13:54:15 crc kubenswrapper[4897]: E0228 13:54:15.449013 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b8ed1ff-c04b-483a-b3ac-21e862d6b5f7" containerName="oc" Feb 28 13:54:15 crc kubenswrapper[4897]: I0228 13:54:15.449026 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b8ed1ff-c04b-483a-b3ac-21e862d6b5f7" containerName="oc" Feb 28 13:54:15 crc kubenswrapper[4897]: I0228 13:54:15.449384 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b8ed1ff-c04b-483a-b3ac-21e862d6b5f7" containerName="oc" Feb 28 13:54:15 crc kubenswrapper[4897]: I0228 13:54:15.449433 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="474a32f3-7317-40c6-80cb-6e36415a2d5d" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 28 13:54:15 crc kubenswrapper[4897]: I0228 13:54:15.450618 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9j6ds" Feb 28 13:54:15 crc kubenswrapper[4897]: I0228 13:54:15.455818 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 13:54:15 crc kubenswrapper[4897]: I0228 13:54:15.456163 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhs8l" Feb 28 13:54:15 crc kubenswrapper[4897]: I0228 13:54:15.456250 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 13:54:15 crc kubenswrapper[4897]: I0228 13:54:15.456291 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 13:54:15 crc kubenswrapper[4897]: I0228 13:54:15.459249 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9j6ds"] Feb 28 13:54:15 crc kubenswrapper[4897]: I0228 13:54:15.585952 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bx8t\" (UniqueName: \"kubernetes.io/projected/81fd26ee-0f11-49a1-863c-86aefccd7f6d-kube-api-access-5bx8t\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9j6ds\" (UID: \"81fd26ee-0f11-49a1-863c-86aefccd7f6d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9j6ds" Feb 28 13:54:15 crc kubenswrapper[4897]: I0228 13:54:15.586394 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81fd26ee-0f11-49a1-863c-86aefccd7f6d-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9j6ds\" (UID: \"81fd26ee-0f11-49a1-863c-86aefccd7f6d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9j6ds" Feb 28 13:54:15 crc kubenswrapper[4897]: I0228 
13:54:15.586560 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/81fd26ee-0f11-49a1-863c-86aefccd7f6d-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9j6ds\" (UID: \"81fd26ee-0f11-49a1-863c-86aefccd7f6d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9j6ds" Feb 28 13:54:15 crc kubenswrapper[4897]: I0228 13:54:15.690205 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bx8t\" (UniqueName: \"kubernetes.io/projected/81fd26ee-0f11-49a1-863c-86aefccd7f6d-kube-api-access-5bx8t\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9j6ds\" (UID: \"81fd26ee-0f11-49a1-863c-86aefccd7f6d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9j6ds" Feb 28 13:54:15 crc kubenswrapper[4897]: I0228 13:54:15.690623 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81fd26ee-0f11-49a1-863c-86aefccd7f6d-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9j6ds\" (UID: \"81fd26ee-0f11-49a1-863c-86aefccd7f6d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9j6ds" Feb 28 13:54:15 crc kubenswrapper[4897]: I0228 13:54:15.690738 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/81fd26ee-0f11-49a1-863c-86aefccd7f6d-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9j6ds\" (UID: \"81fd26ee-0f11-49a1-863c-86aefccd7f6d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9j6ds" Feb 28 13:54:15 crc kubenswrapper[4897]: I0228 13:54:15.695185 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/81fd26ee-0f11-49a1-863c-86aefccd7f6d-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9j6ds\" (UID: \"81fd26ee-0f11-49a1-863c-86aefccd7f6d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9j6ds" Feb 28 13:54:15 crc kubenswrapper[4897]: I0228 13:54:15.696027 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/81fd26ee-0f11-49a1-863c-86aefccd7f6d-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9j6ds\" (UID: \"81fd26ee-0f11-49a1-863c-86aefccd7f6d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9j6ds" Feb 28 13:54:15 crc kubenswrapper[4897]: I0228 13:54:15.714789 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bx8t\" (UniqueName: \"kubernetes.io/projected/81fd26ee-0f11-49a1-863c-86aefccd7f6d-kube-api-access-5bx8t\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9j6ds\" (UID: \"81fd26ee-0f11-49a1-863c-86aefccd7f6d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9j6ds" Feb 28 13:54:15 crc kubenswrapper[4897]: I0228 13:54:15.777451 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9j6ds" Feb 28 13:54:16 crc kubenswrapper[4897]: I0228 13:54:16.368295 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9j6ds"] Feb 28 13:54:16 crc kubenswrapper[4897]: W0228 13:54:16.371404 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81fd26ee_0f11_49a1_863c_86aefccd7f6d.slice/crio-e6943a4009133e4efb5177ed0625b4a6d9328d8fd85589bb9ba8d7e41aeb7230 WatchSource:0}: Error finding container e6943a4009133e4efb5177ed0625b4a6d9328d8fd85589bb9ba8d7e41aeb7230: Status 404 returned error can't find the container with id e6943a4009133e4efb5177ed0625b4a6d9328d8fd85589bb9ba8d7e41aeb7230 Feb 28 13:54:17 crc kubenswrapper[4897]: I0228 13:54:17.366627 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9j6ds" event={"ID":"81fd26ee-0f11-49a1-863c-86aefccd7f6d","Type":"ContainerStarted","Data":"6866593d52fe216e56bd3981dbc93d1ac678e2f12cf3c74b0ad988b0ace669af"} Feb 28 13:54:17 crc kubenswrapper[4897]: I0228 13:54:17.367009 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9j6ds" event={"ID":"81fd26ee-0f11-49a1-863c-86aefccd7f6d","Type":"ContainerStarted","Data":"e6943a4009133e4efb5177ed0625b4a6d9328d8fd85589bb9ba8d7e41aeb7230"} Feb 28 13:54:17 crc kubenswrapper[4897]: I0228 13:54:17.393671 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9j6ds" podStartSLOduration=2.008303376 podStartE2EDuration="2.393644519s" podCreationTimestamp="2026-02-28 13:54:15 +0000 UTC" firstStartedPulling="2026-02-28 13:54:16.374231626 +0000 UTC m=+2270.616552293" lastFinishedPulling="2026-02-28 13:54:16.759572739 +0000 UTC 
m=+2271.001893436" observedRunningTime="2026-02-28 13:54:17.392150778 +0000 UTC m=+2271.634471475" watchObservedRunningTime="2026-02-28 13:54:17.393644519 +0000 UTC m=+2271.635965216" Feb 28 13:54:22 crc kubenswrapper[4897]: I0228 13:54:22.421515 4897 generic.go:334] "Generic (PLEG): container finished" podID="81fd26ee-0f11-49a1-863c-86aefccd7f6d" containerID="6866593d52fe216e56bd3981dbc93d1ac678e2f12cf3c74b0ad988b0ace669af" exitCode=0 Feb 28 13:54:22 crc kubenswrapper[4897]: I0228 13:54:22.421605 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9j6ds" event={"ID":"81fd26ee-0f11-49a1-863c-86aefccd7f6d","Type":"ContainerDied","Data":"6866593d52fe216e56bd3981dbc93d1ac678e2f12cf3c74b0ad988b0ace669af"} Feb 28 13:54:24 crc kubenswrapper[4897]: I0228 13:54:24.014893 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9j6ds" Feb 28 13:54:24 crc kubenswrapper[4897]: I0228 13:54:24.041046 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81fd26ee-0f11-49a1-863c-86aefccd7f6d-inventory\") pod \"81fd26ee-0f11-49a1-863c-86aefccd7f6d\" (UID: \"81fd26ee-0f11-49a1-863c-86aefccd7f6d\") " Feb 28 13:54:24 crc kubenswrapper[4897]: I0228 13:54:24.041359 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/81fd26ee-0f11-49a1-863c-86aefccd7f6d-ssh-key-openstack-edpm-ipam\") pod \"81fd26ee-0f11-49a1-863c-86aefccd7f6d\" (UID: \"81fd26ee-0f11-49a1-863c-86aefccd7f6d\") " Feb 28 13:54:24 crc kubenswrapper[4897]: I0228 13:54:24.041442 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bx8t\" (UniqueName: \"kubernetes.io/projected/81fd26ee-0f11-49a1-863c-86aefccd7f6d-kube-api-access-5bx8t\") pod 
\"81fd26ee-0f11-49a1-863c-86aefccd7f6d\" (UID: \"81fd26ee-0f11-49a1-863c-86aefccd7f6d\") " Feb 28 13:54:24 crc kubenswrapper[4897]: I0228 13:54:24.048347 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81fd26ee-0f11-49a1-863c-86aefccd7f6d-kube-api-access-5bx8t" (OuterVolumeSpecName: "kube-api-access-5bx8t") pod "81fd26ee-0f11-49a1-863c-86aefccd7f6d" (UID: "81fd26ee-0f11-49a1-863c-86aefccd7f6d"). InnerVolumeSpecName "kube-api-access-5bx8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:54:24 crc kubenswrapper[4897]: I0228 13:54:24.072809 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81fd26ee-0f11-49a1-863c-86aefccd7f6d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "81fd26ee-0f11-49a1-863c-86aefccd7f6d" (UID: "81fd26ee-0f11-49a1-863c-86aefccd7f6d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:54:24 crc kubenswrapper[4897]: I0228 13:54:24.081585 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81fd26ee-0f11-49a1-863c-86aefccd7f6d-inventory" (OuterVolumeSpecName: "inventory") pod "81fd26ee-0f11-49a1-863c-86aefccd7f6d" (UID: "81fd26ee-0f11-49a1-863c-86aefccd7f6d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:54:24 crc kubenswrapper[4897]: I0228 13:54:24.144452 4897 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/81fd26ee-0f11-49a1-863c-86aefccd7f6d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 13:54:24 crc kubenswrapper[4897]: I0228 13:54:24.144491 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bx8t\" (UniqueName: \"kubernetes.io/projected/81fd26ee-0f11-49a1-863c-86aefccd7f6d-kube-api-access-5bx8t\") on node \"crc\" DevicePath \"\"" Feb 28 13:54:24 crc kubenswrapper[4897]: I0228 13:54:24.144502 4897 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81fd26ee-0f11-49a1-863c-86aefccd7f6d-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 13:54:24 crc kubenswrapper[4897]: I0228 13:54:24.446078 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9j6ds" event={"ID":"81fd26ee-0f11-49a1-863c-86aefccd7f6d","Type":"ContainerDied","Data":"e6943a4009133e4efb5177ed0625b4a6d9328d8fd85589bb9ba8d7e41aeb7230"} Feb 28 13:54:24 crc kubenswrapper[4897]: I0228 13:54:24.446135 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6943a4009133e4efb5177ed0625b4a6d9328d8fd85589bb9ba8d7e41aeb7230" Feb 28 13:54:24 crc kubenswrapper[4897]: I0228 13:54:24.446142 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9j6ds" Feb 28 13:54:24 crc kubenswrapper[4897]: I0228 13:54:24.547513 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-d4ng9"] Feb 28 13:54:24 crc kubenswrapper[4897]: E0228 13:54:24.548107 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81fd26ee-0f11-49a1-863c-86aefccd7f6d" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 28 13:54:24 crc kubenswrapper[4897]: I0228 13:54:24.548138 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="81fd26ee-0f11-49a1-863c-86aefccd7f6d" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 28 13:54:24 crc kubenswrapper[4897]: I0228 13:54:24.549594 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="81fd26ee-0f11-49a1-863c-86aefccd7f6d" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 28 13:54:24 crc kubenswrapper[4897]: I0228 13:54:24.552443 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-d4ng9" Feb 28 13:54:24 crc kubenswrapper[4897]: I0228 13:54:24.558385 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 13:54:24 crc kubenswrapper[4897]: I0228 13:54:24.558584 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhs8l" Feb 28 13:54:24 crc kubenswrapper[4897]: I0228 13:54:24.560737 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 13:54:24 crc kubenswrapper[4897]: I0228 13:54:24.560997 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 13:54:24 crc kubenswrapper[4897]: I0228 13:54:24.561459 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-d4ng9"] Feb 28 13:54:24 crc kubenswrapper[4897]: I0228 13:54:24.655655 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z99sd\" (UniqueName: \"kubernetes.io/projected/8651da53-e976-4395-964b-a5c077d64a26-kube-api-access-z99sd\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-d4ng9\" (UID: \"8651da53-e976-4395-964b-a5c077d64a26\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-d4ng9" Feb 28 13:54:24 crc kubenswrapper[4897]: I0228 13:54:24.655716 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8651da53-e976-4395-964b-a5c077d64a26-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-d4ng9\" (UID: \"8651da53-e976-4395-964b-a5c077d64a26\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-d4ng9" Feb 28 13:54:24 crc kubenswrapper[4897]: I0228 
13:54:24.655924 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8651da53-e976-4395-964b-a5c077d64a26-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-d4ng9\" (UID: \"8651da53-e976-4395-964b-a5c077d64a26\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-d4ng9" Feb 28 13:54:24 crc kubenswrapper[4897]: I0228 13:54:24.759082 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8651da53-e976-4395-964b-a5c077d64a26-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-d4ng9\" (UID: \"8651da53-e976-4395-964b-a5c077d64a26\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-d4ng9" Feb 28 13:54:24 crc kubenswrapper[4897]: I0228 13:54:24.759238 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z99sd\" (UniqueName: \"kubernetes.io/projected/8651da53-e976-4395-964b-a5c077d64a26-kube-api-access-z99sd\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-d4ng9\" (UID: \"8651da53-e976-4395-964b-a5c077d64a26\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-d4ng9" Feb 28 13:54:24 crc kubenswrapper[4897]: I0228 13:54:24.759299 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8651da53-e976-4395-964b-a5c077d64a26-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-d4ng9\" (UID: \"8651da53-e976-4395-964b-a5c077d64a26\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-d4ng9" Feb 28 13:54:24 crc kubenswrapper[4897]: I0228 13:54:24.764389 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/8651da53-e976-4395-964b-a5c077d64a26-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-d4ng9\" (UID: \"8651da53-e976-4395-964b-a5c077d64a26\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-d4ng9" Feb 28 13:54:24 crc kubenswrapper[4897]: I0228 13:54:24.764474 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8651da53-e976-4395-964b-a5c077d64a26-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-d4ng9\" (UID: \"8651da53-e976-4395-964b-a5c077d64a26\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-d4ng9" Feb 28 13:54:24 crc kubenswrapper[4897]: I0228 13:54:24.784524 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z99sd\" (UniqueName: \"kubernetes.io/projected/8651da53-e976-4395-964b-a5c077d64a26-kube-api-access-z99sd\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-d4ng9\" (UID: \"8651da53-e976-4395-964b-a5c077d64a26\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-d4ng9" Feb 28 13:54:24 crc kubenswrapper[4897]: I0228 13:54:24.890844 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-d4ng9" Feb 28 13:54:25 crc kubenswrapper[4897]: I0228 13:54:25.514868 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-d4ng9"] Feb 28 13:54:26 crc kubenswrapper[4897]: I0228 13:54:26.492488 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-d4ng9" event={"ID":"8651da53-e976-4395-964b-a5c077d64a26","Type":"ContainerStarted","Data":"7e385ed5618533a49d4b64f4ddce459a1c788e2b930147f485bd20fd79c05d9d"} Feb 28 13:54:26 crc kubenswrapper[4897]: I0228 13:54:26.493050 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-d4ng9" event={"ID":"8651da53-e976-4395-964b-a5c077d64a26","Type":"ContainerStarted","Data":"f99d0e693f2d4d1608bf4f7a3e795bce05743cfe638e65715653d0e025c0cc54"} Feb 28 13:54:26 crc kubenswrapper[4897]: I0228 13:54:26.559682 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-d4ng9" podStartSLOduration=2.065235962 podStartE2EDuration="2.559654782s" podCreationTimestamp="2026-02-28 13:54:24 +0000 UTC" firstStartedPulling="2026-02-28 13:54:25.524726422 +0000 UTC m=+2279.767047099" lastFinishedPulling="2026-02-28 13:54:26.019145252 +0000 UTC m=+2280.261465919" observedRunningTime="2026-02-28 13:54:26.545010689 +0000 UTC m=+2280.787331386" watchObservedRunningTime="2026-02-28 13:54:26.559654782 +0000 UTC m=+2280.801975459" Feb 28 13:54:33 crc kubenswrapper[4897]: I0228 13:54:33.370999 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 13:54:33 crc 
kubenswrapper[4897]: I0228 13:54:33.371766 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 13:54:33 crc kubenswrapper[4897]: I0228 13:54:33.371848 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brq22" Feb 28 13:54:33 crc kubenswrapper[4897]: I0228 13:54:33.372656 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1d7d52cd4a1b910a0cd15b040805df0f538d171dff5f4ea1ec66df8363a6047e"} pod="openshift-machine-config-operator/machine-config-daemon-brq22" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 13:54:33 crc kubenswrapper[4897]: I0228 13:54:33.372759 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" containerID="cri-o://1d7d52cd4a1b910a0cd15b040805df0f538d171dff5f4ea1ec66df8363a6047e" gracePeriod=600 Feb 28 13:54:33 crc kubenswrapper[4897]: E0228 13:54:33.502343 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:54:33 crc kubenswrapper[4897]: I0228 13:54:33.548686 4897 generic.go:334] "Generic 
(PLEG): container finished" podID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerID="1d7d52cd4a1b910a0cd15b040805df0f538d171dff5f4ea1ec66df8363a6047e" exitCode=0 Feb 28 13:54:33 crc kubenswrapper[4897]: I0228 13:54:33.548740 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerDied","Data":"1d7d52cd4a1b910a0cd15b040805df0f538d171dff5f4ea1ec66df8363a6047e"} Feb 28 13:54:33 crc kubenswrapper[4897]: I0228 13:54:33.548789 4897 scope.go:117] "RemoveContainer" containerID="68cbc528fc9ee62676935060f8ad57ccdbb15ff6bc6647175367c2eeaa5ffc16" Feb 28 13:54:33 crc kubenswrapper[4897]: I0228 13:54:33.549562 4897 scope.go:117] "RemoveContainer" containerID="1d7d52cd4a1b910a0cd15b040805df0f538d171dff5f4ea1ec66df8363a6047e" Feb 28 13:54:33 crc kubenswrapper[4897]: E0228 13:54:33.549952 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:54:46 crc kubenswrapper[4897]: I0228 13:54:46.149472 4897 scope.go:117] "RemoveContainer" containerID="83e64d8b70cfaa6eeb51d930bb1e1ba871717813b303bea4f44c427d0102ad13" Feb 28 13:54:48 crc kubenswrapper[4897]: I0228 13:54:48.456607 4897 scope.go:117] "RemoveContainer" containerID="1d7d52cd4a1b910a0cd15b040805df0f538d171dff5f4ea1ec66df8363a6047e" Feb 28 13:54:48 crc kubenswrapper[4897]: E0228 13:54:48.457165 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:55:00 crc kubenswrapper[4897]: I0228 13:55:00.456385 4897 scope.go:117] "RemoveContainer" containerID="1d7d52cd4a1b910a0cd15b040805df0f538d171dff5f4ea1ec66df8363a6047e" Feb 28 13:55:00 crc kubenswrapper[4897]: E0228 13:55:00.457664 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:55:07 crc kubenswrapper[4897]: I0228 13:55:07.942939 4897 generic.go:334] "Generic (PLEG): container finished" podID="8651da53-e976-4395-964b-a5c077d64a26" containerID="7e385ed5618533a49d4b64f4ddce459a1c788e2b930147f485bd20fd79c05d9d" exitCode=0 Feb 28 13:55:07 crc kubenswrapper[4897]: I0228 13:55:07.943022 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-d4ng9" event={"ID":"8651da53-e976-4395-964b-a5c077d64a26","Type":"ContainerDied","Data":"7e385ed5618533a49d4b64f4ddce459a1c788e2b930147f485bd20fd79c05d9d"} Feb 28 13:55:09 crc kubenswrapper[4897]: I0228 13:55:09.408604 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-d4ng9" Feb 28 13:55:09 crc kubenswrapper[4897]: I0228 13:55:09.460448 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8651da53-e976-4395-964b-a5c077d64a26-inventory\") pod \"8651da53-e976-4395-964b-a5c077d64a26\" (UID: \"8651da53-e976-4395-964b-a5c077d64a26\") " Feb 28 13:55:09 crc kubenswrapper[4897]: I0228 13:55:09.460535 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z99sd\" (UniqueName: \"kubernetes.io/projected/8651da53-e976-4395-964b-a5c077d64a26-kube-api-access-z99sd\") pod \"8651da53-e976-4395-964b-a5c077d64a26\" (UID: \"8651da53-e976-4395-964b-a5c077d64a26\") " Feb 28 13:55:09 crc kubenswrapper[4897]: I0228 13:55:09.460650 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8651da53-e976-4395-964b-a5c077d64a26-ssh-key-openstack-edpm-ipam\") pod \"8651da53-e976-4395-964b-a5c077d64a26\" (UID: \"8651da53-e976-4395-964b-a5c077d64a26\") " Feb 28 13:55:09 crc kubenswrapper[4897]: I0228 13:55:09.472889 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8651da53-e976-4395-964b-a5c077d64a26-kube-api-access-z99sd" (OuterVolumeSpecName: "kube-api-access-z99sd") pod "8651da53-e976-4395-964b-a5c077d64a26" (UID: "8651da53-e976-4395-964b-a5c077d64a26"). InnerVolumeSpecName "kube-api-access-z99sd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:55:09 crc kubenswrapper[4897]: E0228 13:55:09.486404 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8651da53-e976-4395-964b-a5c077d64a26-inventory podName:8651da53-e976-4395-964b-a5c077d64a26 nodeName:}" failed. 
No retries permitted until 2026-02-28 13:55:09.986381455 +0000 UTC m=+2324.228702112 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "inventory" (UniqueName: "kubernetes.io/secret/8651da53-e976-4395-964b-a5c077d64a26-inventory") pod "8651da53-e976-4395-964b-a5c077d64a26" (UID: "8651da53-e976-4395-964b-a5c077d64a26") : error deleting /var/lib/kubelet/pods/8651da53-e976-4395-964b-a5c077d64a26/volume-subpaths: remove /var/lib/kubelet/pods/8651da53-e976-4395-964b-a5c077d64a26/volume-subpaths: no such file or directory Feb 28 13:55:09 crc kubenswrapper[4897]: I0228 13:55:09.488540 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8651da53-e976-4395-964b-a5c077d64a26-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8651da53-e976-4395-964b-a5c077d64a26" (UID: "8651da53-e976-4395-964b-a5c077d64a26"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:55:09 crc kubenswrapper[4897]: I0228 13:55:09.564128 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z99sd\" (UniqueName: \"kubernetes.io/projected/8651da53-e976-4395-964b-a5c077d64a26-kube-api-access-z99sd\") on node \"crc\" DevicePath \"\"" Feb 28 13:55:09 crc kubenswrapper[4897]: I0228 13:55:09.564182 4897 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8651da53-e976-4395-964b-a5c077d64a26-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 13:55:09 crc kubenswrapper[4897]: I0228 13:55:09.972514 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-d4ng9" event={"ID":"8651da53-e976-4395-964b-a5c077d64a26","Type":"ContainerDied","Data":"f99d0e693f2d4d1608bf4f7a3e795bce05743cfe638e65715653d0e025c0cc54"} Feb 28 13:55:09 crc kubenswrapper[4897]: I0228 13:55:09.972568 
4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-d4ng9" Feb 28 13:55:09 crc kubenswrapper[4897]: I0228 13:55:09.972582 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f99d0e693f2d4d1608bf4f7a3e795bce05743cfe638e65715653d0e025c0cc54" Feb 28 13:55:10 crc kubenswrapper[4897]: I0228 13:55:10.074537 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xss7j"] Feb 28 13:55:10 crc kubenswrapper[4897]: I0228 13:55:10.074740 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8651da53-e976-4395-964b-a5c077d64a26-inventory\") pod \"8651da53-e976-4395-964b-a5c077d64a26\" (UID: \"8651da53-e976-4395-964b-a5c077d64a26\") " Feb 28 13:55:10 crc kubenswrapper[4897]: E0228 13:55:10.074932 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8651da53-e976-4395-964b-a5c077d64a26" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 28 13:55:10 crc kubenswrapper[4897]: I0228 13:55:10.074946 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="8651da53-e976-4395-964b-a5c077d64a26" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 28 13:55:10 crc kubenswrapper[4897]: I0228 13:55:10.075126 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="8651da53-e976-4395-964b-a5c077d64a26" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 28 13:55:10 crc kubenswrapper[4897]: I0228 13:55:10.075870 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xss7j" Feb 28 13:55:10 crc kubenswrapper[4897]: I0228 13:55:10.078934 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8651da53-e976-4395-964b-a5c077d64a26-inventory" (OuterVolumeSpecName: "inventory") pod "8651da53-e976-4395-964b-a5c077d64a26" (UID: "8651da53-e976-4395-964b-a5c077d64a26"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:55:10 crc kubenswrapper[4897]: I0228 13:55:10.092691 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xss7j"] Feb 28 13:55:10 crc kubenswrapper[4897]: I0228 13:55:10.176706 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xss7j\" (UID: \"f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xss7j" Feb 28 13:55:10 crc kubenswrapper[4897]: I0228 13:55:10.176767 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xss7j\" (UID: \"f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xss7j" Feb 28 13:55:10 crc kubenswrapper[4897]: I0228 13:55:10.176945 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkvfl\" (UniqueName: \"kubernetes.io/projected/f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21-kube-api-access-vkvfl\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xss7j\" (UID: 
\"f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xss7j" Feb 28 13:55:10 crc kubenswrapper[4897]: I0228 13:55:10.177004 4897 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8651da53-e976-4395-964b-a5c077d64a26-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 13:55:10 crc kubenswrapper[4897]: I0228 13:55:10.278977 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkvfl\" (UniqueName: \"kubernetes.io/projected/f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21-kube-api-access-vkvfl\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xss7j\" (UID: \"f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xss7j" Feb 28 13:55:10 crc kubenswrapper[4897]: I0228 13:55:10.279050 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xss7j\" (UID: \"f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xss7j" Feb 28 13:55:10 crc kubenswrapper[4897]: I0228 13:55:10.279092 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xss7j\" (UID: \"f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xss7j" Feb 28 13:55:10 crc kubenswrapper[4897]: I0228 13:55:10.283668 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21-inventory\") pod 
\"configure-os-edpm-deployment-openstack-edpm-ipam-xss7j\" (UID: \"f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xss7j" Feb 28 13:55:10 crc kubenswrapper[4897]: I0228 13:55:10.283908 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xss7j\" (UID: \"f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xss7j" Feb 28 13:55:10 crc kubenswrapper[4897]: I0228 13:55:10.303672 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkvfl\" (UniqueName: \"kubernetes.io/projected/f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21-kube-api-access-vkvfl\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xss7j\" (UID: \"f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xss7j" Feb 28 13:55:10 crc kubenswrapper[4897]: I0228 13:55:10.455954 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xss7j" Feb 28 13:55:10 crc kubenswrapper[4897]: I0228 13:55:10.998178 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xss7j"] Feb 28 13:55:11 crc kubenswrapper[4897]: W0228 13:55:11.002517 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3a5c5ba_fd5c_468e_b881_4f8cbc47ff21.slice/crio-a1f4eb16d32290acb19ee73ee83264bec91c1db56af83b45122724837cefbb00 WatchSource:0}: Error finding container a1f4eb16d32290acb19ee73ee83264bec91c1db56af83b45122724837cefbb00: Status 404 returned error can't find the container with id a1f4eb16d32290acb19ee73ee83264bec91c1db56af83b45122724837cefbb00 Feb 28 13:55:11 crc kubenswrapper[4897]: I0228 13:55:11.884389 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nwwf2"] Feb 28 13:55:11 crc kubenswrapper[4897]: I0228 13:55:11.896568 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nwwf2" Feb 28 13:55:11 crc kubenswrapper[4897]: I0228 13:55:11.908838 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nwwf2"] Feb 28 13:55:11 crc kubenswrapper[4897]: I0228 13:55:11.991996 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xss7j" event={"ID":"f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21","Type":"ContainerStarted","Data":"7ad2a572c7675940b792b6ddbe46bb826228ce65e3691ded62afa0a23a4ad899"} Feb 28 13:55:11 crc kubenswrapper[4897]: I0228 13:55:11.992056 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xss7j" event={"ID":"f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21","Type":"ContainerStarted","Data":"a1f4eb16d32290acb19ee73ee83264bec91c1db56af83b45122724837cefbb00"} Feb 28 13:55:12 crc kubenswrapper[4897]: I0228 13:55:12.015677 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xss7j" podStartSLOduration=1.4332655970000001 podStartE2EDuration="2.015650692s" podCreationTimestamp="2026-02-28 13:55:10 +0000 UTC" firstStartedPulling="2026-02-28 13:55:11.004175028 +0000 UTC m=+2325.246495695" lastFinishedPulling="2026-02-28 13:55:11.586560123 +0000 UTC m=+2325.828880790" observedRunningTime="2026-02-28 13:55:12.01013104 +0000 UTC m=+2326.252451717" watchObservedRunningTime="2026-02-28 13:55:12.015650692 +0000 UTC m=+2326.257971359" Feb 28 13:55:12 crc kubenswrapper[4897]: I0228 13:55:12.025143 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98bfe98b-6cd9-47c2-b32e-c3eae119410f-utilities\") pod \"certified-operators-nwwf2\" (UID: \"98bfe98b-6cd9-47c2-b32e-c3eae119410f\") " pod="openshift-marketplace/certified-operators-nwwf2" Feb 
28 13:55:12 crc kubenswrapper[4897]: I0228 13:55:12.025447 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df6b4\" (UniqueName: \"kubernetes.io/projected/98bfe98b-6cd9-47c2-b32e-c3eae119410f-kube-api-access-df6b4\") pod \"certified-operators-nwwf2\" (UID: \"98bfe98b-6cd9-47c2-b32e-c3eae119410f\") " pod="openshift-marketplace/certified-operators-nwwf2" Feb 28 13:55:12 crc kubenswrapper[4897]: I0228 13:55:12.025778 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98bfe98b-6cd9-47c2-b32e-c3eae119410f-catalog-content\") pod \"certified-operators-nwwf2\" (UID: \"98bfe98b-6cd9-47c2-b32e-c3eae119410f\") " pod="openshift-marketplace/certified-operators-nwwf2" Feb 28 13:55:12 crc kubenswrapper[4897]: I0228 13:55:12.127437 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98bfe98b-6cd9-47c2-b32e-c3eae119410f-catalog-content\") pod \"certified-operators-nwwf2\" (UID: \"98bfe98b-6cd9-47c2-b32e-c3eae119410f\") " pod="openshift-marketplace/certified-operators-nwwf2" Feb 28 13:55:12 crc kubenswrapper[4897]: I0228 13:55:12.127639 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98bfe98b-6cd9-47c2-b32e-c3eae119410f-utilities\") pod \"certified-operators-nwwf2\" (UID: \"98bfe98b-6cd9-47c2-b32e-c3eae119410f\") " pod="openshift-marketplace/certified-operators-nwwf2" Feb 28 13:55:12 crc kubenswrapper[4897]: I0228 13:55:12.127683 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df6b4\" (UniqueName: \"kubernetes.io/projected/98bfe98b-6cd9-47c2-b32e-c3eae119410f-kube-api-access-df6b4\") pod \"certified-operators-nwwf2\" (UID: \"98bfe98b-6cd9-47c2-b32e-c3eae119410f\") " 
pod="openshift-marketplace/certified-operators-nwwf2" Feb 28 13:55:12 crc kubenswrapper[4897]: I0228 13:55:12.128089 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98bfe98b-6cd9-47c2-b32e-c3eae119410f-catalog-content\") pod \"certified-operators-nwwf2\" (UID: \"98bfe98b-6cd9-47c2-b32e-c3eae119410f\") " pod="openshift-marketplace/certified-operators-nwwf2" Feb 28 13:55:12 crc kubenswrapper[4897]: I0228 13:55:12.128466 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98bfe98b-6cd9-47c2-b32e-c3eae119410f-utilities\") pod \"certified-operators-nwwf2\" (UID: \"98bfe98b-6cd9-47c2-b32e-c3eae119410f\") " pod="openshift-marketplace/certified-operators-nwwf2" Feb 28 13:55:12 crc kubenswrapper[4897]: I0228 13:55:12.151375 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df6b4\" (UniqueName: \"kubernetes.io/projected/98bfe98b-6cd9-47c2-b32e-c3eae119410f-kube-api-access-df6b4\") pod \"certified-operators-nwwf2\" (UID: \"98bfe98b-6cd9-47c2-b32e-c3eae119410f\") " pod="openshift-marketplace/certified-operators-nwwf2" Feb 28 13:55:12 crc kubenswrapper[4897]: I0228 13:55:12.225373 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nwwf2" Feb 28 13:55:12 crc kubenswrapper[4897]: I0228 13:55:12.463463 4897 scope.go:117] "RemoveContainer" containerID="1d7d52cd4a1b910a0cd15b040805df0f538d171dff5f4ea1ec66df8363a6047e" Feb 28 13:55:12 crc kubenswrapper[4897]: E0228 13:55:12.464222 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:55:12 crc kubenswrapper[4897]: I0228 13:55:12.774789 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nwwf2"] Feb 28 13:55:12 crc kubenswrapper[4897]: W0228 13:55:12.774869 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98bfe98b_6cd9_47c2_b32e_c3eae119410f.slice/crio-95a94b0a7055042c508c1c51c5dccbfccafa01661adb6c3ff5b927905084971d WatchSource:0}: Error finding container 95a94b0a7055042c508c1c51c5dccbfccafa01661adb6c3ff5b927905084971d: Status 404 returned error can't find the container with id 95a94b0a7055042c508c1c51c5dccbfccafa01661adb6c3ff5b927905084971d Feb 28 13:55:13 crc kubenswrapper[4897]: I0228 13:55:13.000820 4897 generic.go:334] "Generic (PLEG): container finished" podID="98bfe98b-6cd9-47c2-b32e-c3eae119410f" containerID="bf122e6f16de16eb0ed3d98ac8e463c4d5a0d3238abc03c5124210b129db8ef2" exitCode=0 Feb 28 13:55:13 crc kubenswrapper[4897]: I0228 13:55:13.001003 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nwwf2" 
event={"ID":"98bfe98b-6cd9-47c2-b32e-c3eae119410f","Type":"ContainerDied","Data":"bf122e6f16de16eb0ed3d98ac8e463c4d5a0d3238abc03c5124210b129db8ef2"} Feb 28 13:55:13 crc kubenswrapper[4897]: I0228 13:55:13.001176 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nwwf2" event={"ID":"98bfe98b-6cd9-47c2-b32e-c3eae119410f","Type":"ContainerStarted","Data":"95a94b0a7055042c508c1c51c5dccbfccafa01661adb6c3ff5b927905084971d"} Feb 28 13:55:13 crc kubenswrapper[4897]: E0228 13:55:13.734014 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=c649bff3a95aba662bcce2caae28c3708550f8f13ed49f0891408168e71420e7/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 28 13:55:13 crc kubenswrapper[4897]: E0228 13:55:13.734517 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-df6b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-nwwf2_openshift-marketplace(98bfe98b-6cd9-47c2-b32e-c3eae119410f): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=c649bff3a95aba662bcce2caae28c3708550f8f13ed49f0891408168e71420e7/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 13:55:13 crc kubenswrapper[4897]: E0228 13:55:13.735763 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=c649bff3a95aba662bcce2caae28c3708550f8f13ed49f0891408168e71420e7/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/certified-operators-nwwf2" podUID="98bfe98b-6cd9-47c2-b32e-c3eae119410f" Feb 28 13:55:14 crc kubenswrapper[4897]: E0228 13:55:14.011298 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-nwwf2" podUID="98bfe98b-6cd9-47c2-b32e-c3eae119410f" Feb 28 13:55:23 crc kubenswrapper[4897]: I0228 13:55:23.456889 4897 scope.go:117] "RemoveContainer" containerID="1d7d52cd4a1b910a0cd15b040805df0f538d171dff5f4ea1ec66df8363a6047e" Feb 28 13:55:23 crc kubenswrapper[4897]: E0228 13:55:23.457921 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:55:31 crc kubenswrapper[4897]: I0228 13:55:31.222869 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nwwf2" event={"ID":"98bfe98b-6cd9-47c2-b32e-c3eae119410f","Type":"ContainerStarted","Data":"26014a54b25428a688422ba662f963b63cbc5457eeb9e9baa3b2c30d516f8611"} Feb 28 13:55:32 crc kubenswrapper[4897]: I0228 13:55:32.238452 4897 generic.go:334] "Generic (PLEG): container finished" podID="98bfe98b-6cd9-47c2-b32e-c3eae119410f" containerID="26014a54b25428a688422ba662f963b63cbc5457eeb9e9baa3b2c30d516f8611" exitCode=0 Feb 28 13:55:32 crc 
kubenswrapper[4897]: I0228 13:55:32.238507 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nwwf2" event={"ID":"98bfe98b-6cd9-47c2-b32e-c3eae119410f","Type":"ContainerDied","Data":"26014a54b25428a688422ba662f963b63cbc5457eeb9e9baa3b2c30d516f8611"} Feb 28 13:55:33 crc kubenswrapper[4897]: I0228 13:55:33.251595 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nwwf2" event={"ID":"98bfe98b-6cd9-47c2-b32e-c3eae119410f","Type":"ContainerStarted","Data":"34eea073d75418420b4a6f44b32eb3f49bf8aab6ce087d26867bffbd869d71a1"} Feb 28 13:55:33 crc kubenswrapper[4897]: I0228 13:55:33.287173 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nwwf2" podStartSLOduration=2.674706429 podStartE2EDuration="22.287156775s" podCreationTimestamp="2026-02-28 13:55:11 +0000 UTC" firstStartedPulling="2026-02-28 13:55:13.003610998 +0000 UTC m=+2327.245931655" lastFinishedPulling="2026-02-28 13:55:32.616061304 +0000 UTC m=+2346.858382001" observedRunningTime="2026-02-28 13:55:33.28044903 +0000 UTC m=+2347.522769687" watchObservedRunningTime="2026-02-28 13:55:33.287156775 +0000 UTC m=+2347.529477432" Feb 28 13:55:35 crc kubenswrapper[4897]: I0228 13:55:35.457112 4897 scope.go:117] "RemoveContainer" containerID="1d7d52cd4a1b910a0cd15b040805df0f538d171dff5f4ea1ec66df8363a6047e" Feb 28 13:55:35 crc kubenswrapper[4897]: E0228 13:55:35.457810 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:55:42 crc kubenswrapper[4897]: I0228 13:55:42.226213 4897 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nwwf2" Feb 28 13:55:42 crc kubenswrapper[4897]: I0228 13:55:42.227038 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nwwf2" Feb 28 13:55:42 crc kubenswrapper[4897]: I0228 13:55:42.293568 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nwwf2" Feb 28 13:55:42 crc kubenswrapper[4897]: I0228 13:55:42.412887 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nwwf2" Feb 28 13:55:43 crc kubenswrapper[4897]: I0228 13:55:43.080067 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nwwf2"] Feb 28 13:55:44 crc kubenswrapper[4897]: I0228 13:55:44.382178 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nwwf2" podUID="98bfe98b-6cd9-47c2-b32e-c3eae119410f" containerName="registry-server" containerID="cri-o://34eea073d75418420b4a6f44b32eb3f49bf8aab6ce087d26867bffbd869d71a1" gracePeriod=2 Feb 28 13:55:44 crc kubenswrapper[4897]: I0228 13:55:44.866720 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nwwf2" Feb 28 13:55:45 crc kubenswrapper[4897]: I0228 13:55:45.054603 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98bfe98b-6cd9-47c2-b32e-c3eae119410f-catalog-content\") pod \"98bfe98b-6cd9-47c2-b32e-c3eae119410f\" (UID: \"98bfe98b-6cd9-47c2-b32e-c3eae119410f\") " Feb 28 13:55:45 crc kubenswrapper[4897]: I0228 13:55:45.054713 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98bfe98b-6cd9-47c2-b32e-c3eae119410f-utilities\") pod \"98bfe98b-6cd9-47c2-b32e-c3eae119410f\" (UID: \"98bfe98b-6cd9-47c2-b32e-c3eae119410f\") " Feb 28 13:55:45 crc kubenswrapper[4897]: I0228 13:55:45.054927 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-df6b4\" (UniqueName: \"kubernetes.io/projected/98bfe98b-6cd9-47c2-b32e-c3eae119410f-kube-api-access-df6b4\") pod \"98bfe98b-6cd9-47c2-b32e-c3eae119410f\" (UID: \"98bfe98b-6cd9-47c2-b32e-c3eae119410f\") " Feb 28 13:55:45 crc kubenswrapper[4897]: I0228 13:55:45.055616 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98bfe98b-6cd9-47c2-b32e-c3eae119410f-utilities" (OuterVolumeSpecName: "utilities") pod "98bfe98b-6cd9-47c2-b32e-c3eae119410f" (UID: "98bfe98b-6cd9-47c2-b32e-c3eae119410f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:55:45 crc kubenswrapper[4897]: I0228 13:55:45.063057 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98bfe98b-6cd9-47c2-b32e-c3eae119410f-kube-api-access-df6b4" (OuterVolumeSpecName: "kube-api-access-df6b4") pod "98bfe98b-6cd9-47c2-b32e-c3eae119410f" (UID: "98bfe98b-6cd9-47c2-b32e-c3eae119410f"). InnerVolumeSpecName "kube-api-access-df6b4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:55:45 crc kubenswrapper[4897]: I0228 13:55:45.121413 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98bfe98b-6cd9-47c2-b32e-c3eae119410f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "98bfe98b-6cd9-47c2-b32e-c3eae119410f" (UID: "98bfe98b-6cd9-47c2-b32e-c3eae119410f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 13:55:45 crc kubenswrapper[4897]: I0228 13:55:45.157870 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98bfe98b-6cd9-47c2-b32e-c3eae119410f-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 13:55:45 crc kubenswrapper[4897]: I0228 13:55:45.157901 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-df6b4\" (UniqueName: \"kubernetes.io/projected/98bfe98b-6cd9-47c2-b32e-c3eae119410f-kube-api-access-df6b4\") on node \"crc\" DevicePath \"\"" Feb 28 13:55:45 crc kubenswrapper[4897]: I0228 13:55:45.157913 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98bfe98b-6cd9-47c2-b32e-c3eae119410f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 13:55:45 crc kubenswrapper[4897]: I0228 13:55:45.397511 4897 generic.go:334] "Generic (PLEG): container finished" podID="98bfe98b-6cd9-47c2-b32e-c3eae119410f" containerID="34eea073d75418420b4a6f44b32eb3f49bf8aab6ce087d26867bffbd869d71a1" exitCode=0 Feb 28 13:55:45 crc kubenswrapper[4897]: I0228 13:55:45.397564 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nwwf2" event={"ID":"98bfe98b-6cd9-47c2-b32e-c3eae119410f","Type":"ContainerDied","Data":"34eea073d75418420b4a6f44b32eb3f49bf8aab6ce087d26867bffbd869d71a1"} Feb 28 13:55:45 crc kubenswrapper[4897]: I0228 13:55:45.397607 4897 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nwwf2" Feb 28 13:55:45 crc kubenswrapper[4897]: I0228 13:55:45.397635 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nwwf2" event={"ID":"98bfe98b-6cd9-47c2-b32e-c3eae119410f","Type":"ContainerDied","Data":"95a94b0a7055042c508c1c51c5dccbfccafa01661adb6c3ff5b927905084971d"} Feb 28 13:55:45 crc kubenswrapper[4897]: I0228 13:55:45.397669 4897 scope.go:117] "RemoveContainer" containerID="34eea073d75418420b4a6f44b32eb3f49bf8aab6ce087d26867bffbd869d71a1" Feb 28 13:55:45 crc kubenswrapper[4897]: I0228 13:55:45.434782 4897 scope.go:117] "RemoveContainer" containerID="26014a54b25428a688422ba662f963b63cbc5457eeb9e9baa3b2c30d516f8611" Feb 28 13:55:45 crc kubenswrapper[4897]: I0228 13:55:45.454653 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nwwf2"] Feb 28 13:55:45 crc kubenswrapper[4897]: I0228 13:55:45.471269 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nwwf2"] Feb 28 13:55:45 crc kubenswrapper[4897]: I0228 13:55:45.477521 4897 scope.go:117] "RemoveContainer" containerID="bf122e6f16de16eb0ed3d98ac8e463c4d5a0d3238abc03c5124210b129db8ef2" Feb 28 13:55:45 crc kubenswrapper[4897]: I0228 13:55:45.513159 4897 scope.go:117] "RemoveContainer" containerID="34eea073d75418420b4a6f44b32eb3f49bf8aab6ce087d26867bffbd869d71a1" Feb 28 13:55:45 crc kubenswrapper[4897]: E0228 13:55:45.513692 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34eea073d75418420b4a6f44b32eb3f49bf8aab6ce087d26867bffbd869d71a1\": container with ID starting with 34eea073d75418420b4a6f44b32eb3f49bf8aab6ce087d26867bffbd869d71a1 not found: ID does not exist" containerID="34eea073d75418420b4a6f44b32eb3f49bf8aab6ce087d26867bffbd869d71a1" Feb 28 13:55:45 crc kubenswrapper[4897]: I0228 13:55:45.513761 
4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34eea073d75418420b4a6f44b32eb3f49bf8aab6ce087d26867bffbd869d71a1"} err="failed to get container status \"34eea073d75418420b4a6f44b32eb3f49bf8aab6ce087d26867bffbd869d71a1\": rpc error: code = NotFound desc = could not find container \"34eea073d75418420b4a6f44b32eb3f49bf8aab6ce087d26867bffbd869d71a1\": container with ID starting with 34eea073d75418420b4a6f44b32eb3f49bf8aab6ce087d26867bffbd869d71a1 not found: ID does not exist" Feb 28 13:55:45 crc kubenswrapper[4897]: I0228 13:55:45.513829 4897 scope.go:117] "RemoveContainer" containerID="26014a54b25428a688422ba662f963b63cbc5457eeb9e9baa3b2c30d516f8611" Feb 28 13:55:45 crc kubenswrapper[4897]: E0228 13:55:45.514706 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26014a54b25428a688422ba662f963b63cbc5457eeb9e9baa3b2c30d516f8611\": container with ID starting with 26014a54b25428a688422ba662f963b63cbc5457eeb9e9baa3b2c30d516f8611 not found: ID does not exist" containerID="26014a54b25428a688422ba662f963b63cbc5457eeb9e9baa3b2c30d516f8611" Feb 28 13:55:45 crc kubenswrapper[4897]: I0228 13:55:45.514805 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26014a54b25428a688422ba662f963b63cbc5457eeb9e9baa3b2c30d516f8611"} err="failed to get container status \"26014a54b25428a688422ba662f963b63cbc5457eeb9e9baa3b2c30d516f8611\": rpc error: code = NotFound desc = could not find container \"26014a54b25428a688422ba662f963b63cbc5457eeb9e9baa3b2c30d516f8611\": container with ID starting with 26014a54b25428a688422ba662f963b63cbc5457eeb9e9baa3b2c30d516f8611 not found: ID does not exist" Feb 28 13:55:45 crc kubenswrapper[4897]: I0228 13:55:45.514881 4897 scope.go:117] "RemoveContainer" containerID="bf122e6f16de16eb0ed3d98ac8e463c4d5a0d3238abc03c5124210b129db8ef2" Feb 28 13:55:45 crc kubenswrapper[4897]: E0228 
13:55:45.515556 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf122e6f16de16eb0ed3d98ac8e463c4d5a0d3238abc03c5124210b129db8ef2\": container with ID starting with bf122e6f16de16eb0ed3d98ac8e463c4d5a0d3238abc03c5124210b129db8ef2 not found: ID does not exist" containerID="bf122e6f16de16eb0ed3d98ac8e463c4d5a0d3238abc03c5124210b129db8ef2" Feb 28 13:55:45 crc kubenswrapper[4897]: I0228 13:55:45.515674 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf122e6f16de16eb0ed3d98ac8e463c4d5a0d3238abc03c5124210b129db8ef2"} err="failed to get container status \"bf122e6f16de16eb0ed3d98ac8e463c4d5a0d3238abc03c5124210b129db8ef2\": rpc error: code = NotFound desc = could not find container \"bf122e6f16de16eb0ed3d98ac8e463c4d5a0d3238abc03c5124210b129db8ef2\": container with ID starting with bf122e6f16de16eb0ed3d98ac8e463c4d5a0d3238abc03c5124210b129db8ef2 not found: ID does not exist" Feb 28 13:55:46 crc kubenswrapper[4897]: I0228 13:55:46.478756 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98bfe98b-6cd9-47c2-b32e-c3eae119410f" path="/var/lib/kubelet/pods/98bfe98b-6cd9-47c2-b32e-c3eae119410f/volumes" Feb 28 13:55:49 crc kubenswrapper[4897]: I0228 13:55:49.456595 4897 scope.go:117] "RemoveContainer" containerID="1d7d52cd4a1b910a0cd15b040805df0f538d171dff5f4ea1ec66df8363a6047e" Feb 28 13:55:49 crc kubenswrapper[4897]: E0228 13:55:49.457438 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:56:00 crc kubenswrapper[4897]: I0228 13:56:00.171952 
4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538116-hqwjz"] Feb 28 13:56:00 crc kubenswrapper[4897]: E0228 13:56:00.173657 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98bfe98b-6cd9-47c2-b32e-c3eae119410f" containerName="registry-server" Feb 28 13:56:00 crc kubenswrapper[4897]: I0228 13:56:00.173690 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="98bfe98b-6cd9-47c2-b32e-c3eae119410f" containerName="registry-server" Feb 28 13:56:00 crc kubenswrapper[4897]: E0228 13:56:00.173730 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98bfe98b-6cd9-47c2-b32e-c3eae119410f" containerName="extract-content" Feb 28 13:56:00 crc kubenswrapper[4897]: I0228 13:56:00.173746 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="98bfe98b-6cd9-47c2-b32e-c3eae119410f" containerName="extract-content" Feb 28 13:56:00 crc kubenswrapper[4897]: E0228 13:56:00.173796 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98bfe98b-6cd9-47c2-b32e-c3eae119410f" containerName="extract-utilities" Feb 28 13:56:00 crc kubenswrapper[4897]: I0228 13:56:00.173813 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="98bfe98b-6cd9-47c2-b32e-c3eae119410f" containerName="extract-utilities" Feb 28 13:56:00 crc kubenswrapper[4897]: I0228 13:56:00.174239 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="98bfe98b-6cd9-47c2-b32e-c3eae119410f" containerName="registry-server" Feb 28 13:56:00 crc kubenswrapper[4897]: I0228 13:56:00.175786 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538116-hqwjz" Feb 28 13:56:00 crc kubenswrapper[4897]: I0228 13:56:00.179532 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 13:56:00 crc kubenswrapper[4897]: I0228 13:56:00.180035 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 13:56:00 crc kubenswrapper[4897]: I0228 13:56:00.180382 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 13:56:00 crc kubenswrapper[4897]: I0228 13:56:00.187174 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538116-hqwjz"] Feb 28 13:56:00 crc kubenswrapper[4897]: I0228 13:56:00.332184 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnx9k\" (UniqueName: \"kubernetes.io/projected/94669ad4-7c91-4a5a-b8d3-3b62b154ce57-kube-api-access-mnx9k\") pod \"auto-csr-approver-29538116-hqwjz\" (UID: \"94669ad4-7c91-4a5a-b8d3-3b62b154ce57\") " pod="openshift-infra/auto-csr-approver-29538116-hqwjz" Feb 28 13:56:00 crc kubenswrapper[4897]: I0228 13:56:00.435630 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnx9k\" (UniqueName: \"kubernetes.io/projected/94669ad4-7c91-4a5a-b8d3-3b62b154ce57-kube-api-access-mnx9k\") pod \"auto-csr-approver-29538116-hqwjz\" (UID: \"94669ad4-7c91-4a5a-b8d3-3b62b154ce57\") " pod="openshift-infra/auto-csr-approver-29538116-hqwjz" Feb 28 13:56:00 crc kubenswrapper[4897]: I0228 13:56:00.485767 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnx9k\" (UniqueName: \"kubernetes.io/projected/94669ad4-7c91-4a5a-b8d3-3b62b154ce57-kube-api-access-mnx9k\") pod \"auto-csr-approver-29538116-hqwjz\" (UID: \"94669ad4-7c91-4a5a-b8d3-3b62b154ce57\") " 
pod="openshift-infra/auto-csr-approver-29538116-hqwjz" Feb 28 13:56:00 crc kubenswrapper[4897]: I0228 13:56:00.496424 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538116-hqwjz" Feb 28 13:56:01 crc kubenswrapper[4897]: I0228 13:56:01.029925 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538116-hqwjz"] Feb 28 13:56:01 crc kubenswrapper[4897]: I0228 13:56:01.456436 4897 scope.go:117] "RemoveContainer" containerID="1d7d52cd4a1b910a0cd15b040805df0f538d171dff5f4ea1ec66df8363a6047e" Feb 28 13:56:01 crc kubenswrapper[4897]: E0228 13:56:01.456890 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:56:01 crc kubenswrapper[4897]: I0228 13:56:01.584187 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538116-hqwjz" event={"ID":"94669ad4-7c91-4a5a-b8d3-3b62b154ce57","Type":"ContainerStarted","Data":"d889046e4423f7526bc4c1dc4da8f7fb708619f722e1b83db9de9982a0236cb6"} Feb 28 13:56:03 crc kubenswrapper[4897]: I0228 13:56:03.605172 4897 generic.go:334] "Generic (PLEG): container finished" podID="94669ad4-7c91-4a5a-b8d3-3b62b154ce57" containerID="cc7a1378c3f5453fe3574630478a94e9fa202a3e65488c7424520bd18fd20234" exitCode=0 Feb 28 13:56:03 crc kubenswrapper[4897]: I0228 13:56:03.605291 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538116-hqwjz" event={"ID":"94669ad4-7c91-4a5a-b8d3-3b62b154ce57","Type":"ContainerDied","Data":"cc7a1378c3f5453fe3574630478a94e9fa202a3e65488c7424520bd18fd20234"} 
Feb 28 13:56:03 crc kubenswrapper[4897]: I0228 13:56:03.608268 4897 generic.go:334] "Generic (PLEG): container finished" podID="f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21" containerID="7ad2a572c7675940b792b6ddbe46bb826228ce65e3691ded62afa0a23a4ad899" exitCode=0 Feb 28 13:56:03 crc kubenswrapper[4897]: I0228 13:56:03.608301 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xss7j" event={"ID":"f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21","Type":"ContainerDied","Data":"7ad2a572c7675940b792b6ddbe46bb826228ce65e3691ded62afa0a23a4ad899"} Feb 28 13:56:05 crc kubenswrapper[4897]: I0228 13:56:05.137493 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538116-hqwjz" Feb 28 13:56:05 crc kubenswrapper[4897]: I0228 13:56:05.146077 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xss7j" Feb 28 13:56:05 crc kubenswrapper[4897]: I0228 13:56:05.259160 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnx9k\" (UniqueName: \"kubernetes.io/projected/94669ad4-7c91-4a5a-b8d3-3b62b154ce57-kube-api-access-mnx9k\") pod \"94669ad4-7c91-4a5a-b8d3-3b62b154ce57\" (UID: \"94669ad4-7c91-4a5a-b8d3-3b62b154ce57\") " Feb 28 13:56:05 crc kubenswrapper[4897]: I0228 13:56:05.259201 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkvfl\" (UniqueName: \"kubernetes.io/projected/f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21-kube-api-access-vkvfl\") pod \"f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21\" (UID: \"f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21\") " Feb 28 13:56:05 crc kubenswrapper[4897]: I0228 13:56:05.259308 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21-inventory\") pod 
\"f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21\" (UID: \"f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21\") " Feb 28 13:56:05 crc kubenswrapper[4897]: I0228 13:56:05.259411 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21-ssh-key-openstack-edpm-ipam\") pod \"f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21\" (UID: \"f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21\") " Feb 28 13:56:05 crc kubenswrapper[4897]: I0228 13:56:05.266537 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94669ad4-7c91-4a5a-b8d3-3b62b154ce57-kube-api-access-mnx9k" (OuterVolumeSpecName: "kube-api-access-mnx9k") pod "94669ad4-7c91-4a5a-b8d3-3b62b154ce57" (UID: "94669ad4-7c91-4a5a-b8d3-3b62b154ce57"). InnerVolumeSpecName "kube-api-access-mnx9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:56:05 crc kubenswrapper[4897]: I0228 13:56:05.266595 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21-kube-api-access-vkvfl" (OuterVolumeSpecName: "kube-api-access-vkvfl") pod "f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21" (UID: "f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21"). InnerVolumeSpecName "kube-api-access-vkvfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:56:05 crc kubenswrapper[4897]: I0228 13:56:05.298179 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21-inventory" (OuterVolumeSpecName: "inventory") pod "f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21" (UID: "f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:56:05 crc kubenswrapper[4897]: I0228 13:56:05.308364 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21" (UID: "f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:56:05 crc kubenswrapper[4897]: I0228 13:56:05.361999 4897 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 13:56:05 crc kubenswrapper[4897]: I0228 13:56:05.362151 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnx9k\" (UniqueName: \"kubernetes.io/projected/94669ad4-7c91-4a5a-b8d3-3b62b154ce57-kube-api-access-mnx9k\") on node \"crc\" DevicePath \"\"" Feb 28 13:56:05 crc kubenswrapper[4897]: I0228 13:56:05.362226 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkvfl\" (UniqueName: \"kubernetes.io/projected/f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21-kube-api-access-vkvfl\") on node \"crc\" DevicePath \"\"" Feb 28 13:56:05 crc kubenswrapper[4897]: I0228 13:56:05.362283 4897 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 13:56:05 crc kubenswrapper[4897]: I0228 13:56:05.678097 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xss7j" event={"ID":"f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21","Type":"ContainerDied","Data":"a1f4eb16d32290acb19ee73ee83264bec91c1db56af83b45122724837cefbb00"} Feb 28 13:56:05 crc 
kubenswrapper[4897]: I0228 13:56:05.678297 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1f4eb16d32290acb19ee73ee83264bec91c1db56af83b45122724837cefbb00" Feb 28 13:56:05 crc kubenswrapper[4897]: I0228 13:56:05.678421 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xss7j" Feb 28 13:56:05 crc kubenswrapper[4897]: I0228 13:56:05.689961 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538116-hqwjz" event={"ID":"94669ad4-7c91-4a5a-b8d3-3b62b154ce57","Type":"ContainerDied","Data":"d889046e4423f7526bc4c1dc4da8f7fb708619f722e1b83db9de9982a0236cb6"} Feb 28 13:56:05 crc kubenswrapper[4897]: I0228 13:56:05.690162 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d889046e4423f7526bc4c1dc4da8f7fb708619f722e1b83db9de9982a0236cb6" Feb 28 13:56:05 crc kubenswrapper[4897]: I0228 13:56:05.690292 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538116-hqwjz" Feb 28 13:56:05 crc kubenswrapper[4897]: I0228 13:56:05.739943 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-vrbrt"] Feb 28 13:56:05 crc kubenswrapper[4897]: E0228 13:56:05.740585 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94669ad4-7c91-4a5a-b8d3-3b62b154ce57" containerName="oc" Feb 28 13:56:05 crc kubenswrapper[4897]: I0228 13:56:05.740653 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="94669ad4-7c91-4a5a-b8d3-3b62b154ce57" containerName="oc" Feb 28 13:56:05 crc kubenswrapper[4897]: E0228 13:56:05.740719 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 28 13:56:05 crc kubenswrapper[4897]: I0228 13:56:05.740771 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 28 13:56:05 crc kubenswrapper[4897]: I0228 13:56:05.741035 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="94669ad4-7c91-4a5a-b8d3-3b62b154ce57" containerName="oc" Feb 28 13:56:05 crc kubenswrapper[4897]: I0228 13:56:05.741121 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 28 13:56:05 crc kubenswrapper[4897]: I0228 13:56:05.741922 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vrbrt" Feb 28 13:56:05 crc kubenswrapper[4897]: I0228 13:56:05.743719 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhs8l" Feb 28 13:56:05 crc kubenswrapper[4897]: I0228 13:56:05.743765 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 13:56:05 crc kubenswrapper[4897]: I0228 13:56:05.744002 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 13:56:05 crc kubenswrapper[4897]: I0228 13:56:05.744766 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 13:56:05 crc kubenswrapper[4897]: I0228 13:56:05.754891 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-vrbrt"] Feb 28 13:56:05 crc kubenswrapper[4897]: I0228 13:56:05.872112 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0a198568-b27e-4e65-bc3f-6b70f3184b6b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-vrbrt\" (UID: \"0a198568-b27e-4e65-bc3f-6b70f3184b6b\") " pod="openstack/ssh-known-hosts-edpm-deployment-vrbrt" Feb 28 13:56:05 crc kubenswrapper[4897]: I0228 13:56:05.872225 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9s52\" (UniqueName: \"kubernetes.io/projected/0a198568-b27e-4e65-bc3f-6b70f3184b6b-kube-api-access-j9s52\") pod \"ssh-known-hosts-edpm-deployment-vrbrt\" (UID: \"0a198568-b27e-4e65-bc3f-6b70f3184b6b\") " pod="openstack/ssh-known-hosts-edpm-deployment-vrbrt" Feb 28 13:56:05 crc kubenswrapper[4897]: I0228 13:56:05.872465 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a198568-b27e-4e65-bc3f-6b70f3184b6b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-vrbrt\" (UID: \"0a198568-b27e-4e65-bc3f-6b70f3184b6b\") " pod="openstack/ssh-known-hosts-edpm-deployment-vrbrt" Feb 28 13:56:05 crc kubenswrapper[4897]: I0228 13:56:05.973914 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0a198568-b27e-4e65-bc3f-6b70f3184b6b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-vrbrt\" (UID: \"0a198568-b27e-4e65-bc3f-6b70f3184b6b\") " pod="openstack/ssh-known-hosts-edpm-deployment-vrbrt" Feb 28 13:56:05 crc kubenswrapper[4897]: I0228 13:56:05.974011 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9s52\" (UniqueName: \"kubernetes.io/projected/0a198568-b27e-4e65-bc3f-6b70f3184b6b-kube-api-access-j9s52\") pod \"ssh-known-hosts-edpm-deployment-vrbrt\" (UID: \"0a198568-b27e-4e65-bc3f-6b70f3184b6b\") " pod="openstack/ssh-known-hosts-edpm-deployment-vrbrt" Feb 28 13:56:05 crc kubenswrapper[4897]: I0228 13:56:05.974170 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a198568-b27e-4e65-bc3f-6b70f3184b6b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-vrbrt\" (UID: \"0a198568-b27e-4e65-bc3f-6b70f3184b6b\") " pod="openstack/ssh-known-hosts-edpm-deployment-vrbrt" Feb 28 13:56:05 crc kubenswrapper[4897]: I0228 13:56:05.980136 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0a198568-b27e-4e65-bc3f-6b70f3184b6b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-vrbrt\" (UID: \"0a198568-b27e-4e65-bc3f-6b70f3184b6b\") " pod="openstack/ssh-known-hosts-edpm-deployment-vrbrt" Feb 28 13:56:05 crc kubenswrapper[4897]: I0228 13:56:05.981256 4897 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a198568-b27e-4e65-bc3f-6b70f3184b6b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-vrbrt\" (UID: \"0a198568-b27e-4e65-bc3f-6b70f3184b6b\") " pod="openstack/ssh-known-hosts-edpm-deployment-vrbrt" Feb 28 13:56:05 crc kubenswrapper[4897]: I0228 13:56:05.991358 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9s52\" (UniqueName: \"kubernetes.io/projected/0a198568-b27e-4e65-bc3f-6b70f3184b6b-kube-api-access-j9s52\") pod \"ssh-known-hosts-edpm-deployment-vrbrt\" (UID: \"0a198568-b27e-4e65-bc3f-6b70f3184b6b\") " pod="openstack/ssh-known-hosts-edpm-deployment-vrbrt" Feb 28 13:56:06 crc kubenswrapper[4897]: I0228 13:56:06.065540 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vrbrt" Feb 28 13:56:06 crc kubenswrapper[4897]: I0228 13:56:06.218054 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538110-4xsfk"] Feb 28 13:56:06 crc kubenswrapper[4897]: I0228 13:56:06.234925 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538110-4xsfk"] Feb 28 13:56:06 crc kubenswrapper[4897]: I0228 13:56:06.470787 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4ea4b91-46bf-4b38-a2e1-7370d12072ca" path="/var/lib/kubelet/pods/e4ea4b91-46bf-4b38-a2e1-7370d12072ca/volumes" Feb 28 13:56:06 crc kubenswrapper[4897]: I0228 13:56:06.697031 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-vrbrt"] Feb 28 13:56:07 crc kubenswrapper[4897]: I0228 13:56:07.713679 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vrbrt" 
event={"ID":"0a198568-b27e-4e65-bc3f-6b70f3184b6b","Type":"ContainerStarted","Data":"7f11fe30281c3569ebdb14e711fb326ad16d8a02736cfdd91f089fa064e87aa4"} Feb 28 13:56:07 crc kubenswrapper[4897]: I0228 13:56:07.714431 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vrbrt" event={"ID":"0a198568-b27e-4e65-bc3f-6b70f3184b6b","Type":"ContainerStarted","Data":"7d10c1fa83a6868ad2e7f30d45c5b71942f64999498f896b850ab339a09a12ad"} Feb 28 13:56:07 crc kubenswrapper[4897]: I0228 13:56:07.744994 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-vrbrt" podStartSLOduration=2.356768291 podStartE2EDuration="2.744965582s" podCreationTimestamp="2026-02-28 13:56:05 +0000 UTC" firstStartedPulling="2026-02-28 13:56:06.701717252 +0000 UTC m=+2380.944037919" lastFinishedPulling="2026-02-28 13:56:07.089914513 +0000 UTC m=+2381.332235210" observedRunningTime="2026-02-28 13:56:07.736254162 +0000 UTC m=+2381.978574839" watchObservedRunningTime="2026-02-28 13:56:07.744965582 +0000 UTC m=+2381.987286279" Feb 28 13:56:14 crc kubenswrapper[4897]: I0228 13:56:14.788884 4897 generic.go:334] "Generic (PLEG): container finished" podID="0a198568-b27e-4e65-bc3f-6b70f3184b6b" containerID="7f11fe30281c3569ebdb14e711fb326ad16d8a02736cfdd91f089fa064e87aa4" exitCode=0 Feb 28 13:56:14 crc kubenswrapper[4897]: I0228 13:56:14.789003 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vrbrt" event={"ID":"0a198568-b27e-4e65-bc3f-6b70f3184b6b","Type":"ContainerDied","Data":"7f11fe30281c3569ebdb14e711fb326ad16d8a02736cfdd91f089fa064e87aa4"} Feb 28 13:56:15 crc kubenswrapper[4897]: I0228 13:56:15.456724 4897 scope.go:117] "RemoveContainer" containerID="1d7d52cd4a1b910a0cd15b040805df0f538d171dff5f4ea1ec66df8363a6047e" Feb 28 13:56:15 crc kubenswrapper[4897]: E0228 13:56:15.457112 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:56:16 crc kubenswrapper[4897]: I0228 13:56:16.177507 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vrbrt" Feb 28 13:56:16 crc kubenswrapper[4897]: I0228 13:56:16.221212 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a198568-b27e-4e65-bc3f-6b70f3184b6b-ssh-key-openstack-edpm-ipam\") pod \"0a198568-b27e-4e65-bc3f-6b70f3184b6b\" (UID: \"0a198568-b27e-4e65-bc3f-6b70f3184b6b\") " Feb 28 13:56:16 crc kubenswrapper[4897]: I0228 13:56:16.221322 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0a198568-b27e-4e65-bc3f-6b70f3184b6b-inventory-0\") pod \"0a198568-b27e-4e65-bc3f-6b70f3184b6b\" (UID: \"0a198568-b27e-4e65-bc3f-6b70f3184b6b\") " Feb 28 13:56:16 crc kubenswrapper[4897]: I0228 13:56:16.221434 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9s52\" (UniqueName: \"kubernetes.io/projected/0a198568-b27e-4e65-bc3f-6b70f3184b6b-kube-api-access-j9s52\") pod \"0a198568-b27e-4e65-bc3f-6b70f3184b6b\" (UID: \"0a198568-b27e-4e65-bc3f-6b70f3184b6b\") " Feb 28 13:56:16 crc kubenswrapper[4897]: I0228 13:56:16.230081 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a198568-b27e-4e65-bc3f-6b70f3184b6b-kube-api-access-j9s52" (OuterVolumeSpecName: "kube-api-access-j9s52") pod "0a198568-b27e-4e65-bc3f-6b70f3184b6b" (UID: 
"0a198568-b27e-4e65-bc3f-6b70f3184b6b"). InnerVolumeSpecName "kube-api-access-j9s52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:56:16 crc kubenswrapper[4897]: I0228 13:56:16.256379 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a198568-b27e-4e65-bc3f-6b70f3184b6b-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "0a198568-b27e-4e65-bc3f-6b70f3184b6b" (UID: "0a198568-b27e-4e65-bc3f-6b70f3184b6b"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:56:16 crc kubenswrapper[4897]: I0228 13:56:16.282124 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a198568-b27e-4e65-bc3f-6b70f3184b6b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0a198568-b27e-4e65-bc3f-6b70f3184b6b" (UID: "0a198568-b27e-4e65-bc3f-6b70f3184b6b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:56:16 crc kubenswrapper[4897]: I0228 13:56:16.324328 4897 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a198568-b27e-4e65-bc3f-6b70f3184b6b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 13:56:16 crc kubenswrapper[4897]: I0228 13:56:16.324367 4897 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0a198568-b27e-4e65-bc3f-6b70f3184b6b-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 28 13:56:16 crc kubenswrapper[4897]: I0228 13:56:16.324380 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9s52\" (UniqueName: \"kubernetes.io/projected/0a198568-b27e-4e65-bc3f-6b70f3184b6b-kube-api-access-j9s52\") on node \"crc\" DevicePath \"\"" Feb 28 13:56:16 crc kubenswrapper[4897]: I0228 13:56:16.810644 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ssh-known-hosts-edpm-deployment-vrbrt" event={"ID":"0a198568-b27e-4e65-bc3f-6b70f3184b6b","Type":"ContainerDied","Data":"7d10c1fa83a6868ad2e7f30d45c5b71942f64999498f896b850ab339a09a12ad"} Feb 28 13:56:16 crc kubenswrapper[4897]: I0228 13:56:16.810688 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d10c1fa83a6868ad2e7f30d45c5b71942f64999498f896b850ab339a09a12ad" Feb 28 13:56:16 crc kubenswrapper[4897]: I0228 13:56:16.810784 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vrbrt" Feb 28 13:56:16 crc kubenswrapper[4897]: I0228 13:56:16.907950 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-ccmbs"] Feb 28 13:56:16 crc kubenswrapper[4897]: E0228 13:56:16.908988 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a198568-b27e-4e65-bc3f-6b70f3184b6b" containerName="ssh-known-hosts-edpm-deployment" Feb 28 13:56:16 crc kubenswrapper[4897]: I0228 13:56:16.909017 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a198568-b27e-4e65-bc3f-6b70f3184b6b" containerName="ssh-known-hosts-edpm-deployment" Feb 28 13:56:16 crc kubenswrapper[4897]: I0228 13:56:16.909421 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a198568-b27e-4e65-bc3f-6b70f3184b6b" containerName="ssh-known-hosts-edpm-deployment" Feb 28 13:56:16 crc kubenswrapper[4897]: I0228 13:56:16.910587 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ccmbs" Feb 28 13:56:16 crc kubenswrapper[4897]: I0228 13:56:16.912900 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 13:56:16 crc kubenswrapper[4897]: I0228 13:56:16.912955 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 13:56:16 crc kubenswrapper[4897]: I0228 13:56:16.913644 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 13:56:16 crc kubenswrapper[4897]: I0228 13:56:16.913727 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhs8l" Feb 28 13:56:16 crc kubenswrapper[4897]: I0228 13:56:16.925611 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-ccmbs"] Feb 28 13:56:16 crc kubenswrapper[4897]: I0228 13:56:16.938610 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jj4v\" (UniqueName: \"kubernetes.io/projected/bfdbf8bc-0180-406e-884b-cfd88b6ae1a3-kube-api-access-7jj4v\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ccmbs\" (UID: \"bfdbf8bc-0180-406e-884b-cfd88b6ae1a3\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ccmbs" Feb 28 13:56:16 crc kubenswrapper[4897]: I0228 13:56:16.938700 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bfdbf8bc-0180-406e-884b-cfd88b6ae1a3-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ccmbs\" (UID: \"bfdbf8bc-0180-406e-884b-cfd88b6ae1a3\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ccmbs" Feb 28 13:56:16 crc kubenswrapper[4897]: I0228 13:56:16.939411 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bfdbf8bc-0180-406e-884b-cfd88b6ae1a3-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ccmbs\" (UID: \"bfdbf8bc-0180-406e-884b-cfd88b6ae1a3\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ccmbs" Feb 28 13:56:17 crc kubenswrapper[4897]: I0228 13:56:17.041875 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bfdbf8bc-0180-406e-884b-cfd88b6ae1a3-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ccmbs\" (UID: \"bfdbf8bc-0180-406e-884b-cfd88b6ae1a3\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ccmbs" Feb 28 13:56:17 crc kubenswrapper[4897]: I0228 13:56:17.042023 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jj4v\" (UniqueName: \"kubernetes.io/projected/bfdbf8bc-0180-406e-884b-cfd88b6ae1a3-kube-api-access-7jj4v\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ccmbs\" (UID: \"bfdbf8bc-0180-406e-884b-cfd88b6ae1a3\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ccmbs" Feb 28 13:56:17 crc kubenswrapper[4897]: I0228 13:56:17.042056 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bfdbf8bc-0180-406e-884b-cfd88b6ae1a3-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ccmbs\" (UID: \"bfdbf8bc-0180-406e-884b-cfd88b6ae1a3\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ccmbs" Feb 28 13:56:17 crc kubenswrapper[4897]: I0228 13:56:17.047497 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bfdbf8bc-0180-406e-884b-cfd88b6ae1a3-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-ccmbs\" (UID: \"bfdbf8bc-0180-406e-884b-cfd88b6ae1a3\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ccmbs" Feb 28 13:56:17 crc kubenswrapper[4897]: I0228 13:56:17.047619 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bfdbf8bc-0180-406e-884b-cfd88b6ae1a3-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ccmbs\" (UID: \"bfdbf8bc-0180-406e-884b-cfd88b6ae1a3\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ccmbs" Feb 28 13:56:17 crc kubenswrapper[4897]: I0228 13:56:17.073878 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jj4v\" (UniqueName: \"kubernetes.io/projected/bfdbf8bc-0180-406e-884b-cfd88b6ae1a3-kube-api-access-7jj4v\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ccmbs\" (UID: \"bfdbf8bc-0180-406e-884b-cfd88b6ae1a3\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ccmbs" Feb 28 13:56:17 crc kubenswrapper[4897]: I0228 13:56:17.246108 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ccmbs" Feb 28 13:56:17 crc kubenswrapper[4897]: I0228 13:56:17.871625 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-ccmbs"] Feb 28 13:56:18 crc kubenswrapper[4897]: I0228 13:56:18.833055 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ccmbs" event={"ID":"bfdbf8bc-0180-406e-884b-cfd88b6ae1a3","Type":"ContainerStarted","Data":"348bc81875e82724314719fdb114fc171a8c7fd17192d28780a3ece9d13df9a5"} Feb 28 13:56:18 crc kubenswrapper[4897]: I0228 13:56:18.835373 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ccmbs" event={"ID":"bfdbf8bc-0180-406e-884b-cfd88b6ae1a3","Type":"ContainerStarted","Data":"c7d5b2d70513b6db15dd0cb697928c93b843a264a324fb0eabb288f4e1f465a6"} Feb 28 13:56:18 crc kubenswrapper[4897]: I0228 13:56:18.857358 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ccmbs" podStartSLOduration=2.40348779 podStartE2EDuration="2.857341902s" podCreationTimestamp="2026-02-28 13:56:16 +0000 UTC" firstStartedPulling="2026-02-28 13:56:17.874538108 +0000 UTC m=+2392.116858785" lastFinishedPulling="2026-02-28 13:56:18.3283922 +0000 UTC m=+2392.570712897" observedRunningTime="2026-02-28 13:56:18.852581291 +0000 UTC m=+2393.094901948" watchObservedRunningTime="2026-02-28 13:56:18.857341902 +0000 UTC m=+2393.099662559" Feb 28 13:56:26 crc kubenswrapper[4897]: I0228 13:56:26.461905 4897 scope.go:117] "RemoveContainer" containerID="1d7d52cd4a1b910a0cd15b040805df0f538d171dff5f4ea1ec66df8363a6047e" Feb 28 13:56:26 crc kubenswrapper[4897]: E0228 13:56:26.462771 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:56:26 crc kubenswrapper[4897]: I0228 13:56:26.930383 4897 generic.go:334] "Generic (PLEG): container finished" podID="bfdbf8bc-0180-406e-884b-cfd88b6ae1a3" containerID="348bc81875e82724314719fdb114fc171a8c7fd17192d28780a3ece9d13df9a5" exitCode=0 Feb 28 13:56:26 crc kubenswrapper[4897]: I0228 13:56:26.930517 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ccmbs" event={"ID":"bfdbf8bc-0180-406e-884b-cfd88b6ae1a3","Type":"ContainerDied","Data":"348bc81875e82724314719fdb114fc171a8c7fd17192d28780a3ece9d13df9a5"} Feb 28 13:56:28 crc kubenswrapper[4897]: I0228 13:56:28.409684 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ccmbs" Feb 28 13:56:28 crc kubenswrapper[4897]: I0228 13:56:28.512573 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jj4v\" (UniqueName: \"kubernetes.io/projected/bfdbf8bc-0180-406e-884b-cfd88b6ae1a3-kube-api-access-7jj4v\") pod \"bfdbf8bc-0180-406e-884b-cfd88b6ae1a3\" (UID: \"bfdbf8bc-0180-406e-884b-cfd88b6ae1a3\") " Feb 28 13:56:28 crc kubenswrapper[4897]: I0228 13:56:28.512733 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bfdbf8bc-0180-406e-884b-cfd88b6ae1a3-inventory\") pod \"bfdbf8bc-0180-406e-884b-cfd88b6ae1a3\" (UID: \"bfdbf8bc-0180-406e-884b-cfd88b6ae1a3\") " Feb 28 13:56:28 crc kubenswrapper[4897]: I0228 13:56:28.512778 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/bfdbf8bc-0180-406e-884b-cfd88b6ae1a3-ssh-key-openstack-edpm-ipam\") pod \"bfdbf8bc-0180-406e-884b-cfd88b6ae1a3\" (UID: \"bfdbf8bc-0180-406e-884b-cfd88b6ae1a3\") " Feb 28 13:56:28 crc kubenswrapper[4897]: I0228 13:56:28.518072 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfdbf8bc-0180-406e-884b-cfd88b6ae1a3-kube-api-access-7jj4v" (OuterVolumeSpecName: "kube-api-access-7jj4v") pod "bfdbf8bc-0180-406e-884b-cfd88b6ae1a3" (UID: "bfdbf8bc-0180-406e-884b-cfd88b6ae1a3"). InnerVolumeSpecName "kube-api-access-7jj4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:56:28 crc kubenswrapper[4897]: I0228 13:56:28.542452 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfdbf8bc-0180-406e-884b-cfd88b6ae1a3-inventory" (OuterVolumeSpecName: "inventory") pod "bfdbf8bc-0180-406e-884b-cfd88b6ae1a3" (UID: "bfdbf8bc-0180-406e-884b-cfd88b6ae1a3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:56:28 crc kubenswrapper[4897]: I0228 13:56:28.544570 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfdbf8bc-0180-406e-884b-cfd88b6ae1a3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bfdbf8bc-0180-406e-884b-cfd88b6ae1a3" (UID: "bfdbf8bc-0180-406e-884b-cfd88b6ae1a3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:56:28 crc kubenswrapper[4897]: I0228 13:56:28.615010 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jj4v\" (UniqueName: \"kubernetes.io/projected/bfdbf8bc-0180-406e-884b-cfd88b6ae1a3-kube-api-access-7jj4v\") on node \"crc\" DevicePath \"\"" Feb 28 13:56:28 crc kubenswrapper[4897]: I0228 13:56:28.615041 4897 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bfdbf8bc-0180-406e-884b-cfd88b6ae1a3-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 13:56:28 crc kubenswrapper[4897]: I0228 13:56:28.615054 4897 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bfdbf8bc-0180-406e-884b-cfd88b6ae1a3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 13:56:28 crc kubenswrapper[4897]: I0228 13:56:28.953903 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ccmbs" event={"ID":"bfdbf8bc-0180-406e-884b-cfd88b6ae1a3","Type":"ContainerDied","Data":"c7d5b2d70513b6db15dd0cb697928c93b843a264a324fb0eabb288f4e1f465a6"} Feb 28 13:56:28 crc kubenswrapper[4897]: I0228 13:56:28.953944 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7d5b2d70513b6db15dd0cb697928c93b843a264a324fb0eabb288f4e1f465a6" Feb 28 13:56:28 crc kubenswrapper[4897]: I0228 13:56:28.953962 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ccmbs" Feb 28 13:56:29 crc kubenswrapper[4897]: I0228 13:56:29.071052 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vxmr9"] Feb 28 13:56:29 crc kubenswrapper[4897]: E0228 13:56:29.071461 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfdbf8bc-0180-406e-884b-cfd88b6ae1a3" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 28 13:56:29 crc kubenswrapper[4897]: I0228 13:56:29.071478 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfdbf8bc-0180-406e-884b-cfd88b6ae1a3" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 28 13:56:29 crc kubenswrapper[4897]: I0228 13:56:29.071673 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfdbf8bc-0180-406e-884b-cfd88b6ae1a3" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 28 13:56:29 crc kubenswrapper[4897]: I0228 13:56:29.072372 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vxmr9" Feb 28 13:56:29 crc kubenswrapper[4897]: I0228 13:56:29.074778 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 13:56:29 crc kubenswrapper[4897]: I0228 13:56:29.075771 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 13:56:29 crc kubenswrapper[4897]: I0228 13:56:29.076410 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhs8l" Feb 28 13:56:29 crc kubenswrapper[4897]: I0228 13:56:29.076655 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 13:56:29 crc kubenswrapper[4897]: I0228 13:56:29.088446 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vxmr9"] Feb 28 13:56:29 crc kubenswrapper[4897]: I0228 13:56:29.229510 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b6d041b-3a22-45fa-bd9e-33dea9dc98aa-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vxmr9\" (UID: \"0b6d041b-3a22-45fa-bd9e-33dea9dc98aa\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vxmr9" Feb 28 13:56:29 crc kubenswrapper[4897]: I0228 13:56:29.230003 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b6d041b-3a22-45fa-bd9e-33dea9dc98aa-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vxmr9\" (UID: \"0b6d041b-3a22-45fa-bd9e-33dea9dc98aa\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vxmr9" Feb 28 13:56:29 crc kubenswrapper[4897]: I0228 13:56:29.230149 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cxc2\" (UniqueName: \"kubernetes.io/projected/0b6d041b-3a22-45fa-bd9e-33dea9dc98aa-kube-api-access-7cxc2\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vxmr9\" (UID: \"0b6d041b-3a22-45fa-bd9e-33dea9dc98aa\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vxmr9" Feb 28 13:56:29 crc kubenswrapper[4897]: I0228 13:56:29.332312 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b6d041b-3a22-45fa-bd9e-33dea9dc98aa-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vxmr9\" (UID: \"0b6d041b-3a22-45fa-bd9e-33dea9dc98aa\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vxmr9" Feb 28 13:56:29 crc kubenswrapper[4897]: I0228 13:56:29.332376 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b6d041b-3a22-45fa-bd9e-33dea9dc98aa-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vxmr9\" (UID: \"0b6d041b-3a22-45fa-bd9e-33dea9dc98aa\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vxmr9" Feb 28 13:56:29 crc kubenswrapper[4897]: I0228 13:56:29.332444 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cxc2\" (UniqueName: \"kubernetes.io/projected/0b6d041b-3a22-45fa-bd9e-33dea9dc98aa-kube-api-access-7cxc2\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vxmr9\" (UID: \"0b6d041b-3a22-45fa-bd9e-33dea9dc98aa\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vxmr9" Feb 28 13:56:29 crc kubenswrapper[4897]: I0228 13:56:29.338456 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/0b6d041b-3a22-45fa-bd9e-33dea9dc98aa-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vxmr9\" (UID: \"0b6d041b-3a22-45fa-bd9e-33dea9dc98aa\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vxmr9" Feb 28 13:56:29 crc kubenswrapper[4897]: I0228 13:56:29.339034 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b6d041b-3a22-45fa-bd9e-33dea9dc98aa-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vxmr9\" (UID: \"0b6d041b-3a22-45fa-bd9e-33dea9dc98aa\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vxmr9" Feb 28 13:56:29 crc kubenswrapper[4897]: I0228 13:56:29.360567 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cxc2\" (UniqueName: \"kubernetes.io/projected/0b6d041b-3a22-45fa-bd9e-33dea9dc98aa-kube-api-access-7cxc2\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vxmr9\" (UID: \"0b6d041b-3a22-45fa-bd9e-33dea9dc98aa\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vxmr9" Feb 28 13:56:29 crc kubenswrapper[4897]: I0228 13:56:29.398018 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vxmr9" Feb 28 13:56:30 crc kubenswrapper[4897]: I0228 13:56:30.139417 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vxmr9"] Feb 28 13:56:30 crc kubenswrapper[4897]: I0228 13:56:30.149263 4897 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 13:56:30 crc kubenswrapper[4897]: I0228 13:56:30.975337 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vxmr9" event={"ID":"0b6d041b-3a22-45fa-bd9e-33dea9dc98aa","Type":"ContainerStarted","Data":"ed12574670d23b72dc1dad3ad60c053bcbbb35715622a3592890d1186545f507"} Feb 28 13:56:30 crc kubenswrapper[4897]: I0228 13:56:30.975587 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vxmr9" event={"ID":"0b6d041b-3a22-45fa-bd9e-33dea9dc98aa","Type":"ContainerStarted","Data":"59f173207ccc4bbb920681e62cb2ec8022704ff5e51ea34a609962a5a456f763"} Feb 28 13:56:30 crc kubenswrapper[4897]: I0228 13:56:30.993015 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vxmr9" podStartSLOduration=1.596453109 podStartE2EDuration="1.993000431s" podCreationTimestamp="2026-02-28 13:56:29 +0000 UTC" firstStartedPulling="2026-02-28 13:56:30.148920201 +0000 UTC m=+2404.391240868" lastFinishedPulling="2026-02-28 13:56:30.545467533 +0000 UTC m=+2404.787788190" observedRunningTime="2026-02-28 13:56:30.99006985 +0000 UTC m=+2405.232390527" watchObservedRunningTime="2026-02-28 13:56:30.993000431 +0000 UTC m=+2405.235321088" Feb 28 13:56:37 crc kubenswrapper[4897]: I0228 13:56:37.456523 4897 scope.go:117] "RemoveContainer" containerID="1d7d52cd4a1b910a0cd15b040805df0f538d171dff5f4ea1ec66df8363a6047e" Feb 28 13:56:37 crc kubenswrapper[4897]: E0228 
13:56:37.457869 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:56:41 crc kubenswrapper[4897]: I0228 13:56:41.083095 4897 generic.go:334] "Generic (PLEG): container finished" podID="0b6d041b-3a22-45fa-bd9e-33dea9dc98aa" containerID="ed12574670d23b72dc1dad3ad60c053bcbbb35715622a3592890d1186545f507" exitCode=0 Feb 28 13:56:41 crc kubenswrapper[4897]: I0228 13:56:41.083191 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vxmr9" event={"ID":"0b6d041b-3a22-45fa-bd9e-33dea9dc98aa","Type":"ContainerDied","Data":"ed12574670d23b72dc1dad3ad60c053bcbbb35715622a3592890d1186545f507"} Feb 28 13:56:42 crc kubenswrapper[4897]: I0228 13:56:42.619251 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vxmr9" Feb 28 13:56:42 crc kubenswrapper[4897]: I0228 13:56:42.725063 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b6d041b-3a22-45fa-bd9e-33dea9dc98aa-inventory\") pod \"0b6d041b-3a22-45fa-bd9e-33dea9dc98aa\" (UID: \"0b6d041b-3a22-45fa-bd9e-33dea9dc98aa\") " Feb 28 13:56:42 crc kubenswrapper[4897]: I0228 13:56:42.725218 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b6d041b-3a22-45fa-bd9e-33dea9dc98aa-ssh-key-openstack-edpm-ipam\") pod \"0b6d041b-3a22-45fa-bd9e-33dea9dc98aa\" (UID: \"0b6d041b-3a22-45fa-bd9e-33dea9dc98aa\") " Feb 28 13:56:42 crc kubenswrapper[4897]: I0228 13:56:42.725367 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cxc2\" (UniqueName: \"kubernetes.io/projected/0b6d041b-3a22-45fa-bd9e-33dea9dc98aa-kube-api-access-7cxc2\") pod \"0b6d041b-3a22-45fa-bd9e-33dea9dc98aa\" (UID: \"0b6d041b-3a22-45fa-bd9e-33dea9dc98aa\") " Feb 28 13:56:42 crc kubenswrapper[4897]: I0228 13:56:42.732692 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b6d041b-3a22-45fa-bd9e-33dea9dc98aa-kube-api-access-7cxc2" (OuterVolumeSpecName: "kube-api-access-7cxc2") pod "0b6d041b-3a22-45fa-bd9e-33dea9dc98aa" (UID: "0b6d041b-3a22-45fa-bd9e-33dea9dc98aa"). InnerVolumeSpecName "kube-api-access-7cxc2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:56:42 crc kubenswrapper[4897]: I0228 13:56:42.752143 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b6d041b-3a22-45fa-bd9e-33dea9dc98aa-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0b6d041b-3a22-45fa-bd9e-33dea9dc98aa" (UID: "0b6d041b-3a22-45fa-bd9e-33dea9dc98aa"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:56:42 crc kubenswrapper[4897]: I0228 13:56:42.772749 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b6d041b-3a22-45fa-bd9e-33dea9dc98aa-inventory" (OuterVolumeSpecName: "inventory") pod "0b6d041b-3a22-45fa-bd9e-33dea9dc98aa" (UID: "0b6d041b-3a22-45fa-bd9e-33dea9dc98aa"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:56:42 crc kubenswrapper[4897]: I0228 13:56:42.828443 4897 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b6d041b-3a22-45fa-bd9e-33dea9dc98aa-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 13:56:42 crc kubenswrapper[4897]: I0228 13:56:42.828494 4897 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b6d041b-3a22-45fa-bd9e-33dea9dc98aa-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 13:56:42 crc kubenswrapper[4897]: I0228 13:56:42.828518 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cxc2\" (UniqueName: \"kubernetes.io/projected/0b6d041b-3a22-45fa-bd9e-33dea9dc98aa-kube-api-access-7cxc2\") on node \"crc\" DevicePath \"\"" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.111869 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vxmr9" 
event={"ID":"0b6d041b-3a22-45fa-bd9e-33dea9dc98aa","Type":"ContainerDied","Data":"59f173207ccc4bbb920681e62cb2ec8022704ff5e51ea34a609962a5a456f763"} Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.112372 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59f173207ccc4bbb920681e62cb2ec8022704ff5e51ea34a609962a5a456f763" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.111959 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vxmr9" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.260631 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz"] Feb 28 13:56:43 crc kubenswrapper[4897]: E0228 13:56:43.261144 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b6d041b-3a22-45fa-bd9e-33dea9dc98aa" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.261174 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b6d041b-3a22-45fa-bd9e-33dea9dc98aa" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.261465 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b6d041b-3a22-45fa-bd9e-33dea9dc98aa" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.262353 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.269926 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.270170 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.269991 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.270038 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.270083 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhs8l" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.270553 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.270902 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.271144 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.272131 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz"] Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.439356 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.439451 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.439490 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fdc8cc43-763f-4d3e-8630-a811a93a4157-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.439517 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fdc8cc43-763f-4d3e-8630-a811a93a4157-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.439605 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svnhh\" (UniqueName: 
\"kubernetes.io/projected/fdc8cc43-763f-4d3e-8630-a811a93a4157-kube-api-access-svnhh\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.439634 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.439665 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.439699 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.439783 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.439830 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.439872 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.439923 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.439979 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/fdc8cc43-763f-4d3e-8630-a811a93a4157-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.440105 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fdc8cc43-763f-4d3e-8630-a811a93a4157-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.541977 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svnhh\" (UniqueName: \"kubernetes.io/projected/fdc8cc43-763f-4d3e-8630-a811a93a4157-kube-api-access-svnhh\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.542595 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.542684 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.542999 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.543053 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.543090 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.543117 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-repo-setup-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.543209 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.543265 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fdc8cc43-763f-4d3e-8630-a811a93a4157-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.543415 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fdc8cc43-763f-4d3e-8630-a811a93a4157-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.543560 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz\" (UID: 
\"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.543679 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.543764 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fdc8cc43-763f-4d3e-8630-a811a93a4157-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.543791 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fdc8cc43-763f-4d3e-8630-a811a93a4157-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.549184 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.550228 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fdc8cc43-763f-4d3e-8630-a811a93a4157-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.550768 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.550925 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.551708 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.552416 4897 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.554464 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.554749 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fdc8cc43-763f-4d3e-8630-a811a93a4157-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.555760 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.557075 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/fdc8cc43-763f-4d3e-8630-a811a93a4157-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.558390 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fdc8cc43-763f-4d3e-8630-a811a93a4157-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.558474 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.558869 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.560595 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svnhh\" (UniqueName: \"kubernetes.io/projected/fdc8cc43-763f-4d3e-8630-a811a93a4157-kube-api-access-svnhh\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:43 crc kubenswrapper[4897]: I0228 13:56:43.593780 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:56:44 crc kubenswrapper[4897]: I0228 13:56:44.022180 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz"] Feb 28 13:56:44 crc kubenswrapper[4897]: I0228 13:56:44.125518 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" event={"ID":"fdc8cc43-763f-4d3e-8630-a811a93a4157","Type":"ContainerStarted","Data":"04eb0a0f714bb7ee1852806367759ffd3d771acb395ce3aebcf2781c8e80bb51"} Feb 28 13:56:45 crc kubenswrapper[4897]: I0228 13:56:45.141335 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" event={"ID":"fdc8cc43-763f-4d3e-8630-a811a93a4157","Type":"ContainerStarted","Data":"4a3ddb665ce3bb4a35cd3cf43e0e05f319b40c96b46d5b29b54c496ddf17c62a"} Feb 28 13:56:45 crc kubenswrapper[4897]: I0228 13:56:45.182114 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" podStartSLOduration=1.7760769170000001 podStartE2EDuration="2.182087839s" podCreationTimestamp="2026-02-28 13:56:43 +0000 UTC" firstStartedPulling="2026-02-28 13:56:44.0249547 +0000 UTC m=+2418.267275367" lastFinishedPulling="2026-02-28 13:56:44.430965602 +0000 UTC m=+2418.673286289" observedRunningTime="2026-02-28 13:56:45.17376904 +0000 UTC m=+2419.416089767" watchObservedRunningTime="2026-02-28 13:56:45.182087839 +0000 UTC m=+2419.424408536" Feb 28 13:56:46 crc kubenswrapper[4897]: I0228 13:56:46.337444 4897 
scope.go:117] "RemoveContainer" containerID="bf5deaf29ca942d71c86128e4091cb6b0a1aadcce2842f4c1233af575b9b2323" Feb 28 13:56:49 crc kubenswrapper[4897]: I0228 13:56:49.456766 4897 scope.go:117] "RemoveContainer" containerID="1d7d52cd4a1b910a0cd15b040805df0f538d171dff5f4ea1ec66df8363a6047e" Feb 28 13:56:49 crc kubenswrapper[4897]: E0228 13:56:49.457519 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:57:01 crc kubenswrapper[4897]: I0228 13:57:01.457175 4897 scope.go:117] "RemoveContainer" containerID="1d7d52cd4a1b910a0cd15b040805df0f538d171dff5f4ea1ec66df8363a6047e" Feb 28 13:57:01 crc kubenswrapper[4897]: E0228 13:57:01.458265 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:57:13 crc kubenswrapper[4897]: I0228 13:57:13.457220 4897 scope.go:117] "RemoveContainer" containerID="1d7d52cd4a1b910a0cd15b040805df0f538d171dff5f4ea1ec66df8363a6047e" Feb 28 13:57:13 crc kubenswrapper[4897]: E0228 13:57:13.466863 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:57:24 crc kubenswrapper[4897]: I0228 13:57:24.622948 4897 generic.go:334] "Generic (PLEG): container finished" podID="fdc8cc43-763f-4d3e-8630-a811a93a4157" containerID="4a3ddb665ce3bb4a35cd3cf43e0e05f319b40c96b46d5b29b54c496ddf17c62a" exitCode=0 Feb 28 13:57:24 crc kubenswrapper[4897]: I0228 13:57:24.623120 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" event={"ID":"fdc8cc43-763f-4d3e-8630-a811a93a4157","Type":"ContainerDied","Data":"4a3ddb665ce3bb4a35cd3cf43e0e05f319b40c96b46d5b29b54c496ddf17c62a"} Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.140940 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.328619 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-nova-combined-ca-bundle\") pod \"fdc8cc43-763f-4d3e-8630-a811a93a4157\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.328694 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fdc8cc43-763f-4d3e-8630-a811a93a4157-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"fdc8cc43-763f-4d3e-8630-a811a93a4157\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.328756 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-telemetry-combined-ca-bundle\") pod \"fdc8cc43-763f-4d3e-8630-a811a93a4157\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.328777 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-repo-setup-combined-ca-bundle\") pod \"fdc8cc43-763f-4d3e-8630-a811a93a4157\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.329576 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svnhh\" (UniqueName: \"kubernetes.io/projected/fdc8cc43-763f-4d3e-8630-a811a93a4157-kube-api-access-svnhh\") pod \"fdc8cc43-763f-4d3e-8630-a811a93a4157\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.329652 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-bootstrap-combined-ca-bundle\") pod \"fdc8cc43-763f-4d3e-8630-a811a93a4157\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.329702 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-inventory\") pod \"fdc8cc43-763f-4d3e-8630-a811a93a4157\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.329813 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/fdc8cc43-763f-4d3e-8630-a811a93a4157-openstack-edpm-ipam-ovn-default-certs-0\") pod \"fdc8cc43-763f-4d3e-8630-a811a93a4157\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.329854 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-ssh-key-openstack-edpm-ipam\") pod \"fdc8cc43-763f-4d3e-8630-a811a93a4157\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.329956 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-libvirt-combined-ca-bundle\") pod \"fdc8cc43-763f-4d3e-8630-a811a93a4157\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.330023 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fdc8cc43-763f-4d3e-8630-a811a93a4157-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"fdc8cc43-763f-4d3e-8630-a811a93a4157\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.330054 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-neutron-metadata-combined-ca-bundle\") pod \"fdc8cc43-763f-4d3e-8630-a811a93a4157\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.330089 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-ovn-combined-ca-bundle\") pod \"fdc8cc43-763f-4d3e-8630-a811a93a4157\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.330120 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fdc8cc43-763f-4d3e-8630-a811a93a4157-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"fdc8cc43-763f-4d3e-8630-a811a93a4157\" (UID: \"fdc8cc43-763f-4d3e-8630-a811a93a4157\") " Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.334496 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "fdc8cc43-763f-4d3e-8630-a811a93a4157" (UID: "fdc8cc43-763f-4d3e-8630-a811a93a4157"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.335211 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "fdc8cc43-763f-4d3e-8630-a811a93a4157" (UID: "fdc8cc43-763f-4d3e-8630-a811a93a4157"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.335338 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "fdc8cc43-763f-4d3e-8630-a811a93a4157" (UID: "fdc8cc43-763f-4d3e-8630-a811a93a4157"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.335982 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "fdc8cc43-763f-4d3e-8630-a811a93a4157" (UID: "fdc8cc43-763f-4d3e-8630-a811a93a4157"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.336893 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdc8cc43-763f-4d3e-8630-a811a93a4157-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "fdc8cc43-763f-4d3e-8630-a811a93a4157" (UID: "fdc8cc43-763f-4d3e-8630-a811a93a4157"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.338238 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdc8cc43-763f-4d3e-8630-a811a93a4157-kube-api-access-svnhh" (OuterVolumeSpecName: "kube-api-access-svnhh") pod "fdc8cc43-763f-4d3e-8630-a811a93a4157" (UID: "fdc8cc43-763f-4d3e-8630-a811a93a4157"). InnerVolumeSpecName "kube-api-access-svnhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.339126 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "fdc8cc43-763f-4d3e-8630-a811a93a4157" (UID: "fdc8cc43-763f-4d3e-8630-a811a93a4157"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.339368 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdc8cc43-763f-4d3e-8630-a811a93a4157-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "fdc8cc43-763f-4d3e-8630-a811a93a4157" (UID: "fdc8cc43-763f-4d3e-8630-a811a93a4157"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.340246 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "fdc8cc43-763f-4d3e-8630-a811a93a4157" (UID: "fdc8cc43-763f-4d3e-8630-a811a93a4157"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.341466 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdc8cc43-763f-4d3e-8630-a811a93a4157-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "fdc8cc43-763f-4d3e-8630-a811a93a4157" (UID: "fdc8cc43-763f-4d3e-8630-a811a93a4157"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.343115 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "fdc8cc43-763f-4d3e-8630-a811a93a4157" (UID: "fdc8cc43-763f-4d3e-8630-a811a93a4157"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.347137 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdc8cc43-763f-4d3e-8630-a811a93a4157-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "fdc8cc43-763f-4d3e-8630-a811a93a4157" (UID: "fdc8cc43-763f-4d3e-8630-a811a93a4157"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.383192 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-inventory" (OuterVolumeSpecName: "inventory") pod "fdc8cc43-763f-4d3e-8630-a811a93a4157" (UID: "fdc8cc43-763f-4d3e-8630-a811a93a4157"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.385154 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fdc8cc43-763f-4d3e-8630-a811a93a4157" (UID: "fdc8cc43-763f-4d3e-8630-a811a93a4157"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.433478 4897 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fdc8cc43-763f-4d3e-8630-a811a93a4157-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.433524 4897 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.433541 4897 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.433556 4897 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fdc8cc43-763f-4d3e-8630-a811a93a4157-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.433569 4897 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.433585 4897 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.433602 4897 reconciler_common.go:293] "Volume detached for volume 
\"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fdc8cc43-763f-4d3e-8630-a811a93a4157-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.433622 4897 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.433642 4897 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fdc8cc43-763f-4d3e-8630-a811a93a4157-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.433661 4897 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.433677 4897 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.433691 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svnhh\" (UniqueName: \"kubernetes.io/projected/fdc8cc43-763f-4d3e-8630-a811a93a4157-kube-api-access-svnhh\") on node \"crc\" DevicePath \"\"" Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.433704 4897 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 
28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.433716 4897 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fdc8cc43-763f-4d3e-8630-a811a93a4157-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.467988 4897 scope.go:117] "RemoveContainer" containerID="1d7d52cd4a1b910a0cd15b040805df0f538d171dff5f4ea1ec66df8363a6047e" Feb 28 13:57:26 crc kubenswrapper[4897]: E0228 13:57:26.468570 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.646791 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" event={"ID":"fdc8cc43-763f-4d3e-8630-a811a93a4157","Type":"ContainerDied","Data":"04eb0a0f714bb7ee1852806367759ffd3d771acb395ce3aebcf2781c8e80bb51"} Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.646837 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz" Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.646853 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04eb0a0f714bb7ee1852806367759ffd3d771acb395ce3aebcf2781c8e80bb51" Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.863949 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-vskr8"] Feb 28 13:57:26 crc kubenswrapper[4897]: E0228 13:57:26.864730 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdc8cc43-763f-4d3e-8630-a811a93a4157" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.864755 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdc8cc43-763f-4d3e-8630-a811a93a4157" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.864941 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdc8cc43-763f-4d3e-8630-a811a93a4157" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.865645 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vskr8" Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.868344 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.869283 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhs8l" Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.869536 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.869609 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.869711 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 13:57:26 crc kubenswrapper[4897]: I0228 13:57:26.884850 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-vskr8"] Feb 28 13:57:27 crc kubenswrapper[4897]: I0228 13:57:27.045493 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ccec52af-4ae3-42de-bead-6b28a6e8c739-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vskr8\" (UID: \"ccec52af-4ae3-42de-bead-6b28a6e8c739\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vskr8" Feb 28 13:57:27 crc kubenswrapper[4897]: I0228 13:57:27.046299 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccec52af-4ae3-42de-bead-6b28a6e8c739-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vskr8\" (UID: \"ccec52af-4ae3-42de-bead-6b28a6e8c739\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vskr8" Feb 28 13:57:27 crc kubenswrapper[4897]: I0228 13:57:27.046375 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ccec52af-4ae3-42de-bead-6b28a6e8c739-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vskr8\" (UID: \"ccec52af-4ae3-42de-bead-6b28a6e8c739\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vskr8" Feb 28 13:57:27 crc kubenswrapper[4897]: I0228 13:57:27.046474 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfh2h\" (UniqueName: \"kubernetes.io/projected/ccec52af-4ae3-42de-bead-6b28a6e8c739-kube-api-access-xfh2h\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vskr8\" (UID: \"ccec52af-4ae3-42de-bead-6b28a6e8c739\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vskr8" Feb 28 13:57:27 crc kubenswrapper[4897]: I0228 13:57:27.046502 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccec52af-4ae3-42de-bead-6b28a6e8c739-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vskr8\" (UID: \"ccec52af-4ae3-42de-bead-6b28a6e8c739\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vskr8" Feb 28 13:57:27 crc kubenswrapper[4897]: I0228 13:57:27.147961 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ccec52af-4ae3-42de-bead-6b28a6e8c739-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vskr8\" (UID: \"ccec52af-4ae3-42de-bead-6b28a6e8c739\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vskr8" Feb 28 13:57:27 crc kubenswrapper[4897]: I0228 13:57:27.149336 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccec52af-4ae3-42de-bead-6b28a6e8c739-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vskr8\" (UID: \"ccec52af-4ae3-42de-bead-6b28a6e8c739\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vskr8" Feb 28 13:57:27 crc kubenswrapper[4897]: I0228 13:57:27.149648 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ccec52af-4ae3-42de-bead-6b28a6e8c739-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vskr8\" (UID: \"ccec52af-4ae3-42de-bead-6b28a6e8c739\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vskr8" Feb 28 13:57:27 crc kubenswrapper[4897]: I0228 13:57:27.149509 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ccec52af-4ae3-42de-bead-6b28a6e8c739-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vskr8\" (UID: \"ccec52af-4ae3-42de-bead-6b28a6e8c739\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vskr8" Feb 28 13:57:27 crc kubenswrapper[4897]: I0228 13:57:27.149982 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfh2h\" (UniqueName: \"kubernetes.io/projected/ccec52af-4ae3-42de-bead-6b28a6e8c739-kube-api-access-xfh2h\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vskr8\" (UID: \"ccec52af-4ae3-42de-bead-6b28a6e8c739\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vskr8" Feb 28 13:57:27 crc kubenswrapper[4897]: I0228 13:57:27.150107 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccec52af-4ae3-42de-bead-6b28a6e8c739-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vskr8\" (UID: 
\"ccec52af-4ae3-42de-bead-6b28a6e8c739\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vskr8" Feb 28 13:57:27 crc kubenswrapper[4897]: I0228 13:57:27.167688 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccec52af-4ae3-42de-bead-6b28a6e8c739-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vskr8\" (UID: \"ccec52af-4ae3-42de-bead-6b28a6e8c739\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vskr8" Feb 28 13:57:27 crc kubenswrapper[4897]: I0228 13:57:27.167818 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ccec52af-4ae3-42de-bead-6b28a6e8c739-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vskr8\" (UID: \"ccec52af-4ae3-42de-bead-6b28a6e8c739\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vskr8" Feb 28 13:57:27 crc kubenswrapper[4897]: I0228 13:57:27.167857 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccec52af-4ae3-42de-bead-6b28a6e8c739-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vskr8\" (UID: \"ccec52af-4ae3-42de-bead-6b28a6e8c739\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vskr8" Feb 28 13:57:27 crc kubenswrapper[4897]: I0228 13:57:27.173435 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfh2h\" (UniqueName: \"kubernetes.io/projected/ccec52af-4ae3-42de-bead-6b28a6e8c739-kube-api-access-xfh2h\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vskr8\" (UID: \"ccec52af-4ae3-42de-bead-6b28a6e8c739\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vskr8" Feb 28 13:57:27 crc kubenswrapper[4897]: I0228 13:57:27.186477 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vskr8" Feb 28 13:57:27 crc kubenswrapper[4897]: E0228 13:57:27.632500 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdc8cc43_763f_4d3e_8630_a811a93a4157.slice/crio-04eb0a0f714bb7ee1852806367759ffd3d771acb395ce3aebcf2781c8e80bb51\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdc8cc43_763f_4d3e_8630_a811a93a4157.slice\": RecentStats: unable to find data in memory cache]" Feb 28 13:57:27 crc kubenswrapper[4897]: I0228 13:57:27.764990 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-vskr8"] Feb 28 13:57:27 crc kubenswrapper[4897]: W0228 13:57:27.769445 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccec52af_4ae3_42de_bead_6b28a6e8c739.slice/crio-a63ebca72c91a858eafdcdd3458d85b46657e1867891859a4c2b594299badde5 WatchSource:0}: Error finding container a63ebca72c91a858eafdcdd3458d85b46657e1867891859a4c2b594299badde5: Status 404 returned error can't find the container with id a63ebca72c91a858eafdcdd3458d85b46657e1867891859a4c2b594299badde5 Feb 28 13:57:28 crc kubenswrapper[4897]: I0228 13:57:28.663548 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vskr8" event={"ID":"ccec52af-4ae3-42de-bead-6b28a6e8c739","Type":"ContainerStarted","Data":"5e4f19263c8e891626e97f148cfdbc445db855ae9929ea5989a9fc06ce7dae04"} Feb 28 13:57:28 crc kubenswrapper[4897]: I0228 13:57:28.664062 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vskr8" 
event={"ID":"ccec52af-4ae3-42de-bead-6b28a6e8c739","Type":"ContainerStarted","Data":"a63ebca72c91a858eafdcdd3458d85b46657e1867891859a4c2b594299badde5"} Feb 28 13:57:28 crc kubenswrapper[4897]: I0228 13:57:28.682450 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vskr8" podStartSLOduration=2.285601108 podStartE2EDuration="2.682429162s" podCreationTimestamp="2026-02-28 13:57:26 +0000 UTC" firstStartedPulling="2026-02-28 13:57:27.770986057 +0000 UTC m=+2462.013306714" lastFinishedPulling="2026-02-28 13:57:28.167814081 +0000 UTC m=+2462.410134768" observedRunningTime="2026-02-28 13:57:28.681229769 +0000 UTC m=+2462.923550436" watchObservedRunningTime="2026-02-28 13:57:28.682429162 +0000 UTC m=+2462.924749819" Feb 28 13:57:37 crc kubenswrapper[4897]: I0228 13:57:37.456664 4897 scope.go:117] "RemoveContainer" containerID="1d7d52cd4a1b910a0cd15b040805df0f538d171dff5f4ea1ec66df8363a6047e" Feb 28 13:57:37 crc kubenswrapper[4897]: E0228 13:57:37.458011 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:57:37 crc kubenswrapper[4897]: E0228 13:57:37.902568 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdc8cc43_763f_4d3e_8630_a811a93a4157.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdc8cc43_763f_4d3e_8630_a811a93a4157.slice/crio-04eb0a0f714bb7ee1852806367759ffd3d771acb395ce3aebcf2781c8e80bb51\": 
RecentStats: unable to find data in memory cache]" Feb 28 13:57:48 crc kubenswrapper[4897]: E0228 13:57:48.207520 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdc8cc43_763f_4d3e_8630_a811a93a4157.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdc8cc43_763f_4d3e_8630_a811a93a4157.slice/crio-04eb0a0f714bb7ee1852806367759ffd3d771acb395ce3aebcf2781c8e80bb51\": RecentStats: unable to find data in memory cache]" Feb 28 13:57:50 crc kubenswrapper[4897]: I0228 13:57:50.460621 4897 scope.go:117] "RemoveContainer" containerID="1d7d52cd4a1b910a0cd15b040805df0f538d171dff5f4ea1ec66df8363a6047e" Feb 28 13:57:50 crc kubenswrapper[4897]: E0228 13:57:50.461143 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:57:58 crc kubenswrapper[4897]: E0228 13:57:58.534275 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdc8cc43_763f_4d3e_8630_a811a93a4157.slice/crio-04eb0a0f714bb7ee1852806367759ffd3d771acb395ce3aebcf2781c8e80bb51\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdc8cc43_763f_4d3e_8630_a811a93a4157.slice\": RecentStats: unable to find data in memory cache]" Feb 28 13:58:00 crc kubenswrapper[4897]: I0228 13:58:00.165208 4897 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29538118-p7rgq"] Feb 28 13:58:00 crc kubenswrapper[4897]: I0228 13:58:00.166901 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538118-p7rgq" Feb 28 13:58:00 crc kubenswrapper[4897]: I0228 13:58:00.169204 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 13:58:00 crc kubenswrapper[4897]: I0228 13:58:00.169248 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 13:58:00 crc kubenswrapper[4897]: I0228 13:58:00.169635 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 13:58:00 crc kubenswrapper[4897]: I0228 13:58:00.174415 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538118-p7rgq"] Feb 28 13:58:00 crc kubenswrapper[4897]: I0228 13:58:00.254534 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-852xf\" (UniqueName: \"kubernetes.io/projected/4fe6b94f-0c7a-4e12-8e80-36f817c2063b-kube-api-access-852xf\") pod \"auto-csr-approver-29538118-p7rgq\" (UID: \"4fe6b94f-0c7a-4e12-8e80-36f817c2063b\") " pod="openshift-infra/auto-csr-approver-29538118-p7rgq" Feb 28 13:58:00 crc kubenswrapper[4897]: I0228 13:58:00.355914 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-852xf\" (UniqueName: \"kubernetes.io/projected/4fe6b94f-0c7a-4e12-8e80-36f817c2063b-kube-api-access-852xf\") pod \"auto-csr-approver-29538118-p7rgq\" (UID: \"4fe6b94f-0c7a-4e12-8e80-36f817c2063b\") " pod="openshift-infra/auto-csr-approver-29538118-p7rgq" Feb 28 13:58:00 crc kubenswrapper[4897]: I0228 13:58:00.385703 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-852xf\" (UniqueName: 
\"kubernetes.io/projected/4fe6b94f-0c7a-4e12-8e80-36f817c2063b-kube-api-access-852xf\") pod \"auto-csr-approver-29538118-p7rgq\" (UID: \"4fe6b94f-0c7a-4e12-8e80-36f817c2063b\") " pod="openshift-infra/auto-csr-approver-29538118-p7rgq" Feb 28 13:58:00 crc kubenswrapper[4897]: I0228 13:58:00.524262 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538118-p7rgq" Feb 28 13:58:00 crc kubenswrapper[4897]: I0228 13:58:00.993189 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538118-p7rgq"] Feb 28 13:58:01 crc kubenswrapper[4897]: I0228 13:58:01.049244 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538118-p7rgq" event={"ID":"4fe6b94f-0c7a-4e12-8e80-36f817c2063b","Type":"ContainerStarted","Data":"d740b06266f89695d145cc93a1d722905fb8711bdb5e3ac24fdf7f0b8196a98a"} Feb 28 13:58:01 crc kubenswrapper[4897]: E0228 13:58:01.863729 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 28 13:58:01 crc kubenswrapper[4897]: E0228 13:58:01.863943 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 13:58:01 crc kubenswrapper[4897]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 28 13:58:01 crc kubenswrapper[4897]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-852xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29538118-p7rgq_openshift-infra(4fe6b94f-0c7a-4e12-8e80-36f817c2063b): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 28 13:58:01 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 13:58:01 crc kubenswrapper[4897]: E0228 13:58:01.865390 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29538118-p7rgq" podUID="4fe6b94f-0c7a-4e12-8e80-36f817c2063b" Feb 28 13:58:02 crc kubenswrapper[4897]: E0228 13:58:02.066189 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" 
pod="openshift-infra/auto-csr-approver-29538118-p7rgq" podUID="4fe6b94f-0c7a-4e12-8e80-36f817c2063b" Feb 28 13:58:03 crc kubenswrapper[4897]: I0228 13:58:03.458424 4897 scope.go:117] "RemoveContainer" containerID="1d7d52cd4a1b910a0cd15b040805df0f538d171dff5f4ea1ec66df8363a6047e" Feb 28 13:58:03 crc kubenswrapper[4897]: E0228 13:58:03.458868 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:58:08 crc kubenswrapper[4897]: E0228 13:58:08.859762 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdc8cc43_763f_4d3e_8630_a811a93a4157.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdc8cc43_763f_4d3e_8630_a811a93a4157.slice/crio-04eb0a0f714bb7ee1852806367759ffd3d771acb395ce3aebcf2781c8e80bb51\": RecentStats: unable to find data in memory cache]" Feb 28 13:58:17 crc kubenswrapper[4897]: I0228 13:58:17.255216 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538118-p7rgq" event={"ID":"4fe6b94f-0c7a-4e12-8e80-36f817c2063b","Type":"ContainerStarted","Data":"bbfd4c048c57b7c8a87ca20f062dc93c7e3959adcac959b20578b1df4cb9b8ff"} Feb 28 13:58:17 crc kubenswrapper[4897]: I0228 13:58:17.277167 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29538118-p7rgq" podStartSLOduration=1.610092141 podStartE2EDuration="17.277149659s" podCreationTimestamp="2026-02-28 13:58:00 +0000 UTC" 
firstStartedPulling="2026-02-28 13:58:01.002335095 +0000 UTC m=+2495.244655792" lastFinishedPulling="2026-02-28 13:58:16.669392643 +0000 UTC m=+2510.911713310" observedRunningTime="2026-02-28 13:58:17.271602095 +0000 UTC m=+2511.513922752" watchObservedRunningTime="2026-02-28 13:58:17.277149659 +0000 UTC m=+2511.519470316" Feb 28 13:58:18 crc kubenswrapper[4897]: I0228 13:58:18.269853 4897 generic.go:334] "Generic (PLEG): container finished" podID="4fe6b94f-0c7a-4e12-8e80-36f817c2063b" containerID="bbfd4c048c57b7c8a87ca20f062dc93c7e3959adcac959b20578b1df4cb9b8ff" exitCode=0 Feb 28 13:58:18 crc kubenswrapper[4897]: I0228 13:58:18.269947 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538118-p7rgq" event={"ID":"4fe6b94f-0c7a-4e12-8e80-36f817c2063b","Type":"ContainerDied","Data":"bbfd4c048c57b7c8a87ca20f062dc93c7e3959adcac959b20578b1df4cb9b8ff"} Feb 28 13:58:18 crc kubenswrapper[4897]: I0228 13:58:18.456399 4897 scope.go:117] "RemoveContainer" containerID="1d7d52cd4a1b910a0cd15b040805df0f538d171dff5f4ea1ec66df8363a6047e" Feb 28 13:58:18 crc kubenswrapper[4897]: E0228 13:58:18.457233 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:58:19 crc kubenswrapper[4897]: E0228 13:58:19.121295 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdc8cc43_763f_4d3e_8630_a811a93a4157.slice/crio-04eb0a0f714bb7ee1852806367759ffd3d771acb395ce3aebcf2781c8e80bb51\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdc8cc43_763f_4d3e_8630_a811a93a4157.slice\": RecentStats: unable to find data in memory cache]" Feb 28 13:58:19 crc kubenswrapper[4897]: I0228 13:58:19.644720 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538118-p7rgq" Feb 28 13:58:19 crc kubenswrapper[4897]: I0228 13:58:19.705730 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-852xf\" (UniqueName: \"kubernetes.io/projected/4fe6b94f-0c7a-4e12-8e80-36f817c2063b-kube-api-access-852xf\") pod \"4fe6b94f-0c7a-4e12-8e80-36f817c2063b\" (UID: \"4fe6b94f-0c7a-4e12-8e80-36f817c2063b\") " Feb 28 13:58:19 crc kubenswrapper[4897]: I0228 13:58:19.715098 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fe6b94f-0c7a-4e12-8e80-36f817c2063b-kube-api-access-852xf" (OuterVolumeSpecName: "kube-api-access-852xf") pod "4fe6b94f-0c7a-4e12-8e80-36f817c2063b" (UID: "4fe6b94f-0c7a-4e12-8e80-36f817c2063b"). InnerVolumeSpecName "kube-api-access-852xf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:58:19 crc kubenswrapper[4897]: I0228 13:58:19.808130 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-852xf\" (UniqueName: \"kubernetes.io/projected/4fe6b94f-0c7a-4e12-8e80-36f817c2063b-kube-api-access-852xf\") on node \"crc\" DevicePath \"\"" Feb 28 13:58:20 crc kubenswrapper[4897]: I0228 13:58:20.296851 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538118-p7rgq" event={"ID":"4fe6b94f-0c7a-4e12-8e80-36f817c2063b","Type":"ContainerDied","Data":"d740b06266f89695d145cc93a1d722905fb8711bdb5e3ac24fdf7f0b8196a98a"} Feb 28 13:58:20 crc kubenswrapper[4897]: I0228 13:58:20.297268 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d740b06266f89695d145cc93a1d722905fb8711bdb5e3ac24fdf7f0b8196a98a" Feb 28 13:58:20 crc kubenswrapper[4897]: I0228 13:58:20.297014 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538118-p7rgq" Feb 28 13:58:20 crc kubenswrapper[4897]: I0228 13:58:20.357552 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538112-f95dp"] Feb 28 13:58:20 crc kubenswrapper[4897]: I0228 13:58:20.368355 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538112-f95dp"] Feb 28 13:58:20 crc kubenswrapper[4897]: I0228 13:58:20.473266 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22a79300-52db-4f33-b565-125005a95021" path="/var/lib/kubelet/pods/22a79300-52db-4f33-b565-125005a95021/volumes" Feb 28 13:58:30 crc kubenswrapper[4897]: I0228 13:58:30.457406 4897 scope.go:117] "RemoveContainer" containerID="1d7d52cd4a1b910a0cd15b040805df0f538d171dff5f4ea1ec66df8363a6047e" Feb 28 13:58:30 crc kubenswrapper[4897]: E0228 13:58:30.458527 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:58:38 crc kubenswrapper[4897]: I0228 13:58:38.521623 4897 generic.go:334] "Generic (PLEG): container finished" podID="ccec52af-4ae3-42de-bead-6b28a6e8c739" containerID="5e4f19263c8e891626e97f148cfdbc445db855ae9929ea5989a9fc06ce7dae04" exitCode=0 Feb 28 13:58:38 crc kubenswrapper[4897]: I0228 13:58:38.521735 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vskr8" event={"ID":"ccec52af-4ae3-42de-bead-6b28a6e8c739","Type":"ContainerDied","Data":"5e4f19263c8e891626e97f148cfdbc445db855ae9929ea5989a9fc06ce7dae04"} Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.012020 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vskr8" Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.040846 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfh2h\" (UniqueName: \"kubernetes.io/projected/ccec52af-4ae3-42de-bead-6b28a6e8c739-kube-api-access-xfh2h\") pod \"ccec52af-4ae3-42de-bead-6b28a6e8c739\" (UID: \"ccec52af-4ae3-42de-bead-6b28a6e8c739\") " Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.040952 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ccec52af-4ae3-42de-bead-6b28a6e8c739-ovncontroller-config-0\") pod \"ccec52af-4ae3-42de-bead-6b28a6e8c739\" (UID: \"ccec52af-4ae3-42de-bead-6b28a6e8c739\") " Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.041001 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ccec52af-4ae3-42de-bead-6b28a6e8c739-ssh-key-openstack-edpm-ipam\") pod \"ccec52af-4ae3-42de-bead-6b28a6e8c739\" (UID: \"ccec52af-4ae3-42de-bead-6b28a6e8c739\") " Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.041065 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccec52af-4ae3-42de-bead-6b28a6e8c739-inventory\") pod \"ccec52af-4ae3-42de-bead-6b28a6e8c739\" (UID: \"ccec52af-4ae3-42de-bead-6b28a6e8c739\") " Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.041130 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccec52af-4ae3-42de-bead-6b28a6e8c739-ovn-combined-ca-bundle\") pod \"ccec52af-4ae3-42de-bead-6b28a6e8c739\" (UID: \"ccec52af-4ae3-42de-bead-6b28a6e8c739\") " Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.048798 4897 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccec52af-4ae3-42de-bead-6b28a6e8c739-kube-api-access-xfh2h" (OuterVolumeSpecName: "kube-api-access-xfh2h") pod "ccec52af-4ae3-42de-bead-6b28a6e8c739" (UID: "ccec52af-4ae3-42de-bead-6b28a6e8c739"). InnerVolumeSpecName "kube-api-access-xfh2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.050858 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccec52af-4ae3-42de-bead-6b28a6e8c739-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "ccec52af-4ae3-42de-bead-6b28a6e8c739" (UID: "ccec52af-4ae3-42de-bead-6b28a6e8c739"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.083390 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccec52af-4ae3-42de-bead-6b28a6e8c739-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ccec52af-4ae3-42de-bead-6b28a6e8c739" (UID: "ccec52af-4ae3-42de-bead-6b28a6e8c739"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.085457 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccec52af-4ae3-42de-bead-6b28a6e8c739-inventory" (OuterVolumeSpecName: "inventory") pod "ccec52af-4ae3-42de-bead-6b28a6e8c739" (UID: "ccec52af-4ae3-42de-bead-6b28a6e8c739"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.091107 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccec52af-4ae3-42de-bead-6b28a6e8c739-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "ccec52af-4ae3-42de-bead-6b28a6e8c739" (UID: "ccec52af-4ae3-42de-bead-6b28a6e8c739"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.146557 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfh2h\" (UniqueName: \"kubernetes.io/projected/ccec52af-4ae3-42de-bead-6b28a6e8c739-kube-api-access-xfh2h\") on node \"crc\" DevicePath \"\"" Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.146582 4897 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ccec52af-4ae3-42de-bead-6b28a6e8c739-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.146591 4897 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ccec52af-4ae3-42de-bead-6b28a6e8c739-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.146600 4897 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccec52af-4ae3-42de-bead-6b28a6e8c739-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.146624 4897 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccec52af-4ae3-42de-bead-6b28a6e8c739-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.547744 4897 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vskr8" event={"ID":"ccec52af-4ae3-42de-bead-6b28a6e8c739","Type":"ContainerDied","Data":"a63ebca72c91a858eafdcdd3458d85b46657e1867891859a4c2b594299badde5"} Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.547803 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a63ebca72c91a858eafdcdd3458d85b46657e1867891859a4c2b594299badde5" Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.547886 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vskr8" Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.696103 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm"] Feb 28 13:58:40 crc kubenswrapper[4897]: E0228 13:58:40.704775 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccec52af-4ae3-42de-bead-6b28a6e8c739" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.704813 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccec52af-4ae3-42de-bead-6b28a6e8c739" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 28 13:58:40 crc kubenswrapper[4897]: E0228 13:58:40.704834 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fe6b94f-0c7a-4e12-8e80-36f817c2063b" containerName="oc" Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.704842 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fe6b94f-0c7a-4e12-8e80-36f817c2063b" containerName="oc" Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.705019 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fe6b94f-0c7a-4e12-8e80-36f817c2063b" containerName="oc" Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.705044 4897 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ccec52af-4ae3-42de-bead-6b28a6e8c739" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.705651 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm"] Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.705734 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm" Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.714119 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhs8l" Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.714196 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.714359 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.714573 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.715001 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.715238 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.760351 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m77fj\" (UniqueName: \"kubernetes.io/projected/e41a407d-96e5-4c5d-8890-fe4cb2f59a0f-kube-api-access-m77fj\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm\" (UID: \"e41a407d-96e5-4c5d-8890-fe4cb2f59a0f\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm" Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.760634 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e41a407d-96e5-4c5d-8890-fe4cb2f59a0f-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm\" (UID: \"e41a407d-96e5-4c5d-8890-fe4cb2f59a0f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm" Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.760709 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e41a407d-96e5-4c5d-8890-fe4cb2f59a0f-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm\" (UID: \"e41a407d-96e5-4c5d-8890-fe4cb2f59a0f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm" Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.760731 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e41a407d-96e5-4c5d-8890-fe4cb2f59a0f-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm\" (UID: \"e41a407d-96e5-4c5d-8890-fe4cb2f59a0f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm" Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.760799 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e41a407d-96e5-4c5d-8890-fe4cb2f59a0f-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm\" (UID: \"e41a407d-96e5-4c5d-8890-fe4cb2f59a0f\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm" Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.760825 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e41a407d-96e5-4c5d-8890-fe4cb2f59a0f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm\" (UID: \"e41a407d-96e5-4c5d-8890-fe4cb2f59a0f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm" Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.862738 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m77fj\" (UniqueName: \"kubernetes.io/projected/e41a407d-96e5-4c5d-8890-fe4cb2f59a0f-kube-api-access-m77fj\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm\" (UID: \"e41a407d-96e5-4c5d-8890-fe4cb2f59a0f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm" Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.862792 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e41a407d-96e5-4c5d-8890-fe4cb2f59a0f-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm\" (UID: \"e41a407d-96e5-4c5d-8890-fe4cb2f59a0f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm" Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.862859 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e41a407d-96e5-4c5d-8890-fe4cb2f59a0f-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm\" (UID: \"e41a407d-96e5-4c5d-8890-fe4cb2f59a0f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm" Feb 28 13:58:40 crc 
kubenswrapper[4897]: I0228 13:58:40.862897 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e41a407d-96e5-4c5d-8890-fe4cb2f59a0f-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm\" (UID: \"e41a407d-96e5-4c5d-8890-fe4cb2f59a0f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm" Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.862979 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e41a407d-96e5-4c5d-8890-fe4cb2f59a0f-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm\" (UID: \"e41a407d-96e5-4c5d-8890-fe4cb2f59a0f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm" Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.863010 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e41a407d-96e5-4c5d-8890-fe4cb2f59a0f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm\" (UID: \"e41a407d-96e5-4c5d-8890-fe4cb2f59a0f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm" Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.869731 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e41a407d-96e5-4c5d-8890-fe4cb2f59a0f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm\" (UID: \"e41a407d-96e5-4c5d-8890-fe4cb2f59a0f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm" Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 
13:58:40.869923 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e41a407d-96e5-4c5d-8890-fe4cb2f59a0f-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm\" (UID: \"e41a407d-96e5-4c5d-8890-fe4cb2f59a0f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm" Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.870345 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e41a407d-96e5-4c5d-8890-fe4cb2f59a0f-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm\" (UID: \"e41a407d-96e5-4c5d-8890-fe4cb2f59a0f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm" Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.872212 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e41a407d-96e5-4c5d-8890-fe4cb2f59a0f-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm\" (UID: \"e41a407d-96e5-4c5d-8890-fe4cb2f59a0f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm" Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.874419 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e41a407d-96e5-4c5d-8890-fe4cb2f59a0f-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm\" (UID: \"e41a407d-96e5-4c5d-8890-fe4cb2f59a0f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm" Feb 28 13:58:40 crc kubenswrapper[4897]: I0228 13:58:40.892900 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m77fj\" (UniqueName: 
\"kubernetes.io/projected/e41a407d-96e5-4c5d-8890-fe4cb2f59a0f-kube-api-access-m77fj\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm\" (UID: \"e41a407d-96e5-4c5d-8890-fe4cb2f59a0f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm" Feb 28 13:58:41 crc kubenswrapper[4897]: I0228 13:58:41.037917 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm" Feb 28 13:58:41 crc kubenswrapper[4897]: I0228 13:58:41.616734 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm"] Feb 28 13:58:42 crc kubenswrapper[4897]: I0228 13:58:42.587085 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm" event={"ID":"e41a407d-96e5-4c5d-8890-fe4cb2f59a0f","Type":"ContainerStarted","Data":"256bd5e07b2bca963fe4b30f526c01e31c32ffcfc939207246ff3180dcc46a2b"} Feb 28 13:58:42 crc kubenswrapper[4897]: I0228 13:58:42.587531 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm" event={"ID":"e41a407d-96e5-4c5d-8890-fe4cb2f59a0f","Type":"ContainerStarted","Data":"8b740df0cb1b7004ac257fb2152a78972f6b55d894b4effece8e39945eb7c3b1"} Feb 28 13:58:42 crc kubenswrapper[4897]: I0228 13:58:42.619422 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm" podStartSLOduration=2.213576609 podStartE2EDuration="2.619393472s" podCreationTimestamp="2026-02-28 13:58:40 +0000 UTC" firstStartedPulling="2026-02-28 13:58:41.672794445 +0000 UTC m=+2535.915115102" lastFinishedPulling="2026-02-28 13:58:42.078611308 +0000 UTC m=+2536.320931965" observedRunningTime="2026-02-28 13:58:42.615443403 +0000 UTC m=+2536.857764090" watchObservedRunningTime="2026-02-28 
13:58:42.619393472 +0000 UTC m=+2536.861714169" Feb 28 13:58:43 crc kubenswrapper[4897]: I0228 13:58:43.457014 4897 scope.go:117] "RemoveContainer" containerID="1d7d52cd4a1b910a0cd15b040805df0f538d171dff5f4ea1ec66df8363a6047e" Feb 28 13:58:43 crc kubenswrapper[4897]: E0228 13:58:43.457526 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:58:46 crc kubenswrapper[4897]: I0228 13:58:46.457158 4897 scope.go:117] "RemoveContainer" containerID="0d878fc6eb4f3e478512721e3560bf7a2bd1a288cfe668810dd17fb860df10f2" Feb 28 13:58:54 crc kubenswrapper[4897]: I0228 13:58:54.456731 4897 scope.go:117] "RemoveContainer" containerID="1d7d52cd4a1b910a0cd15b040805df0f538d171dff5f4ea1ec66df8363a6047e" Feb 28 13:58:54 crc kubenswrapper[4897]: E0228 13:58:54.457918 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:59:08 crc kubenswrapper[4897]: I0228 13:59:08.456488 4897 scope.go:117] "RemoveContainer" containerID="1d7d52cd4a1b910a0cd15b040805df0f538d171dff5f4ea1ec66df8363a6047e" Feb 28 13:59:08 crc kubenswrapper[4897]: E0228 13:59:08.457262 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:59:20 crc kubenswrapper[4897]: I0228 13:59:20.457514 4897 scope.go:117] "RemoveContainer" containerID="1d7d52cd4a1b910a0cd15b040805df0f538d171dff5f4ea1ec66df8363a6047e" Feb 28 13:59:20 crc kubenswrapper[4897]: E0228 13:59:20.459416 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 13:59:35 crc kubenswrapper[4897]: I0228 13:59:35.457534 4897 scope.go:117] "RemoveContainer" containerID="1d7d52cd4a1b910a0cd15b040805df0f538d171dff5f4ea1ec66df8363a6047e" Feb 28 13:59:36 crc kubenswrapper[4897]: I0228 13:59:36.183994 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerStarted","Data":"302c29decadd860cf90d5f15ef4f4562333e667b99af9bcb674b496f4e17ed16"} Feb 28 13:59:37 crc kubenswrapper[4897]: I0228 13:59:37.197093 4897 generic.go:334] "Generic (PLEG): container finished" podID="e41a407d-96e5-4c5d-8890-fe4cb2f59a0f" containerID="256bd5e07b2bca963fe4b30f526c01e31c32ffcfc939207246ff3180dcc46a2b" exitCode=0 Feb 28 13:59:37 crc kubenswrapper[4897]: I0228 13:59:37.197218 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm" 
event={"ID":"e41a407d-96e5-4c5d-8890-fe4cb2f59a0f","Type":"ContainerDied","Data":"256bd5e07b2bca963fe4b30f526c01e31c32ffcfc939207246ff3180dcc46a2b"} Feb 28 13:59:38 crc kubenswrapper[4897]: I0228 13:59:38.749103 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm" Feb 28 13:59:38 crc kubenswrapper[4897]: I0228 13:59:38.911396 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e41a407d-96e5-4c5d-8890-fe4cb2f59a0f-inventory\") pod \"e41a407d-96e5-4c5d-8890-fe4cb2f59a0f\" (UID: \"e41a407d-96e5-4c5d-8890-fe4cb2f59a0f\") " Feb 28 13:59:38 crc kubenswrapper[4897]: I0228 13:59:38.911461 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e41a407d-96e5-4c5d-8890-fe4cb2f59a0f-neutron-metadata-combined-ca-bundle\") pod \"e41a407d-96e5-4c5d-8890-fe4cb2f59a0f\" (UID: \"e41a407d-96e5-4c5d-8890-fe4cb2f59a0f\") " Feb 28 13:59:38 crc kubenswrapper[4897]: I0228 13:59:38.911503 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e41a407d-96e5-4c5d-8890-fe4cb2f59a0f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"e41a407d-96e5-4c5d-8890-fe4cb2f59a0f\" (UID: \"e41a407d-96e5-4c5d-8890-fe4cb2f59a0f\") " Feb 28 13:59:38 crc kubenswrapper[4897]: I0228 13:59:38.911543 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e41a407d-96e5-4c5d-8890-fe4cb2f59a0f-nova-metadata-neutron-config-0\") pod \"e41a407d-96e5-4c5d-8890-fe4cb2f59a0f\" (UID: \"e41a407d-96e5-4c5d-8890-fe4cb2f59a0f\") " Feb 28 13:59:38 crc kubenswrapper[4897]: I0228 13:59:38.911660 4897 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e41a407d-96e5-4c5d-8890-fe4cb2f59a0f-ssh-key-openstack-edpm-ipam\") pod \"e41a407d-96e5-4c5d-8890-fe4cb2f59a0f\" (UID: \"e41a407d-96e5-4c5d-8890-fe4cb2f59a0f\") " Feb 28 13:59:38 crc kubenswrapper[4897]: I0228 13:59:38.911735 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m77fj\" (UniqueName: \"kubernetes.io/projected/e41a407d-96e5-4c5d-8890-fe4cb2f59a0f-kube-api-access-m77fj\") pod \"e41a407d-96e5-4c5d-8890-fe4cb2f59a0f\" (UID: \"e41a407d-96e5-4c5d-8890-fe4cb2f59a0f\") " Feb 28 13:59:38 crc kubenswrapper[4897]: I0228 13:59:38.919806 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e41a407d-96e5-4c5d-8890-fe4cb2f59a0f-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "e41a407d-96e5-4c5d-8890-fe4cb2f59a0f" (UID: "e41a407d-96e5-4c5d-8890-fe4cb2f59a0f"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:59:38 crc kubenswrapper[4897]: I0228 13:59:38.920555 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e41a407d-96e5-4c5d-8890-fe4cb2f59a0f-kube-api-access-m77fj" (OuterVolumeSpecName: "kube-api-access-m77fj") pod "e41a407d-96e5-4c5d-8890-fe4cb2f59a0f" (UID: "e41a407d-96e5-4c5d-8890-fe4cb2f59a0f"). InnerVolumeSpecName "kube-api-access-m77fj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 13:59:38 crc kubenswrapper[4897]: I0228 13:59:38.959651 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e41a407d-96e5-4c5d-8890-fe4cb2f59a0f-inventory" (OuterVolumeSpecName: "inventory") pod "e41a407d-96e5-4c5d-8890-fe4cb2f59a0f" (UID: "e41a407d-96e5-4c5d-8890-fe4cb2f59a0f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:59:38 crc kubenswrapper[4897]: I0228 13:59:38.973801 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e41a407d-96e5-4c5d-8890-fe4cb2f59a0f-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "e41a407d-96e5-4c5d-8890-fe4cb2f59a0f" (UID: "e41a407d-96e5-4c5d-8890-fe4cb2f59a0f"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:59:38 crc kubenswrapper[4897]: I0228 13:59:38.975778 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e41a407d-96e5-4c5d-8890-fe4cb2f59a0f-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "e41a407d-96e5-4c5d-8890-fe4cb2f59a0f" (UID: "e41a407d-96e5-4c5d-8890-fe4cb2f59a0f"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:59:38 crc kubenswrapper[4897]: I0228 13:59:38.976178 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e41a407d-96e5-4c5d-8890-fe4cb2f59a0f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e41a407d-96e5-4c5d-8890-fe4cb2f59a0f" (UID: "e41a407d-96e5-4c5d-8890-fe4cb2f59a0f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 13:59:39 crc kubenswrapper[4897]: I0228 13:59:39.014936 4897 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e41a407d-96e5-4c5d-8890-fe4cb2f59a0f-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 13:59:39 crc kubenswrapper[4897]: I0228 13:59:39.014988 4897 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e41a407d-96e5-4c5d-8890-fe4cb2f59a0f-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 13:59:39 crc kubenswrapper[4897]: I0228 13:59:39.015009 4897 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e41a407d-96e5-4c5d-8890-fe4cb2f59a0f-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 28 13:59:39 crc kubenswrapper[4897]: I0228 13:59:39.015030 4897 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e41a407d-96e5-4c5d-8890-fe4cb2f59a0f-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 28 13:59:39 crc kubenswrapper[4897]: I0228 13:59:39.015049 4897 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e41a407d-96e5-4c5d-8890-fe4cb2f59a0f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 13:59:39 crc kubenswrapper[4897]: I0228 13:59:39.015068 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m77fj\" (UniqueName: \"kubernetes.io/projected/e41a407d-96e5-4c5d-8890-fe4cb2f59a0f-kube-api-access-m77fj\") on node \"crc\" DevicePath \"\"" Feb 28 13:59:39 crc kubenswrapper[4897]: I0228 13:59:39.227240 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm" event={"ID":"e41a407d-96e5-4c5d-8890-fe4cb2f59a0f","Type":"ContainerDied","Data":"8b740df0cb1b7004ac257fb2152a78972f6b55d894b4effece8e39945eb7c3b1"} Feb 28 13:59:39 crc kubenswrapper[4897]: I0228 13:59:39.227281 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b740df0cb1b7004ac257fb2152a78972f6b55d894b4effece8e39945eb7c3b1" Feb 28 13:59:39 crc kubenswrapper[4897]: I0228 13:59:39.227387 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm" Feb 28 13:59:39 crc kubenswrapper[4897]: I0228 13:59:39.447512 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-497mc"] Feb 28 13:59:39 crc kubenswrapper[4897]: E0228 13:59:39.448019 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e41a407d-96e5-4c5d-8890-fe4cb2f59a0f" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 28 13:59:39 crc kubenswrapper[4897]: I0228 13:59:39.448041 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="e41a407d-96e5-4c5d-8890-fe4cb2f59a0f" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 28 13:59:39 crc kubenswrapper[4897]: I0228 13:59:39.448288 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="e41a407d-96e5-4c5d-8890-fe4cb2f59a0f" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 28 13:59:39 crc kubenswrapper[4897]: I0228 13:59:39.449186 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-497mc" Feb 28 13:59:39 crc kubenswrapper[4897]: I0228 13:59:39.455516 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 13:59:39 crc kubenswrapper[4897]: I0228 13:59:39.455665 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 13:59:39 crc kubenswrapper[4897]: I0228 13:59:39.455879 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 13:59:39 crc kubenswrapper[4897]: I0228 13:59:39.456012 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 28 13:59:39 crc kubenswrapper[4897]: I0228 13:59:39.456134 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhs8l" Feb 28 13:59:39 crc kubenswrapper[4897]: I0228 13:59:39.485621 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-497mc"] Feb 28 13:59:39 crc kubenswrapper[4897]: I0228 13:59:39.528868 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jqzw\" (UniqueName: \"kubernetes.io/projected/ff698979-3e20-4b13-9cae-2b0d353cae40-kube-api-access-2jqzw\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-497mc\" (UID: \"ff698979-3e20-4b13-9cae-2b0d353cae40\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-497mc" Feb 28 13:59:39 crc kubenswrapper[4897]: I0228 13:59:39.529201 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff698979-3e20-4b13-9cae-2b0d353cae40-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-497mc\" (UID: 
\"ff698979-3e20-4b13-9cae-2b0d353cae40\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-497mc" Feb 28 13:59:39 crc kubenswrapper[4897]: I0228 13:59:39.529277 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ff698979-3e20-4b13-9cae-2b0d353cae40-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-497mc\" (UID: \"ff698979-3e20-4b13-9cae-2b0d353cae40\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-497mc" Feb 28 13:59:39 crc kubenswrapper[4897]: I0228 13:59:39.532429 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff698979-3e20-4b13-9cae-2b0d353cae40-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-497mc\" (UID: \"ff698979-3e20-4b13-9cae-2b0d353cae40\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-497mc" Feb 28 13:59:39 crc kubenswrapper[4897]: I0228 13:59:39.532739 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff698979-3e20-4b13-9cae-2b0d353cae40-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-497mc\" (UID: \"ff698979-3e20-4b13-9cae-2b0d353cae40\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-497mc" Feb 28 13:59:39 crc kubenswrapper[4897]: I0228 13:59:39.634978 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jqzw\" (UniqueName: \"kubernetes.io/projected/ff698979-3e20-4b13-9cae-2b0d353cae40-kube-api-access-2jqzw\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-497mc\" (UID: \"ff698979-3e20-4b13-9cae-2b0d353cae40\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-497mc" Feb 28 13:59:39 crc kubenswrapper[4897]: I0228 13:59:39.635606 4897 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff698979-3e20-4b13-9cae-2b0d353cae40-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-497mc\" (UID: \"ff698979-3e20-4b13-9cae-2b0d353cae40\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-497mc" Feb 28 13:59:39 crc kubenswrapper[4897]: I0228 13:59:39.635715 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ff698979-3e20-4b13-9cae-2b0d353cae40-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-497mc\" (UID: \"ff698979-3e20-4b13-9cae-2b0d353cae40\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-497mc" Feb 28 13:59:39 crc kubenswrapper[4897]: I0228 13:59:39.635855 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff698979-3e20-4b13-9cae-2b0d353cae40-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-497mc\" (UID: \"ff698979-3e20-4b13-9cae-2b0d353cae40\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-497mc" Feb 28 13:59:39 crc kubenswrapper[4897]: I0228 13:59:39.636038 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff698979-3e20-4b13-9cae-2b0d353cae40-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-497mc\" (UID: \"ff698979-3e20-4b13-9cae-2b0d353cae40\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-497mc" Feb 28 13:59:39 crc kubenswrapper[4897]: I0228 13:59:39.640650 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ff698979-3e20-4b13-9cae-2b0d353cae40-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-497mc\" 
(UID: \"ff698979-3e20-4b13-9cae-2b0d353cae40\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-497mc" Feb 28 13:59:39 crc kubenswrapper[4897]: I0228 13:59:39.641719 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff698979-3e20-4b13-9cae-2b0d353cae40-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-497mc\" (UID: \"ff698979-3e20-4b13-9cae-2b0d353cae40\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-497mc" Feb 28 13:59:39 crc kubenswrapper[4897]: I0228 13:59:39.641974 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff698979-3e20-4b13-9cae-2b0d353cae40-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-497mc\" (UID: \"ff698979-3e20-4b13-9cae-2b0d353cae40\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-497mc" Feb 28 13:59:39 crc kubenswrapper[4897]: I0228 13:59:39.644750 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff698979-3e20-4b13-9cae-2b0d353cae40-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-497mc\" (UID: \"ff698979-3e20-4b13-9cae-2b0d353cae40\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-497mc" Feb 28 13:59:39 crc kubenswrapper[4897]: I0228 13:59:39.668938 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jqzw\" (UniqueName: \"kubernetes.io/projected/ff698979-3e20-4b13-9cae-2b0d353cae40-kube-api-access-2jqzw\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-497mc\" (UID: \"ff698979-3e20-4b13-9cae-2b0d353cae40\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-497mc" Feb 28 13:59:39 crc kubenswrapper[4897]: I0228 13:59:39.788377 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-497mc" Feb 28 13:59:40 crc kubenswrapper[4897]: I0228 13:59:40.413764 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-497mc"] Feb 28 13:59:40 crc kubenswrapper[4897]: W0228 13:59:40.420521 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff698979_3e20_4b13_9cae_2b0d353cae40.slice/crio-17597fd44110a1932146b771085bdcf32bb8a8180f1d7b3c03de4dc36d8bce45 WatchSource:0}: Error finding container 17597fd44110a1932146b771085bdcf32bb8a8180f1d7b3c03de4dc36d8bce45: Status 404 returned error can't find the container with id 17597fd44110a1932146b771085bdcf32bb8a8180f1d7b3c03de4dc36d8bce45 Feb 28 13:59:41 crc kubenswrapper[4897]: I0228 13:59:41.267550 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-497mc" event={"ID":"ff698979-3e20-4b13-9cae-2b0d353cae40","Type":"ContainerStarted","Data":"72a13b7b87f9a1b45687272102c38655858d6c6d73524124aeacd06fbefb44df"} Feb 28 13:59:41 crc kubenswrapper[4897]: I0228 13:59:41.270393 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-497mc" event={"ID":"ff698979-3e20-4b13-9cae-2b0d353cae40","Type":"ContainerStarted","Data":"17597fd44110a1932146b771085bdcf32bb8a8180f1d7b3c03de4dc36d8bce45"} Feb 28 13:59:41 crc kubenswrapper[4897]: I0228 13:59:41.298041 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-497mc" podStartSLOduration=1.906506692 podStartE2EDuration="2.298018869s" podCreationTimestamp="2026-02-28 13:59:39 +0000 UTC" firstStartedPulling="2026-02-28 13:59:40.424229185 +0000 UTC m=+2594.666549872" lastFinishedPulling="2026-02-28 13:59:40.815741352 +0000 UTC m=+2595.058062049" 
observedRunningTime="2026-02-28 13:59:41.286746058 +0000 UTC m=+2595.529066765" watchObservedRunningTime="2026-02-28 13:59:41.298018869 +0000 UTC m=+2595.540339536" Feb 28 14:00:00 crc kubenswrapper[4897]: I0228 14:00:00.179388 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538120-qpdgq"] Feb 28 14:00:00 crc kubenswrapper[4897]: I0228 14:00:00.183599 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538120-qpdgq" Feb 28 14:00:00 crc kubenswrapper[4897]: I0228 14:00:00.187370 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 14:00:00 crc kubenswrapper[4897]: I0228 14:00:00.188995 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 14:00:00 crc kubenswrapper[4897]: I0228 14:00:00.190704 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 14:00:00 crc kubenswrapper[4897]: I0228 14:00:00.193300 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538120-qpdgq"] Feb 28 14:00:00 crc kubenswrapper[4897]: I0228 14:00:00.264684 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29538120-qt7b9"] Feb 28 14:00:00 crc kubenswrapper[4897]: I0228 14:00:00.266466 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29538120-qt7b9" Feb 28 14:00:00 crc kubenswrapper[4897]: I0228 14:00:00.268657 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 28 14:00:00 crc kubenswrapper[4897]: I0228 14:00:00.268886 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 28 14:00:00 crc kubenswrapper[4897]: I0228 14:00:00.288628 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29538120-qt7b9"] Feb 28 14:00:00 crc kubenswrapper[4897]: I0228 14:00:00.296570 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx2vs\" (UniqueName: \"kubernetes.io/projected/7d8f3a5c-cce6-4eea-b2de-293d5f8d9288-kube-api-access-hx2vs\") pod \"auto-csr-approver-29538120-qpdgq\" (UID: \"7d8f3a5c-cce6-4eea-b2de-293d5f8d9288\") " pod="openshift-infra/auto-csr-approver-29538120-qpdgq" Feb 28 14:00:00 crc kubenswrapper[4897]: I0228 14:00:00.398573 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e-secret-volume\") pod \"collect-profiles-29538120-qt7b9\" (UID: \"d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538120-qt7b9" Feb 28 14:00:00 crc kubenswrapper[4897]: I0228 14:00:00.398701 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx2vs\" (UniqueName: \"kubernetes.io/projected/7d8f3a5c-cce6-4eea-b2de-293d5f8d9288-kube-api-access-hx2vs\") pod \"auto-csr-approver-29538120-qpdgq\" (UID: \"7d8f3a5c-cce6-4eea-b2de-293d5f8d9288\") " pod="openshift-infra/auto-csr-approver-29538120-qpdgq" Feb 28 
14:00:00 crc kubenswrapper[4897]: I0228 14:00:00.398730 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e-config-volume\") pod \"collect-profiles-29538120-qt7b9\" (UID: \"d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538120-qt7b9" Feb 28 14:00:00 crc kubenswrapper[4897]: I0228 14:00:00.398807 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn295\" (UniqueName: \"kubernetes.io/projected/d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e-kube-api-access-bn295\") pod \"collect-profiles-29538120-qt7b9\" (UID: \"d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538120-qt7b9" Feb 28 14:00:00 crc kubenswrapper[4897]: I0228 14:00:00.421924 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx2vs\" (UniqueName: \"kubernetes.io/projected/7d8f3a5c-cce6-4eea-b2de-293d5f8d9288-kube-api-access-hx2vs\") pod \"auto-csr-approver-29538120-qpdgq\" (UID: \"7d8f3a5c-cce6-4eea-b2de-293d5f8d9288\") " pod="openshift-infra/auto-csr-approver-29538120-qpdgq" Feb 28 14:00:00 crc kubenswrapper[4897]: I0228 14:00:00.500699 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e-config-volume\") pod \"collect-profiles-29538120-qt7b9\" (UID: \"d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538120-qt7b9" Feb 28 14:00:00 crc kubenswrapper[4897]: I0228 14:00:00.501119 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn295\" (UniqueName: \"kubernetes.io/projected/d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e-kube-api-access-bn295\") pod 
\"collect-profiles-29538120-qt7b9\" (UID: \"d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538120-qt7b9" Feb 28 14:00:00 crc kubenswrapper[4897]: I0228 14:00:00.501221 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e-secret-volume\") pod \"collect-profiles-29538120-qt7b9\" (UID: \"d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538120-qt7b9" Feb 28 14:00:00 crc kubenswrapper[4897]: I0228 14:00:00.502276 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e-config-volume\") pod \"collect-profiles-29538120-qt7b9\" (UID: \"d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538120-qt7b9" Feb 28 14:00:00 crc kubenswrapper[4897]: I0228 14:00:00.504445 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e-secret-volume\") pod \"collect-profiles-29538120-qt7b9\" (UID: \"d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538120-qt7b9" Feb 28 14:00:00 crc kubenswrapper[4897]: I0228 14:00:00.516587 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538120-qpdgq" Feb 28 14:00:00 crc kubenswrapper[4897]: I0228 14:00:00.528794 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn295\" (UniqueName: \"kubernetes.io/projected/d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e-kube-api-access-bn295\") pod \"collect-profiles-29538120-qt7b9\" (UID: \"d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538120-qt7b9" Feb 28 14:00:00 crc kubenswrapper[4897]: I0228 14:00:00.591111 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29538120-qt7b9" Feb 28 14:00:01 crc kubenswrapper[4897]: I0228 14:00:01.058038 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538120-qpdgq"] Feb 28 14:00:01 crc kubenswrapper[4897]: I0228 14:00:01.162832 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29538120-qt7b9"] Feb 28 14:00:01 crc kubenswrapper[4897]: I0228 14:00:01.501906 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29538120-qt7b9" event={"ID":"d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e","Type":"ContainerStarted","Data":"8899aaa614bf9310b9997084d8aa7e53586a3192b878e9e0385128c2ef7976a4"} Feb 28 14:00:01 crc kubenswrapper[4897]: I0228 14:00:01.502347 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29538120-qt7b9" event={"ID":"d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e","Type":"ContainerStarted","Data":"c5c087108c5f3bb5e05568465afe652603b96151c0a70523934d29928e19731b"} Feb 28 14:00:01 crc kubenswrapper[4897]: I0228 14:00:01.506178 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538120-qpdgq" 
event={"ID":"7d8f3a5c-cce6-4eea-b2de-293d5f8d9288","Type":"ContainerStarted","Data":"a55566172f173428895bebcbc18d42ba3a9a22d77e9dfbad17cca21b392e732a"} Feb 28 14:00:01 crc kubenswrapper[4897]: I0228 14:00:01.531964 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29538120-qt7b9" podStartSLOduration=1.531946968 podStartE2EDuration="1.531946968s" podCreationTimestamp="2026-02-28 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 14:00:01.52948841 +0000 UTC m=+2615.771809067" watchObservedRunningTime="2026-02-28 14:00:01.531946968 +0000 UTC m=+2615.774267625" Feb 28 14:00:02 crc kubenswrapper[4897]: I0228 14:00:02.518541 4897 generic.go:334] "Generic (PLEG): container finished" podID="d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e" containerID="8899aaa614bf9310b9997084d8aa7e53586a3192b878e9e0385128c2ef7976a4" exitCode=0 Feb 28 14:00:02 crc kubenswrapper[4897]: I0228 14:00:02.518670 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29538120-qt7b9" event={"ID":"d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e","Type":"ContainerDied","Data":"8899aaa614bf9310b9997084d8aa7e53586a3192b878e9e0385128c2ef7976a4"} Feb 28 14:00:03 crc kubenswrapper[4897]: I0228 14:00:03.940011 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29538120-qt7b9" Feb 28 14:00:04 crc kubenswrapper[4897]: I0228 14:00:04.069922 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e-secret-volume\") pod \"d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e\" (UID: \"d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e\") " Feb 28 14:00:04 crc kubenswrapper[4897]: I0228 14:00:04.070161 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e-config-volume\") pod \"d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e\" (UID: \"d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e\") " Feb 28 14:00:04 crc kubenswrapper[4897]: I0228 14:00:04.070207 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bn295\" (UniqueName: \"kubernetes.io/projected/d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e-kube-api-access-bn295\") pod \"d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e\" (UID: \"d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e\") " Feb 28 14:00:04 crc kubenswrapper[4897]: I0228 14:00:04.071148 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e-config-volume" (OuterVolumeSpecName: "config-volume") pod "d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e" (UID: "d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 14:00:04 crc kubenswrapper[4897]: I0228 14:00:04.077255 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e-kube-api-access-bn295" (OuterVolumeSpecName: "kube-api-access-bn295") pod "d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e" (UID: "d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e"). 
InnerVolumeSpecName "kube-api-access-bn295". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:00:04 crc kubenswrapper[4897]: I0228 14:00:04.077299 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e" (UID: "d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 14:00:04 crc kubenswrapper[4897]: I0228 14:00:04.173219 4897 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 28 14:00:04 crc kubenswrapper[4897]: I0228 14:00:04.173262 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bn295\" (UniqueName: \"kubernetes.io/projected/d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e-kube-api-access-bn295\") on node \"crc\" DevicePath \"\"" Feb 28 14:00:04 crc kubenswrapper[4897]: I0228 14:00:04.173280 4897 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 28 14:00:04 crc kubenswrapper[4897]: I0228 14:00:04.545476 4897 generic.go:334] "Generic (PLEG): container finished" podID="7d8f3a5c-cce6-4eea-b2de-293d5f8d9288" containerID="6ac879a950b3e8cc9209504f481d9bef158ec252038ccddeeff6fc6a13e53bfb" exitCode=0 Feb 28 14:00:04 crc kubenswrapper[4897]: I0228 14:00:04.545583 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538120-qpdgq" event={"ID":"7d8f3a5c-cce6-4eea-b2de-293d5f8d9288","Type":"ContainerDied","Data":"6ac879a950b3e8cc9209504f481d9bef158ec252038ccddeeff6fc6a13e53bfb"} Feb 28 14:00:04 crc kubenswrapper[4897]: I0228 14:00:04.548436 4897 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29538120-qt7b9" event={"ID":"d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e","Type":"ContainerDied","Data":"c5c087108c5f3bb5e05568465afe652603b96151c0a70523934d29928e19731b"} Feb 28 14:00:04 crc kubenswrapper[4897]: I0228 14:00:04.548475 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5c087108c5f3bb5e05568465afe652603b96151c0a70523934d29928e19731b" Feb 28 14:00:04 crc kubenswrapper[4897]: I0228 14:00:04.548491 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29538120-qt7b9" Feb 28 14:00:04 crc kubenswrapper[4897]: I0228 14:00:04.632790 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29538075-ts824"] Feb 28 14:00:04 crc kubenswrapper[4897]: I0228 14:00:04.641375 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29538075-ts824"] Feb 28 14:00:05 crc kubenswrapper[4897]: I0228 14:00:05.943346 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538120-qpdgq" Feb 28 14:00:06 crc kubenswrapper[4897]: I0228 14:00:06.011106 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hx2vs\" (UniqueName: \"kubernetes.io/projected/7d8f3a5c-cce6-4eea-b2de-293d5f8d9288-kube-api-access-hx2vs\") pod \"7d8f3a5c-cce6-4eea-b2de-293d5f8d9288\" (UID: \"7d8f3a5c-cce6-4eea-b2de-293d5f8d9288\") " Feb 28 14:00:06 crc kubenswrapper[4897]: I0228 14:00:06.023512 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d8f3a5c-cce6-4eea-b2de-293d5f8d9288-kube-api-access-hx2vs" (OuterVolumeSpecName: "kube-api-access-hx2vs") pod "7d8f3a5c-cce6-4eea-b2de-293d5f8d9288" (UID: "7d8f3a5c-cce6-4eea-b2de-293d5f8d9288"). InnerVolumeSpecName "kube-api-access-hx2vs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:00:06 crc kubenswrapper[4897]: I0228 14:00:06.114288 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hx2vs\" (UniqueName: \"kubernetes.io/projected/7d8f3a5c-cce6-4eea-b2de-293d5f8d9288-kube-api-access-hx2vs\") on node \"crc\" DevicePath \"\"" Feb 28 14:00:06 crc kubenswrapper[4897]: I0228 14:00:06.479048 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1" path="/var/lib/kubelet/pods/b3dcfd2e-5074-4d1f-88b3-4aa34c63c3d1/volumes" Feb 28 14:00:06 crc kubenswrapper[4897]: I0228 14:00:06.581652 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538120-qpdgq" event={"ID":"7d8f3a5c-cce6-4eea-b2de-293d5f8d9288","Type":"ContainerDied","Data":"a55566172f173428895bebcbc18d42ba3a9a22d77e9dfbad17cca21b392e732a"} Feb 28 14:00:06 crc kubenswrapper[4897]: I0228 14:00:06.581704 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a55566172f173428895bebcbc18d42ba3a9a22d77e9dfbad17cca21b392e732a" Feb 28 14:00:06 
crc kubenswrapper[4897]: I0228 14:00:06.581735 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538120-qpdgq" Feb 28 14:00:07 crc kubenswrapper[4897]: I0228 14:00:07.023257 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538114-fhwzf"] Feb 28 14:00:07 crc kubenswrapper[4897]: I0228 14:00:07.032454 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538114-fhwzf"] Feb 28 14:00:08 crc kubenswrapper[4897]: I0228 14:00:08.474367 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b8ed1ff-c04b-483a-b3ac-21e862d6b5f7" path="/var/lib/kubelet/pods/9b8ed1ff-c04b-483a-b3ac-21e862d6b5f7/volumes" Feb 28 14:00:46 crc kubenswrapper[4897]: I0228 14:00:46.588775 4897 scope.go:117] "RemoveContainer" containerID="4b6e793d221218556bd5e1f277096807ef26420eb39f14ae322206c1413b84c5" Feb 28 14:00:46 crc kubenswrapper[4897]: I0228 14:00:46.618187 4897 scope.go:117] "RemoveContainer" containerID="6ad5828a8bebd288c060af553f20e731d13d0338bf7dfb913456e608ea62a8d1" Feb 28 14:01:00 crc kubenswrapper[4897]: I0228 14:01:00.188527 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29538121-psnhm"] Feb 28 14:01:00 crc kubenswrapper[4897]: E0228 14:01:00.190957 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e" containerName="collect-profiles" Feb 28 14:01:00 crc kubenswrapper[4897]: I0228 14:01:00.190978 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e" containerName="collect-profiles" Feb 28 14:01:00 crc kubenswrapper[4897]: E0228 14:01:00.190992 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d8f3a5c-cce6-4eea-b2de-293d5f8d9288" containerName="oc" Feb 28 14:01:00 crc kubenswrapper[4897]: I0228 14:01:00.190999 4897 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="7d8f3a5c-cce6-4eea-b2de-293d5f8d9288" containerName="oc" Feb 28 14:01:00 crc kubenswrapper[4897]: I0228 14:01:00.191267 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d8f3a5c-cce6-4eea-b2de-293d5f8d9288" containerName="oc" Feb 28 14:01:00 crc kubenswrapper[4897]: I0228 14:01:00.191287 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e" containerName="collect-profiles" Feb 28 14:01:00 crc kubenswrapper[4897]: I0228 14:01:00.192093 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29538121-psnhm" Feb 28 14:01:00 crc kubenswrapper[4897]: I0228 14:01:00.218244 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29538121-psnhm"] Feb 28 14:01:00 crc kubenswrapper[4897]: I0228 14:01:00.330523 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-896jc\" (UniqueName: \"kubernetes.io/projected/24ea6562-040d-4eb4-865b-692acf8b2a46-kube-api-access-896jc\") pod \"keystone-cron-29538121-psnhm\" (UID: \"24ea6562-040d-4eb4-865b-692acf8b2a46\") " pod="openstack/keystone-cron-29538121-psnhm" Feb 28 14:01:00 crc kubenswrapper[4897]: I0228 14:01:00.330930 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24ea6562-040d-4eb4-865b-692acf8b2a46-config-data\") pod \"keystone-cron-29538121-psnhm\" (UID: \"24ea6562-040d-4eb4-865b-692acf8b2a46\") " pod="openstack/keystone-cron-29538121-psnhm" Feb 28 14:01:00 crc kubenswrapper[4897]: I0228 14:01:00.331243 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/24ea6562-040d-4eb4-865b-692acf8b2a46-fernet-keys\") pod \"keystone-cron-29538121-psnhm\" (UID: \"24ea6562-040d-4eb4-865b-692acf8b2a46\") " 
pod="openstack/keystone-cron-29538121-psnhm" Feb 28 14:01:00 crc kubenswrapper[4897]: I0228 14:01:00.331478 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24ea6562-040d-4eb4-865b-692acf8b2a46-combined-ca-bundle\") pod \"keystone-cron-29538121-psnhm\" (UID: \"24ea6562-040d-4eb4-865b-692acf8b2a46\") " pod="openstack/keystone-cron-29538121-psnhm" Feb 28 14:01:00 crc kubenswrapper[4897]: I0228 14:01:00.434367 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24ea6562-040d-4eb4-865b-692acf8b2a46-config-data\") pod \"keystone-cron-29538121-psnhm\" (UID: \"24ea6562-040d-4eb4-865b-692acf8b2a46\") " pod="openstack/keystone-cron-29538121-psnhm" Feb 28 14:01:00 crc kubenswrapper[4897]: I0228 14:01:00.434518 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/24ea6562-040d-4eb4-865b-692acf8b2a46-fernet-keys\") pod \"keystone-cron-29538121-psnhm\" (UID: \"24ea6562-040d-4eb4-865b-692acf8b2a46\") " pod="openstack/keystone-cron-29538121-psnhm" Feb 28 14:01:00 crc kubenswrapper[4897]: I0228 14:01:00.434580 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24ea6562-040d-4eb4-865b-692acf8b2a46-combined-ca-bundle\") pod \"keystone-cron-29538121-psnhm\" (UID: \"24ea6562-040d-4eb4-865b-692acf8b2a46\") " pod="openstack/keystone-cron-29538121-psnhm" Feb 28 14:01:00 crc kubenswrapper[4897]: I0228 14:01:00.434751 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-896jc\" (UniqueName: \"kubernetes.io/projected/24ea6562-040d-4eb4-865b-692acf8b2a46-kube-api-access-896jc\") pod \"keystone-cron-29538121-psnhm\" (UID: \"24ea6562-040d-4eb4-865b-692acf8b2a46\") " 
pod="openstack/keystone-cron-29538121-psnhm" Feb 28 14:01:00 crc kubenswrapper[4897]: I0228 14:01:00.444751 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24ea6562-040d-4eb4-865b-692acf8b2a46-config-data\") pod \"keystone-cron-29538121-psnhm\" (UID: \"24ea6562-040d-4eb4-865b-692acf8b2a46\") " pod="openstack/keystone-cron-29538121-psnhm" Feb 28 14:01:00 crc kubenswrapper[4897]: I0228 14:01:00.444809 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/24ea6562-040d-4eb4-865b-692acf8b2a46-fernet-keys\") pod \"keystone-cron-29538121-psnhm\" (UID: \"24ea6562-040d-4eb4-865b-692acf8b2a46\") " pod="openstack/keystone-cron-29538121-psnhm" Feb 28 14:01:00 crc kubenswrapper[4897]: I0228 14:01:00.447601 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24ea6562-040d-4eb4-865b-692acf8b2a46-combined-ca-bundle\") pod \"keystone-cron-29538121-psnhm\" (UID: \"24ea6562-040d-4eb4-865b-692acf8b2a46\") " pod="openstack/keystone-cron-29538121-psnhm" Feb 28 14:01:00 crc kubenswrapper[4897]: I0228 14:01:00.465509 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-896jc\" (UniqueName: \"kubernetes.io/projected/24ea6562-040d-4eb4-865b-692acf8b2a46-kube-api-access-896jc\") pod \"keystone-cron-29538121-psnhm\" (UID: \"24ea6562-040d-4eb4-865b-692acf8b2a46\") " pod="openstack/keystone-cron-29538121-psnhm" Feb 28 14:01:00 crc kubenswrapper[4897]: I0228 14:01:00.531656 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29538121-psnhm" Feb 28 14:01:01 crc kubenswrapper[4897]: W0228 14:01:01.037291 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24ea6562_040d_4eb4_865b_692acf8b2a46.slice/crio-a76c85bdf161336e41c56d2b0d2a8bc6c466f23edeeb1bb78c5ce81a026ddaf9 WatchSource:0}: Error finding container a76c85bdf161336e41c56d2b0d2a8bc6c466f23edeeb1bb78c5ce81a026ddaf9: Status 404 returned error can't find the container with id a76c85bdf161336e41c56d2b0d2a8bc6c466f23edeeb1bb78c5ce81a026ddaf9 Feb 28 14:01:01 crc kubenswrapper[4897]: I0228 14:01:01.037540 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29538121-psnhm"] Feb 28 14:01:01 crc kubenswrapper[4897]: I0228 14:01:01.269103 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29538121-psnhm" event={"ID":"24ea6562-040d-4eb4-865b-692acf8b2a46","Type":"ContainerStarted","Data":"9f08122313f7801920984c8bc42e783cd96263ef16c11ff26aa4753a6f704447"} Feb 28 14:01:01 crc kubenswrapper[4897]: I0228 14:01:01.269646 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29538121-psnhm" event={"ID":"24ea6562-040d-4eb4-865b-692acf8b2a46","Type":"ContainerStarted","Data":"a76c85bdf161336e41c56d2b0d2a8bc6c466f23edeeb1bb78c5ce81a026ddaf9"} Feb 28 14:01:01 crc kubenswrapper[4897]: I0228 14:01:01.298715 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29538121-psnhm" podStartSLOduration=1.298697837 podStartE2EDuration="1.298697837s" podCreationTimestamp="2026-02-28 14:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 14:01:01.290118209 +0000 UTC m=+2675.532438906" watchObservedRunningTime="2026-02-28 14:01:01.298697837 +0000 UTC m=+2675.541018494" Feb 28 14:01:04 crc 
kubenswrapper[4897]: I0228 14:01:04.302148 4897 generic.go:334] "Generic (PLEG): container finished" podID="24ea6562-040d-4eb4-865b-692acf8b2a46" containerID="9f08122313f7801920984c8bc42e783cd96263ef16c11ff26aa4753a6f704447" exitCode=0 Feb 28 14:01:04 crc kubenswrapper[4897]: I0228 14:01:04.302291 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29538121-psnhm" event={"ID":"24ea6562-040d-4eb4-865b-692acf8b2a46","Type":"ContainerDied","Data":"9f08122313f7801920984c8bc42e783cd96263ef16c11ff26aa4753a6f704447"} Feb 28 14:01:05 crc kubenswrapper[4897]: I0228 14:01:05.665989 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29538121-psnhm" Feb 28 14:01:05 crc kubenswrapper[4897]: I0228 14:01:05.755612 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24ea6562-040d-4eb4-865b-692acf8b2a46-combined-ca-bundle\") pod \"24ea6562-040d-4eb4-865b-692acf8b2a46\" (UID: \"24ea6562-040d-4eb4-865b-692acf8b2a46\") " Feb 28 14:01:05 crc kubenswrapper[4897]: I0228 14:01:05.755916 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24ea6562-040d-4eb4-865b-692acf8b2a46-config-data\") pod \"24ea6562-040d-4eb4-865b-692acf8b2a46\" (UID: \"24ea6562-040d-4eb4-865b-692acf8b2a46\") " Feb 28 14:01:05 crc kubenswrapper[4897]: I0228 14:01:05.755993 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/24ea6562-040d-4eb4-865b-692acf8b2a46-fernet-keys\") pod \"24ea6562-040d-4eb4-865b-692acf8b2a46\" (UID: \"24ea6562-040d-4eb4-865b-692acf8b2a46\") " Feb 28 14:01:05 crc kubenswrapper[4897]: I0228 14:01:05.756018 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-896jc\" (UniqueName: 
\"kubernetes.io/projected/24ea6562-040d-4eb4-865b-692acf8b2a46-kube-api-access-896jc\") pod \"24ea6562-040d-4eb4-865b-692acf8b2a46\" (UID: \"24ea6562-040d-4eb4-865b-692acf8b2a46\") " Feb 28 14:01:05 crc kubenswrapper[4897]: I0228 14:01:05.762628 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24ea6562-040d-4eb4-865b-692acf8b2a46-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "24ea6562-040d-4eb4-865b-692acf8b2a46" (UID: "24ea6562-040d-4eb4-865b-692acf8b2a46"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 14:01:05 crc kubenswrapper[4897]: I0228 14:01:05.764728 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24ea6562-040d-4eb4-865b-692acf8b2a46-kube-api-access-896jc" (OuterVolumeSpecName: "kube-api-access-896jc") pod "24ea6562-040d-4eb4-865b-692acf8b2a46" (UID: "24ea6562-040d-4eb4-865b-692acf8b2a46"). InnerVolumeSpecName "kube-api-access-896jc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:01:05 crc kubenswrapper[4897]: I0228 14:01:05.785578 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24ea6562-040d-4eb4-865b-692acf8b2a46-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24ea6562-040d-4eb4-865b-692acf8b2a46" (UID: "24ea6562-040d-4eb4-865b-692acf8b2a46"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 14:01:05 crc kubenswrapper[4897]: I0228 14:01:05.846558 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24ea6562-040d-4eb4-865b-692acf8b2a46-config-data" (OuterVolumeSpecName: "config-data") pod "24ea6562-040d-4eb4-865b-692acf8b2a46" (UID: "24ea6562-040d-4eb4-865b-692acf8b2a46"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 14:01:05 crc kubenswrapper[4897]: I0228 14:01:05.857849 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24ea6562-040d-4eb4-865b-692acf8b2a46-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 14:01:05 crc kubenswrapper[4897]: I0228 14:01:05.857881 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-896jc\" (UniqueName: \"kubernetes.io/projected/24ea6562-040d-4eb4-865b-692acf8b2a46-kube-api-access-896jc\") on node \"crc\" DevicePath \"\"" Feb 28 14:01:05 crc kubenswrapper[4897]: I0228 14:01:05.857891 4897 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/24ea6562-040d-4eb4-865b-692acf8b2a46-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 28 14:01:05 crc kubenswrapper[4897]: I0228 14:01:05.857902 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24ea6562-040d-4eb4-865b-692acf8b2a46-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 14:01:06 crc kubenswrapper[4897]: I0228 14:01:06.326560 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29538121-psnhm" event={"ID":"24ea6562-040d-4eb4-865b-692acf8b2a46","Type":"ContainerDied","Data":"a76c85bdf161336e41c56d2b0d2a8bc6c466f23edeeb1bb78c5ce81a026ddaf9"} Feb 28 14:01:06 crc kubenswrapper[4897]: I0228 14:01:06.326942 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a76c85bdf161336e41c56d2b0d2a8bc6c466f23edeeb1bb78c5ce81a026ddaf9" Feb 28 14:01:06 crc kubenswrapper[4897]: I0228 14:01:06.326622 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29538121-psnhm" Feb 28 14:01:32 crc kubenswrapper[4897]: I0228 14:01:32.513072 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xfwtn"] Feb 28 14:01:32 crc kubenswrapper[4897]: E0228 14:01:32.514389 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24ea6562-040d-4eb4-865b-692acf8b2a46" containerName="keystone-cron" Feb 28 14:01:32 crc kubenswrapper[4897]: I0228 14:01:32.514412 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="24ea6562-040d-4eb4-865b-692acf8b2a46" containerName="keystone-cron" Feb 28 14:01:32 crc kubenswrapper[4897]: I0228 14:01:32.514854 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="24ea6562-040d-4eb4-865b-692acf8b2a46" containerName="keystone-cron" Feb 28 14:01:32 crc kubenswrapper[4897]: I0228 14:01:32.517157 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xfwtn" Feb 28 14:01:32 crc kubenswrapper[4897]: I0228 14:01:32.527787 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xfwtn"] Feb 28 14:01:32 crc kubenswrapper[4897]: I0228 14:01:32.581598 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63e1ea28-bbdc-427c-98f6-41bf30ecf060-catalog-content\") pod \"redhat-operators-xfwtn\" (UID: \"63e1ea28-bbdc-427c-98f6-41bf30ecf060\") " pod="openshift-marketplace/redhat-operators-xfwtn" Feb 28 14:01:32 crc kubenswrapper[4897]: I0228 14:01:32.581651 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xrgh\" (UniqueName: \"kubernetes.io/projected/63e1ea28-bbdc-427c-98f6-41bf30ecf060-kube-api-access-7xrgh\") pod \"redhat-operators-xfwtn\" (UID: \"63e1ea28-bbdc-427c-98f6-41bf30ecf060\") " 
pod="openshift-marketplace/redhat-operators-xfwtn" Feb 28 14:01:32 crc kubenswrapper[4897]: I0228 14:01:32.581830 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63e1ea28-bbdc-427c-98f6-41bf30ecf060-utilities\") pod \"redhat-operators-xfwtn\" (UID: \"63e1ea28-bbdc-427c-98f6-41bf30ecf060\") " pod="openshift-marketplace/redhat-operators-xfwtn" Feb 28 14:01:32 crc kubenswrapper[4897]: I0228 14:01:32.684198 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63e1ea28-bbdc-427c-98f6-41bf30ecf060-utilities\") pod \"redhat-operators-xfwtn\" (UID: \"63e1ea28-bbdc-427c-98f6-41bf30ecf060\") " pod="openshift-marketplace/redhat-operators-xfwtn" Feb 28 14:01:32 crc kubenswrapper[4897]: I0228 14:01:32.684259 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63e1ea28-bbdc-427c-98f6-41bf30ecf060-catalog-content\") pod \"redhat-operators-xfwtn\" (UID: \"63e1ea28-bbdc-427c-98f6-41bf30ecf060\") " pod="openshift-marketplace/redhat-operators-xfwtn" Feb 28 14:01:32 crc kubenswrapper[4897]: I0228 14:01:32.684284 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xrgh\" (UniqueName: \"kubernetes.io/projected/63e1ea28-bbdc-427c-98f6-41bf30ecf060-kube-api-access-7xrgh\") pod \"redhat-operators-xfwtn\" (UID: \"63e1ea28-bbdc-427c-98f6-41bf30ecf060\") " pod="openshift-marketplace/redhat-operators-xfwtn" Feb 28 14:01:32 crc kubenswrapper[4897]: I0228 14:01:32.684675 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63e1ea28-bbdc-427c-98f6-41bf30ecf060-utilities\") pod \"redhat-operators-xfwtn\" (UID: \"63e1ea28-bbdc-427c-98f6-41bf30ecf060\") " pod="openshift-marketplace/redhat-operators-xfwtn" Feb 28 
14:01:32 crc kubenswrapper[4897]: I0228 14:01:32.684723 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63e1ea28-bbdc-427c-98f6-41bf30ecf060-catalog-content\") pod \"redhat-operators-xfwtn\" (UID: \"63e1ea28-bbdc-427c-98f6-41bf30ecf060\") " pod="openshift-marketplace/redhat-operators-xfwtn" Feb 28 14:01:32 crc kubenswrapper[4897]: I0228 14:01:32.704254 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xrgh\" (UniqueName: \"kubernetes.io/projected/63e1ea28-bbdc-427c-98f6-41bf30ecf060-kube-api-access-7xrgh\") pod \"redhat-operators-xfwtn\" (UID: \"63e1ea28-bbdc-427c-98f6-41bf30ecf060\") " pod="openshift-marketplace/redhat-operators-xfwtn" Feb 28 14:01:32 crc kubenswrapper[4897]: I0228 14:01:32.841050 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xfwtn" Feb 28 14:01:33 crc kubenswrapper[4897]: I0228 14:01:33.321330 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xfwtn"] Feb 28 14:01:33 crc kubenswrapper[4897]: I0228 14:01:33.674793 4897 generic.go:334] "Generic (PLEG): container finished" podID="63e1ea28-bbdc-427c-98f6-41bf30ecf060" containerID="f70a5822e1f88d575cc48d2c0e16c6361b67c41d89ca826459affcf3775e92fb" exitCode=0 Feb 28 14:01:33 crc kubenswrapper[4897]: I0228 14:01:33.674841 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xfwtn" event={"ID":"63e1ea28-bbdc-427c-98f6-41bf30ecf060","Type":"ContainerDied","Data":"f70a5822e1f88d575cc48d2c0e16c6361b67c41d89ca826459affcf3775e92fb"} Feb 28 14:01:33 crc kubenswrapper[4897]: I0228 14:01:33.674899 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xfwtn" 
event={"ID":"63e1ea28-bbdc-427c-98f6-41bf30ecf060","Type":"ContainerStarted","Data":"bd6e99e32ff93da53d7736174f716b95b49dc4bcf2440c3b00905c631eac6bc2"} Feb 28 14:01:33 crc kubenswrapper[4897]: I0228 14:01:33.677048 4897 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 14:01:34 crc kubenswrapper[4897]: E0228 14:01:34.319893 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 28 14:01:34 crc kubenswrapper[4897]: E0228 14:01:34.320394 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xrgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-xfwtn_openshift-marketplace(63e1ea28-bbdc-427c-98f6-41bf30ecf060): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 14:01:34 crc kubenswrapper[4897]: E0228 14:01:34.321632 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading 
signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-operators-xfwtn" podUID="63e1ea28-bbdc-427c-98f6-41bf30ecf060" Feb 28 14:01:34 crc kubenswrapper[4897]: E0228 14:01:34.690564 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-xfwtn" podUID="63e1ea28-bbdc-427c-98f6-41bf30ecf060" Feb 28 14:01:48 crc kubenswrapper[4897]: E0228 14:01:48.073459 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 28 14:01:48 crc kubenswrapper[4897]: E0228 14:01:48.074116 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xrgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-xfwtn_openshift-marketplace(63e1ea28-bbdc-427c-98f6-41bf30ecf060): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 14:01:48 crc kubenswrapper[4897]: E0228 14:01:48.075360 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading 
signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-operators-xfwtn" podUID="63e1ea28-bbdc-427c-98f6-41bf30ecf060" Feb 28 14:02:00 crc kubenswrapper[4897]: I0228 14:02:00.182275 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538122-wsj4k"] Feb 28 14:02:00 crc kubenswrapper[4897]: I0228 14:02:00.185237 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538122-wsj4k" Feb 28 14:02:00 crc kubenswrapper[4897]: I0228 14:02:00.189244 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 14:02:00 crc kubenswrapper[4897]: I0228 14:02:00.189666 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 14:02:00 crc kubenswrapper[4897]: I0228 14:02:00.189788 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 14:02:00 crc kubenswrapper[4897]: I0228 14:02:00.196798 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538122-wsj4k"] Feb 28 14:02:00 crc kubenswrapper[4897]: I0228 14:02:00.223552 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6w6f\" (UniqueName: \"kubernetes.io/projected/fd80f42b-c46b-4599-9fe2-00993454a32c-kube-api-access-m6w6f\") pod \"auto-csr-approver-29538122-wsj4k\" (UID: \"fd80f42b-c46b-4599-9fe2-00993454a32c\") " pod="openshift-infra/auto-csr-approver-29538122-wsj4k" Feb 28 14:02:00 crc kubenswrapper[4897]: I0228 14:02:00.325409 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6w6f\" (UniqueName: 
\"kubernetes.io/projected/fd80f42b-c46b-4599-9fe2-00993454a32c-kube-api-access-m6w6f\") pod \"auto-csr-approver-29538122-wsj4k\" (UID: \"fd80f42b-c46b-4599-9fe2-00993454a32c\") " pod="openshift-infra/auto-csr-approver-29538122-wsj4k" Feb 28 14:02:00 crc kubenswrapper[4897]: I0228 14:02:00.350947 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6w6f\" (UniqueName: \"kubernetes.io/projected/fd80f42b-c46b-4599-9fe2-00993454a32c-kube-api-access-m6w6f\") pod \"auto-csr-approver-29538122-wsj4k\" (UID: \"fd80f42b-c46b-4599-9fe2-00993454a32c\") " pod="openshift-infra/auto-csr-approver-29538122-wsj4k" Feb 28 14:02:00 crc kubenswrapper[4897]: I0228 14:02:00.515226 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538122-wsj4k" Feb 28 14:02:01 crc kubenswrapper[4897]: I0228 14:02:01.064928 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538122-wsj4k"] Feb 28 14:02:02 crc kubenswrapper[4897]: I0228 14:02:02.045576 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538122-wsj4k" event={"ID":"fd80f42b-c46b-4599-9fe2-00993454a32c","Type":"ContainerStarted","Data":"c33467d5c58257756b257805bbc2bf3c7b9e4470e9f0e9a9e230802023b5461b"} Feb 28 14:02:03 crc kubenswrapper[4897]: I0228 14:02:03.055782 4897 generic.go:334] "Generic (PLEG): container finished" podID="fd80f42b-c46b-4599-9fe2-00993454a32c" containerID="d53dba889f3efc44406212325d2c49824f5e648ac67ac62d9ff8022a7ac2b54b" exitCode=0 Feb 28 14:02:03 crc kubenswrapper[4897]: I0228 14:02:03.055868 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538122-wsj4k" event={"ID":"fd80f42b-c46b-4599-9fe2-00993454a32c","Type":"ContainerDied","Data":"d53dba889f3efc44406212325d2c49824f5e648ac67ac62d9ff8022a7ac2b54b"} Feb 28 14:02:03 crc kubenswrapper[4897]: I0228 14:02:03.371279 4897 
patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 14:02:03 crc kubenswrapper[4897]: I0228 14:02:03.371579 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 14:02:03 crc kubenswrapper[4897]: E0228 14:02:03.475801 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-xfwtn" podUID="63e1ea28-bbdc-427c-98f6-41bf30ecf060" Feb 28 14:02:04 crc kubenswrapper[4897]: I0228 14:02:04.455983 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538122-wsj4k" Feb 28 14:02:04 crc kubenswrapper[4897]: I0228 14:02:04.527444 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6w6f\" (UniqueName: \"kubernetes.io/projected/fd80f42b-c46b-4599-9fe2-00993454a32c-kube-api-access-m6w6f\") pod \"fd80f42b-c46b-4599-9fe2-00993454a32c\" (UID: \"fd80f42b-c46b-4599-9fe2-00993454a32c\") " Feb 28 14:02:04 crc kubenswrapper[4897]: I0228 14:02:04.536160 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd80f42b-c46b-4599-9fe2-00993454a32c-kube-api-access-m6w6f" (OuterVolumeSpecName: "kube-api-access-m6w6f") pod "fd80f42b-c46b-4599-9fe2-00993454a32c" (UID: "fd80f42b-c46b-4599-9fe2-00993454a32c"). 
InnerVolumeSpecName "kube-api-access-m6w6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:02:04 crc kubenswrapper[4897]: I0228 14:02:04.631595 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6w6f\" (UniqueName: \"kubernetes.io/projected/fd80f42b-c46b-4599-9fe2-00993454a32c-kube-api-access-m6w6f\") on node \"crc\" DevicePath \"\"" Feb 28 14:02:05 crc kubenswrapper[4897]: I0228 14:02:05.084848 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538122-wsj4k" event={"ID":"fd80f42b-c46b-4599-9fe2-00993454a32c","Type":"ContainerDied","Data":"c33467d5c58257756b257805bbc2bf3c7b9e4470e9f0e9a9e230802023b5461b"} Feb 28 14:02:05 crc kubenswrapper[4897]: I0228 14:02:05.084921 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c33467d5c58257756b257805bbc2bf3c7b9e4470e9f0e9a9e230802023b5461b" Feb 28 14:02:05 crc kubenswrapper[4897]: I0228 14:02:05.084937 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538122-wsj4k" Feb 28 14:02:05 crc kubenswrapper[4897]: I0228 14:02:05.560111 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538116-hqwjz"] Feb 28 14:02:05 crc kubenswrapper[4897]: I0228 14:02:05.575846 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538116-hqwjz"] Feb 28 14:02:06 crc kubenswrapper[4897]: I0228 14:02:06.470874 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94669ad4-7c91-4a5a-b8d3-3b62b154ce57" path="/var/lib/kubelet/pods/94669ad4-7c91-4a5a-b8d3-3b62b154ce57/volumes" Feb 28 14:02:19 crc kubenswrapper[4897]: I0228 14:02:19.301349 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xfwtn" event={"ID":"63e1ea28-bbdc-427c-98f6-41bf30ecf060","Type":"ContainerStarted","Data":"cd4161859b8e2b0926e81e8df3e43841b18149592b40f7ab0a1ff9afb4fc1a4a"} Feb 28 14:02:24 crc kubenswrapper[4897]: I0228 14:02:24.355634 4897 generic.go:334] "Generic (PLEG): container finished" podID="63e1ea28-bbdc-427c-98f6-41bf30ecf060" containerID="cd4161859b8e2b0926e81e8df3e43841b18149592b40f7ab0a1ff9afb4fc1a4a" exitCode=0 Feb 28 14:02:24 crc kubenswrapper[4897]: I0228 14:02:24.355753 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xfwtn" event={"ID":"63e1ea28-bbdc-427c-98f6-41bf30ecf060","Type":"ContainerDied","Data":"cd4161859b8e2b0926e81e8df3e43841b18149592b40f7ab0a1ff9afb4fc1a4a"} Feb 28 14:02:25 crc kubenswrapper[4897]: I0228 14:02:25.370920 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xfwtn" event={"ID":"63e1ea28-bbdc-427c-98f6-41bf30ecf060","Type":"ContainerStarted","Data":"566d28ac22d35357e145bbbc13753131de145d4df5694e372152443bf8fc4173"} Feb 28 14:02:25 crc kubenswrapper[4897]: I0228 14:02:25.414618 4897 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xfwtn" podStartSLOduration=2.315670547 podStartE2EDuration="53.414591644s" podCreationTimestamp="2026-02-28 14:01:32 +0000 UTC" firstStartedPulling="2026-02-28 14:01:33.6768271 +0000 UTC m=+2707.919147757" lastFinishedPulling="2026-02-28 14:02:24.775748167 +0000 UTC m=+2759.018068854" observedRunningTime="2026-02-28 14:02:25.397146892 +0000 UTC m=+2759.639467559" watchObservedRunningTime="2026-02-28 14:02:25.414591644 +0000 UTC m=+2759.656912331" Feb 28 14:02:29 crc kubenswrapper[4897]: I0228 14:02:29.935413 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qmq64"] Feb 28 14:02:29 crc kubenswrapper[4897]: E0228 14:02:29.936684 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd80f42b-c46b-4599-9fe2-00993454a32c" containerName="oc" Feb 28 14:02:29 crc kubenswrapper[4897]: I0228 14:02:29.936706 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd80f42b-c46b-4599-9fe2-00993454a32c" containerName="oc" Feb 28 14:02:29 crc kubenswrapper[4897]: I0228 14:02:29.937069 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd80f42b-c46b-4599-9fe2-00993454a32c" containerName="oc" Feb 28 14:02:29 crc kubenswrapper[4897]: I0228 14:02:29.939462 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qmq64" Feb 28 14:02:29 crc kubenswrapper[4897]: I0228 14:02:29.947094 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qmq64"] Feb 28 14:02:30 crc kubenswrapper[4897]: I0228 14:02:30.033996 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35894d31-dc84-4b14-9a5a-08e0bc50ea11-utilities\") pod \"community-operators-qmq64\" (UID: \"35894d31-dc84-4b14-9a5a-08e0bc50ea11\") " pod="openshift-marketplace/community-operators-qmq64" Feb 28 14:02:30 crc kubenswrapper[4897]: I0228 14:02:30.034283 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t994g\" (UniqueName: \"kubernetes.io/projected/35894d31-dc84-4b14-9a5a-08e0bc50ea11-kube-api-access-t994g\") pod \"community-operators-qmq64\" (UID: \"35894d31-dc84-4b14-9a5a-08e0bc50ea11\") " pod="openshift-marketplace/community-operators-qmq64" Feb 28 14:02:30 crc kubenswrapper[4897]: I0228 14:02:30.034363 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35894d31-dc84-4b14-9a5a-08e0bc50ea11-catalog-content\") pod \"community-operators-qmq64\" (UID: \"35894d31-dc84-4b14-9a5a-08e0bc50ea11\") " pod="openshift-marketplace/community-operators-qmq64" Feb 28 14:02:30 crc kubenswrapper[4897]: I0228 14:02:30.135617 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t994g\" (UniqueName: \"kubernetes.io/projected/35894d31-dc84-4b14-9a5a-08e0bc50ea11-kube-api-access-t994g\") pod \"community-operators-qmq64\" (UID: \"35894d31-dc84-4b14-9a5a-08e0bc50ea11\") " pod="openshift-marketplace/community-operators-qmq64" Feb 28 14:02:30 crc kubenswrapper[4897]: I0228 14:02:30.135667 4897 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35894d31-dc84-4b14-9a5a-08e0bc50ea11-catalog-content\") pod \"community-operators-qmq64\" (UID: \"35894d31-dc84-4b14-9a5a-08e0bc50ea11\") " pod="openshift-marketplace/community-operators-qmq64" Feb 28 14:02:30 crc kubenswrapper[4897]: I0228 14:02:30.135728 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35894d31-dc84-4b14-9a5a-08e0bc50ea11-utilities\") pod \"community-operators-qmq64\" (UID: \"35894d31-dc84-4b14-9a5a-08e0bc50ea11\") " pod="openshift-marketplace/community-operators-qmq64" Feb 28 14:02:30 crc kubenswrapper[4897]: I0228 14:02:30.136225 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35894d31-dc84-4b14-9a5a-08e0bc50ea11-catalog-content\") pod \"community-operators-qmq64\" (UID: \"35894d31-dc84-4b14-9a5a-08e0bc50ea11\") " pod="openshift-marketplace/community-operators-qmq64" Feb 28 14:02:30 crc kubenswrapper[4897]: I0228 14:02:30.136246 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35894d31-dc84-4b14-9a5a-08e0bc50ea11-utilities\") pod \"community-operators-qmq64\" (UID: \"35894d31-dc84-4b14-9a5a-08e0bc50ea11\") " pod="openshift-marketplace/community-operators-qmq64" Feb 28 14:02:30 crc kubenswrapper[4897]: I0228 14:02:30.176611 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t994g\" (UniqueName: \"kubernetes.io/projected/35894d31-dc84-4b14-9a5a-08e0bc50ea11-kube-api-access-t994g\") pod \"community-operators-qmq64\" (UID: \"35894d31-dc84-4b14-9a5a-08e0bc50ea11\") " pod="openshift-marketplace/community-operators-qmq64" Feb 28 14:02:30 crc kubenswrapper[4897]: I0228 14:02:30.289925 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qmq64" Feb 28 14:02:30 crc kubenswrapper[4897]: I0228 14:02:30.829137 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qmq64"] Feb 28 14:02:31 crc kubenswrapper[4897]: I0228 14:02:31.435841 4897 generic.go:334] "Generic (PLEG): container finished" podID="35894d31-dc84-4b14-9a5a-08e0bc50ea11" containerID="042673c4cd91c17244c4ee5ceab9ee68184ada512bceb689de9ed271397d4e25" exitCode=0 Feb 28 14:02:31 crc kubenswrapper[4897]: I0228 14:02:31.435917 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qmq64" event={"ID":"35894d31-dc84-4b14-9a5a-08e0bc50ea11","Type":"ContainerDied","Data":"042673c4cd91c17244c4ee5ceab9ee68184ada512bceb689de9ed271397d4e25"} Feb 28 14:02:31 crc kubenswrapper[4897]: I0228 14:02:31.436201 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qmq64" event={"ID":"35894d31-dc84-4b14-9a5a-08e0bc50ea11","Type":"ContainerStarted","Data":"133df5054e1cbe553935df7091451be01ccc1747ceedcc21811c65947562450d"} Feb 28 14:02:32 crc kubenswrapper[4897]: E0228 14:02:32.051116 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 28 14:02:32 crc kubenswrapper[4897]: E0228 14:02:32.051294 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog 
--cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t994g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-qmq64_openshift-marketplace(35894d31-dc84-4b14-9a5a-08e0bc50ea11): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 14:02:32 crc kubenswrapper[4897]: E0228 14:02:32.053379 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest 
list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-qmq64" podUID="35894d31-dc84-4b14-9a5a-08e0bc50ea11" Feb 28 14:02:32 crc kubenswrapper[4897]: E0228 14:02:32.451271 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-qmq64" podUID="35894d31-dc84-4b14-9a5a-08e0bc50ea11" Feb 28 14:02:32 crc kubenswrapper[4897]: I0228 14:02:32.841361 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xfwtn" Feb 28 14:02:32 crc kubenswrapper[4897]: I0228 14:02:32.841694 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xfwtn" Feb 28 14:02:33 crc kubenswrapper[4897]: I0228 14:02:33.370829 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 14:02:33 crc kubenswrapper[4897]: I0228 14:02:33.370923 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 14:02:33 crc kubenswrapper[4897]: I0228 14:02:33.912615 4897 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-xfwtn" podUID="63e1ea28-bbdc-427c-98f6-41bf30ecf060" containerName="registry-server" probeResult="failure" output=< Feb 28 14:02:33 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 28 14:02:33 crc kubenswrapper[4897]: > Feb 28 14:02:43 crc kubenswrapper[4897]: I0228 14:02:43.904745 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xfwtn" podUID="63e1ea28-bbdc-427c-98f6-41bf30ecf060" containerName="registry-server" probeResult="failure" output=< Feb 28 14:02:43 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 28 14:02:43 crc kubenswrapper[4897]: > Feb 28 14:02:46 crc kubenswrapper[4897]: E0228 14:02:46.062435 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 28 14:02:46 crc kubenswrapper[4897]: E0228 14:02:46.062964 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t994g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-qmq64_openshift-marketplace(35894d31-dc84-4b14-9a5a-08e0bc50ea11): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 14:02:46 crc kubenswrapper[4897]: E0228 14:02:46.064228 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-qmq64" podUID="35894d31-dc84-4b14-9a5a-08e0bc50ea11" Feb 28 14:02:46 crc kubenswrapper[4897]: I0228 14:02:46.778961 4897 scope.go:117] "RemoveContainer" containerID="cc7a1378c3f5453fe3574630478a94e9fa202a3e65488c7424520bd18fd20234" Feb 28 14:02:52 crc kubenswrapper[4897]: I0228 14:02:52.951795 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xfwtn" Feb 28 14:02:53 crc kubenswrapper[4897]: I0228 14:02:53.029689 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xfwtn" Feb 28 14:02:53 crc kubenswrapper[4897]: I0228 14:02:53.209887 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xfwtn"] Feb 28 14:02:54 crc kubenswrapper[4897]: I0228 14:02:54.709051 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xfwtn" podUID="63e1ea28-bbdc-427c-98f6-41bf30ecf060" containerName="registry-server" containerID="cri-o://566d28ac22d35357e145bbbc13753131de145d4df5694e372152443bf8fc4173" gracePeriod=2 Feb 28 14:02:55 crc kubenswrapper[4897]: I0228 14:02:55.322718 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xfwtn" Feb 28 14:02:55 crc kubenswrapper[4897]: I0228 14:02:55.435607 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xrgh\" (UniqueName: \"kubernetes.io/projected/63e1ea28-bbdc-427c-98f6-41bf30ecf060-kube-api-access-7xrgh\") pod \"63e1ea28-bbdc-427c-98f6-41bf30ecf060\" (UID: \"63e1ea28-bbdc-427c-98f6-41bf30ecf060\") " Feb 28 14:02:55 crc kubenswrapper[4897]: I0228 14:02:55.435825 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63e1ea28-bbdc-427c-98f6-41bf30ecf060-catalog-content\") pod \"63e1ea28-bbdc-427c-98f6-41bf30ecf060\" (UID: \"63e1ea28-bbdc-427c-98f6-41bf30ecf060\") " Feb 28 14:02:55 crc kubenswrapper[4897]: I0228 14:02:55.435912 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63e1ea28-bbdc-427c-98f6-41bf30ecf060-utilities\") pod \"63e1ea28-bbdc-427c-98f6-41bf30ecf060\" (UID: \"63e1ea28-bbdc-427c-98f6-41bf30ecf060\") " Feb 28 14:02:55 crc kubenswrapper[4897]: I0228 14:02:55.436598 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63e1ea28-bbdc-427c-98f6-41bf30ecf060-utilities" (OuterVolumeSpecName: "utilities") pod "63e1ea28-bbdc-427c-98f6-41bf30ecf060" (UID: "63e1ea28-bbdc-427c-98f6-41bf30ecf060"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:02:55 crc kubenswrapper[4897]: I0228 14:02:55.445114 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63e1ea28-bbdc-427c-98f6-41bf30ecf060-kube-api-access-7xrgh" (OuterVolumeSpecName: "kube-api-access-7xrgh") pod "63e1ea28-bbdc-427c-98f6-41bf30ecf060" (UID: "63e1ea28-bbdc-427c-98f6-41bf30ecf060"). InnerVolumeSpecName "kube-api-access-7xrgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:02:55 crc kubenswrapper[4897]: I0228 14:02:55.539416 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63e1ea28-bbdc-427c-98f6-41bf30ecf060-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 14:02:55 crc kubenswrapper[4897]: I0228 14:02:55.539792 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xrgh\" (UniqueName: \"kubernetes.io/projected/63e1ea28-bbdc-427c-98f6-41bf30ecf060-kube-api-access-7xrgh\") on node \"crc\" DevicePath \"\"" Feb 28 14:02:55 crc kubenswrapper[4897]: I0228 14:02:55.609155 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63e1ea28-bbdc-427c-98f6-41bf30ecf060-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "63e1ea28-bbdc-427c-98f6-41bf30ecf060" (UID: "63e1ea28-bbdc-427c-98f6-41bf30ecf060"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:02:55 crc kubenswrapper[4897]: I0228 14:02:55.642336 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63e1ea28-bbdc-427c-98f6-41bf30ecf060-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 14:02:55 crc kubenswrapper[4897]: I0228 14:02:55.721168 4897 generic.go:334] "Generic (PLEG): container finished" podID="63e1ea28-bbdc-427c-98f6-41bf30ecf060" containerID="566d28ac22d35357e145bbbc13753131de145d4df5694e372152443bf8fc4173" exitCode=0 Feb 28 14:02:55 crc kubenswrapper[4897]: I0228 14:02:55.721217 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xfwtn" event={"ID":"63e1ea28-bbdc-427c-98f6-41bf30ecf060","Type":"ContainerDied","Data":"566d28ac22d35357e145bbbc13753131de145d4df5694e372152443bf8fc4173"} Feb 28 14:02:55 crc kubenswrapper[4897]: I0228 14:02:55.721254 4897 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-xfwtn" event={"ID":"63e1ea28-bbdc-427c-98f6-41bf30ecf060","Type":"ContainerDied","Data":"bd6e99e32ff93da53d7736174f716b95b49dc4bcf2440c3b00905c631eac6bc2"} Feb 28 14:02:55 crc kubenswrapper[4897]: I0228 14:02:55.721272 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xfwtn" Feb 28 14:02:55 crc kubenswrapper[4897]: I0228 14:02:55.721280 4897 scope.go:117] "RemoveContainer" containerID="566d28ac22d35357e145bbbc13753131de145d4df5694e372152443bf8fc4173" Feb 28 14:02:55 crc kubenswrapper[4897]: I0228 14:02:55.753697 4897 scope.go:117] "RemoveContainer" containerID="cd4161859b8e2b0926e81e8df3e43841b18149592b40f7ab0a1ff9afb4fc1a4a" Feb 28 14:02:55 crc kubenswrapper[4897]: I0228 14:02:55.783710 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xfwtn"] Feb 28 14:02:55 crc kubenswrapper[4897]: I0228 14:02:55.793399 4897 scope.go:117] "RemoveContainer" containerID="f70a5822e1f88d575cc48d2c0e16c6361b67c41d89ca826459affcf3775e92fb" Feb 28 14:02:55 crc kubenswrapper[4897]: I0228 14:02:55.797590 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xfwtn"] Feb 28 14:02:55 crc kubenswrapper[4897]: I0228 14:02:55.861845 4897 scope.go:117] "RemoveContainer" containerID="566d28ac22d35357e145bbbc13753131de145d4df5694e372152443bf8fc4173" Feb 28 14:02:55 crc kubenswrapper[4897]: E0228 14:02:55.862507 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"566d28ac22d35357e145bbbc13753131de145d4df5694e372152443bf8fc4173\": container with ID starting with 566d28ac22d35357e145bbbc13753131de145d4df5694e372152443bf8fc4173 not found: ID does not exist" containerID="566d28ac22d35357e145bbbc13753131de145d4df5694e372152443bf8fc4173" Feb 28 14:02:55 crc kubenswrapper[4897]: I0228 14:02:55.862556 4897 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"566d28ac22d35357e145bbbc13753131de145d4df5694e372152443bf8fc4173"} err="failed to get container status \"566d28ac22d35357e145bbbc13753131de145d4df5694e372152443bf8fc4173\": rpc error: code = NotFound desc = could not find container \"566d28ac22d35357e145bbbc13753131de145d4df5694e372152443bf8fc4173\": container with ID starting with 566d28ac22d35357e145bbbc13753131de145d4df5694e372152443bf8fc4173 not found: ID does not exist" Feb 28 14:02:55 crc kubenswrapper[4897]: I0228 14:02:55.862590 4897 scope.go:117] "RemoveContainer" containerID="cd4161859b8e2b0926e81e8df3e43841b18149592b40f7ab0a1ff9afb4fc1a4a" Feb 28 14:02:55 crc kubenswrapper[4897]: E0228 14:02:55.863163 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd4161859b8e2b0926e81e8df3e43841b18149592b40f7ab0a1ff9afb4fc1a4a\": container with ID starting with cd4161859b8e2b0926e81e8df3e43841b18149592b40f7ab0a1ff9afb4fc1a4a not found: ID does not exist" containerID="cd4161859b8e2b0926e81e8df3e43841b18149592b40f7ab0a1ff9afb4fc1a4a" Feb 28 14:02:55 crc kubenswrapper[4897]: I0228 14:02:55.863211 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd4161859b8e2b0926e81e8df3e43841b18149592b40f7ab0a1ff9afb4fc1a4a"} err="failed to get container status \"cd4161859b8e2b0926e81e8df3e43841b18149592b40f7ab0a1ff9afb4fc1a4a\": rpc error: code = NotFound desc = could not find container \"cd4161859b8e2b0926e81e8df3e43841b18149592b40f7ab0a1ff9afb4fc1a4a\": container with ID starting with cd4161859b8e2b0926e81e8df3e43841b18149592b40f7ab0a1ff9afb4fc1a4a not found: ID does not exist" Feb 28 14:02:55 crc kubenswrapper[4897]: I0228 14:02:55.863244 4897 scope.go:117] "RemoveContainer" containerID="f70a5822e1f88d575cc48d2c0e16c6361b67c41d89ca826459affcf3775e92fb" Feb 28 14:02:55 crc kubenswrapper[4897]: E0228 
14:02:55.863672 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f70a5822e1f88d575cc48d2c0e16c6361b67c41d89ca826459affcf3775e92fb\": container with ID starting with f70a5822e1f88d575cc48d2c0e16c6361b67c41d89ca826459affcf3775e92fb not found: ID does not exist" containerID="f70a5822e1f88d575cc48d2c0e16c6361b67c41d89ca826459affcf3775e92fb" Feb 28 14:02:55 crc kubenswrapper[4897]: I0228 14:02:55.863705 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f70a5822e1f88d575cc48d2c0e16c6361b67c41d89ca826459affcf3775e92fb"} err="failed to get container status \"f70a5822e1f88d575cc48d2c0e16c6361b67c41d89ca826459affcf3775e92fb\": rpc error: code = NotFound desc = could not find container \"f70a5822e1f88d575cc48d2c0e16c6361b67c41d89ca826459affcf3775e92fb\": container with ID starting with f70a5822e1f88d575cc48d2c0e16c6361b67c41d89ca826459affcf3775e92fb not found: ID does not exist" Feb 28 14:02:56 crc kubenswrapper[4897]: I0228 14:02:56.479054 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63e1ea28-bbdc-427c-98f6-41bf30ecf060" path="/var/lib/kubelet/pods/63e1ea28-bbdc-427c-98f6-41bf30ecf060/volumes" Feb 28 14:02:59 crc kubenswrapper[4897]: E0228 14:02:59.461781 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-qmq64" podUID="35894d31-dc84-4b14-9a5a-08e0bc50ea11" Feb 28 14:03:03 crc kubenswrapper[4897]: I0228 14:03:03.370750 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 28 14:03:03 crc kubenswrapper[4897]: I0228 14:03:03.371522 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 14:03:03 crc kubenswrapper[4897]: I0228 14:03:03.371606 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brq22" Feb 28 14:03:03 crc kubenswrapper[4897]: I0228 14:03:03.372650 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"302c29decadd860cf90d5f15ef4f4562333e667b99af9bcb674b496f4e17ed16"} pod="openshift-machine-config-operator/machine-config-daemon-brq22" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 14:03:03 crc kubenswrapper[4897]: I0228 14:03:03.372761 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" containerID="cri-o://302c29decadd860cf90d5f15ef4f4562333e667b99af9bcb674b496f4e17ed16" gracePeriod=600 Feb 28 14:03:03 crc kubenswrapper[4897]: I0228 14:03:03.812401 4897 generic.go:334] "Generic (PLEG): container finished" podID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerID="302c29decadd860cf90d5f15ef4f4562333e667b99af9bcb674b496f4e17ed16" exitCode=0 Feb 28 14:03:03 crc kubenswrapper[4897]: I0228 14:03:03.812503 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" 
event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerDied","Data":"302c29decadd860cf90d5f15ef4f4562333e667b99af9bcb674b496f4e17ed16"} Feb 28 14:03:03 crc kubenswrapper[4897]: I0228 14:03:03.812926 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerStarted","Data":"a565d5be8c3d16a7dd1744beb9c89934975eb5a241878dfe53fef3d98e1be2d2"} Feb 28 14:03:03 crc kubenswrapper[4897]: I0228 14:03:03.812978 4897 scope.go:117] "RemoveContainer" containerID="1d7d52cd4a1b910a0cd15b040805df0f538d171dff5f4ea1ec66df8363a6047e" Feb 28 14:03:13 crc kubenswrapper[4897]: E0228 14:03:13.976937 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 28 14:03:13 crc kubenswrapper[4897]: E0228 14:03:13.978016 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t994g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-qmq64_openshift-marketplace(35894d31-dc84-4b14-9a5a-08e0bc50ea11): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 14:03:13 crc kubenswrapper[4897]: E0228 14:03:13.979738 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-qmq64" podUID="35894d31-dc84-4b14-9a5a-08e0bc50ea11" Feb 28 14:03:26 crc kubenswrapper[4897]: E0228 14:03:26.474677 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-qmq64" podUID="35894d31-dc84-4b14-9a5a-08e0bc50ea11" Feb 28 14:03:38 crc kubenswrapper[4897]: E0228 14:03:38.460873 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-qmq64" podUID="35894d31-dc84-4b14-9a5a-08e0bc50ea11" Feb 28 14:03:40 crc kubenswrapper[4897]: I0228 14:03:40.230081 4897 generic.go:334] "Generic (PLEG): container finished" podID="ff698979-3e20-4b13-9cae-2b0d353cae40" containerID="72a13b7b87f9a1b45687272102c38655858d6c6d73524124aeacd06fbefb44df" exitCode=0 Feb 28 14:03:40 crc kubenswrapper[4897]: I0228 14:03:40.230147 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-497mc" event={"ID":"ff698979-3e20-4b13-9cae-2b0d353cae40","Type":"ContainerDied","Data":"72a13b7b87f9a1b45687272102c38655858d6c6d73524124aeacd06fbefb44df"} Feb 28 14:03:41 crc kubenswrapper[4897]: I0228 14:03:41.744744 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-497mc" Feb 28 14:03:41 crc kubenswrapper[4897]: I0228 14:03:41.890872 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jqzw\" (UniqueName: \"kubernetes.io/projected/ff698979-3e20-4b13-9cae-2b0d353cae40-kube-api-access-2jqzw\") pod \"ff698979-3e20-4b13-9cae-2b0d353cae40\" (UID: \"ff698979-3e20-4b13-9cae-2b0d353cae40\") " Feb 28 14:03:41 crc kubenswrapper[4897]: I0228 14:03:41.890962 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff698979-3e20-4b13-9cae-2b0d353cae40-ssh-key-openstack-edpm-ipam\") pod \"ff698979-3e20-4b13-9cae-2b0d353cae40\" (UID: \"ff698979-3e20-4b13-9cae-2b0d353cae40\") " Feb 28 14:03:41 crc kubenswrapper[4897]: I0228 14:03:41.891125 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff698979-3e20-4b13-9cae-2b0d353cae40-libvirt-combined-ca-bundle\") pod \"ff698979-3e20-4b13-9cae-2b0d353cae40\" (UID: \"ff698979-3e20-4b13-9cae-2b0d353cae40\") " Feb 28 14:03:41 crc kubenswrapper[4897]: I0228 14:03:41.891188 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ff698979-3e20-4b13-9cae-2b0d353cae40-libvirt-secret-0\") pod \"ff698979-3e20-4b13-9cae-2b0d353cae40\" (UID: \"ff698979-3e20-4b13-9cae-2b0d353cae40\") " Feb 28 14:03:41 crc kubenswrapper[4897]: I0228 14:03:41.891220 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff698979-3e20-4b13-9cae-2b0d353cae40-inventory\") pod \"ff698979-3e20-4b13-9cae-2b0d353cae40\" (UID: \"ff698979-3e20-4b13-9cae-2b0d353cae40\") " Feb 28 14:03:41 crc kubenswrapper[4897]: I0228 14:03:41.901227 4897 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff698979-3e20-4b13-9cae-2b0d353cae40-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "ff698979-3e20-4b13-9cae-2b0d353cae40" (UID: "ff698979-3e20-4b13-9cae-2b0d353cae40"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 14:03:41 crc kubenswrapper[4897]: I0228 14:03:41.901630 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff698979-3e20-4b13-9cae-2b0d353cae40-kube-api-access-2jqzw" (OuterVolumeSpecName: "kube-api-access-2jqzw") pod "ff698979-3e20-4b13-9cae-2b0d353cae40" (UID: "ff698979-3e20-4b13-9cae-2b0d353cae40"). InnerVolumeSpecName "kube-api-access-2jqzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:03:41 crc kubenswrapper[4897]: I0228 14:03:41.938188 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff698979-3e20-4b13-9cae-2b0d353cae40-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ff698979-3e20-4b13-9cae-2b0d353cae40" (UID: "ff698979-3e20-4b13-9cae-2b0d353cae40"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 14:03:41 crc kubenswrapper[4897]: I0228 14:03:41.946653 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff698979-3e20-4b13-9cae-2b0d353cae40-inventory" (OuterVolumeSpecName: "inventory") pod "ff698979-3e20-4b13-9cae-2b0d353cae40" (UID: "ff698979-3e20-4b13-9cae-2b0d353cae40"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 14:03:41 crc kubenswrapper[4897]: I0228 14:03:41.948072 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff698979-3e20-4b13-9cae-2b0d353cae40-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "ff698979-3e20-4b13-9cae-2b0d353cae40" (UID: "ff698979-3e20-4b13-9cae-2b0d353cae40"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 14:03:41 crc kubenswrapper[4897]: I0228 14:03:41.994195 4897 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff698979-3e20-4b13-9cae-2b0d353cae40-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 14:03:41 crc kubenswrapper[4897]: I0228 14:03:41.994265 4897 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ff698979-3e20-4b13-9cae-2b0d353cae40-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 28 14:03:41 crc kubenswrapper[4897]: I0228 14:03:41.994282 4897 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff698979-3e20-4b13-9cae-2b0d353cae40-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 14:03:41 crc kubenswrapper[4897]: I0228 14:03:41.994297 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jqzw\" (UniqueName: \"kubernetes.io/projected/ff698979-3e20-4b13-9cae-2b0d353cae40-kube-api-access-2jqzw\") on node \"crc\" DevicePath \"\"" Feb 28 14:03:41 crc kubenswrapper[4897]: I0228 14:03:41.994338 4897 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff698979-3e20-4b13-9cae-2b0d353cae40-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.258180 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-497mc" event={"ID":"ff698979-3e20-4b13-9cae-2b0d353cae40","Type":"ContainerDied","Data":"17597fd44110a1932146b771085bdcf32bb8a8180f1d7b3c03de4dc36d8bce45"} Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.258230 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17597fd44110a1932146b771085bdcf32bb8a8180f1d7b3c03de4dc36d8bce45" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.258695 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-497mc" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.396717 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724"] Feb 28 14:03:42 crc kubenswrapper[4897]: E0228 14:03:42.397453 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63e1ea28-bbdc-427c-98f6-41bf30ecf060" containerName="registry-server" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.397547 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="63e1ea28-bbdc-427c-98f6-41bf30ecf060" containerName="registry-server" Feb 28 14:03:42 crc kubenswrapper[4897]: E0228 14:03:42.397680 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff698979-3e20-4b13-9cae-2b0d353cae40" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.397763 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff698979-3e20-4b13-9cae-2b0d353cae40" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 28 14:03:42 crc kubenswrapper[4897]: E0228 14:03:42.397849 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63e1ea28-bbdc-427c-98f6-41bf30ecf060" containerName="extract-content" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.397923 4897 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="63e1ea28-bbdc-427c-98f6-41bf30ecf060" containerName="extract-content" Feb 28 14:03:42 crc kubenswrapper[4897]: E0228 14:03:42.398009 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63e1ea28-bbdc-427c-98f6-41bf30ecf060" containerName="extract-utilities" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.398074 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="63e1ea28-bbdc-427c-98f6-41bf30ecf060" containerName="extract-utilities" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.398382 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff698979-3e20-4b13-9cae-2b0d353cae40" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.398491 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="63e1ea28-bbdc-427c-98f6-41bf30ecf060" containerName="registry-server" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.399426 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.405420 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.405628 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.405733 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.405773 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.406066 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.406099 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhs8l" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.406190 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.430387 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724"] Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.507966 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ls724\" (UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 
14:03:42.508124 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ls724\" (UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.508181 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzk2k\" (UniqueName: \"kubernetes.io/projected/1fc98763-e64a-41e1-a4ff-0c72faa961fe-kube-api-access-qzk2k\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ls724\" (UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.508227 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ls724\" (UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.508267 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ls724\" (UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.508405 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" 
(UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ls724\" (UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.508455 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ls724\" (UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.508513 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ls724\" (UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.508785 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ls724\" (UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.508938 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-cell1-compute-config-3\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-ls724\" (UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.509019 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ls724\" (UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.610655 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ls724\" (UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.610718 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ls724\" (UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.610754 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ls724\" (UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.610820 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ls724\" (UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.610864 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ls724\" (UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.610881 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzk2k\" (UniqueName: \"kubernetes.io/projected/1fc98763-e64a-41e1-a4ff-0c72faa961fe-kube-api-access-qzk2k\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ls724\" (UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.610904 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ls724\" (UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.610923 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ls724\" 
(UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.610943 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ls724\" (UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.610959 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ls724\" (UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.610999 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ls724\" (UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.612533 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ls724\" (UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.616507 4897 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ls724\" (UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.616705 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ls724\" (UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.619181 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ls724\" (UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.620568 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ls724\" (UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.621396 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ls724\" (UID: 
\"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.621716 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ls724\" (UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.628257 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ls724\" (UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.628347 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ls724\" (UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.628664 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ls724\" (UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.631475 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzk2k\" (UniqueName: 
\"kubernetes.io/projected/1fc98763-e64a-41e1-a4ff-0c72faa961fe-kube-api-access-qzk2k\") pod \"nova-edpm-deployment-openstack-edpm-ipam-ls724\" (UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724" Feb 28 14:03:42 crc kubenswrapper[4897]: I0228 14:03:42.731848 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724" Feb 28 14:03:43 crc kubenswrapper[4897]: W0228 14:03:43.367142 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fc98763_e64a_41e1_a4ff_0c72faa961fe.slice/crio-5ee389dd3c4a2541751de33c67ea9c3790823aa408f595951b24b4b13d85a9e3 WatchSource:0}: Error finding container 5ee389dd3c4a2541751de33c67ea9c3790823aa408f595951b24b4b13d85a9e3: Status 404 returned error can't find the container with id 5ee389dd3c4a2541751de33c67ea9c3790823aa408f595951b24b4b13d85a9e3 Feb 28 14:03:43 crc kubenswrapper[4897]: I0228 14:03:43.367852 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724"] Feb 28 14:03:44 crc kubenswrapper[4897]: I0228 14:03:44.283801 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724" event={"ID":"1fc98763-e64a-41e1-a4ff-0c72faa961fe","Type":"ContainerStarted","Data":"120043b24cf10d52306e419c0d874b3357ab4666715ab9edc78ec480c69f8b82"} Feb 28 14:03:44 crc kubenswrapper[4897]: I0228 14:03:44.284095 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724" event={"ID":"1fc98763-e64a-41e1-a4ff-0c72faa961fe","Type":"ContainerStarted","Data":"5ee389dd3c4a2541751de33c67ea9c3790823aa408f595951b24b4b13d85a9e3"} Feb 28 14:03:44 crc kubenswrapper[4897]: I0228 14:03:44.322444 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724" podStartSLOduration=1.835850401 podStartE2EDuration="2.322416228s" podCreationTimestamp="2026-02-28 14:03:42 +0000 UTC" firstStartedPulling="2026-02-28 14:03:43.369834911 +0000 UTC m=+2837.612155568" lastFinishedPulling="2026-02-28 14:03:43.856400708 +0000 UTC m=+2838.098721395" observedRunningTime="2026-02-28 14:03:44.311446385 +0000 UTC m=+2838.553767092" watchObservedRunningTime="2026-02-28 14:03:44.322416228 +0000 UTC m=+2838.564736915" Feb 28 14:03:52 crc kubenswrapper[4897]: E0228 14:03:52.460475 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-qmq64" podUID="35894d31-dc84-4b14-9a5a-08e0bc50ea11" Feb 28 14:04:00 crc kubenswrapper[4897]: I0228 14:04:00.158714 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538124-s5qrp"] Feb 28 14:04:00 crc kubenswrapper[4897]: I0228 14:04:00.161027 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538124-s5qrp" Feb 28 14:04:00 crc kubenswrapper[4897]: I0228 14:04:00.163499 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 14:04:00 crc kubenswrapper[4897]: I0228 14:04:00.163652 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 14:04:00 crc kubenswrapper[4897]: I0228 14:04:00.165521 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 14:04:00 crc kubenswrapper[4897]: I0228 14:04:00.171740 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538124-s5qrp"] Feb 28 14:04:00 crc kubenswrapper[4897]: I0228 14:04:00.229613 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65j2d\" (UniqueName: \"kubernetes.io/projected/f6297be3-6c7c-40c6-823e-ab3e4233cd7d-kube-api-access-65j2d\") pod \"auto-csr-approver-29538124-s5qrp\" (UID: \"f6297be3-6c7c-40c6-823e-ab3e4233cd7d\") " pod="openshift-infra/auto-csr-approver-29538124-s5qrp" Feb 28 14:04:00 crc kubenswrapper[4897]: I0228 14:04:00.332474 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65j2d\" (UniqueName: \"kubernetes.io/projected/f6297be3-6c7c-40c6-823e-ab3e4233cd7d-kube-api-access-65j2d\") pod \"auto-csr-approver-29538124-s5qrp\" (UID: \"f6297be3-6c7c-40c6-823e-ab3e4233cd7d\") " pod="openshift-infra/auto-csr-approver-29538124-s5qrp" Feb 28 14:04:00 crc kubenswrapper[4897]: I0228 14:04:00.362476 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65j2d\" (UniqueName: \"kubernetes.io/projected/f6297be3-6c7c-40c6-823e-ab3e4233cd7d-kube-api-access-65j2d\") pod \"auto-csr-approver-29538124-s5qrp\" (UID: \"f6297be3-6c7c-40c6-823e-ab3e4233cd7d\") " 
pod="openshift-infra/auto-csr-approver-29538124-s5qrp" Feb 28 14:04:00 crc kubenswrapper[4897]: I0228 14:04:00.482146 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538124-s5qrp" Feb 28 14:04:00 crc kubenswrapper[4897]: I0228 14:04:00.977831 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538124-s5qrp"] Feb 28 14:04:00 crc kubenswrapper[4897]: W0228 14:04:00.986836 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6297be3_6c7c_40c6_823e_ab3e4233cd7d.slice/crio-b5f33d754717be155e191a859c490a3836913ab6229520cff669958868c578f6 WatchSource:0}: Error finding container b5f33d754717be155e191a859c490a3836913ab6229520cff669958868c578f6: Status 404 returned error can't find the container with id b5f33d754717be155e191a859c490a3836913ab6229520cff669958868c578f6 Feb 28 14:04:01 crc kubenswrapper[4897]: I0228 14:04:01.988791 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538124-s5qrp" event={"ID":"f6297be3-6c7c-40c6-823e-ab3e4233cd7d","Type":"ContainerStarted","Data":"b5f33d754717be155e191a859c490a3836913ab6229520cff669958868c578f6"} Feb 28 14:04:03 crc kubenswrapper[4897]: I0228 14:04:03.002949 4897 generic.go:334] "Generic (PLEG): container finished" podID="f6297be3-6c7c-40c6-823e-ab3e4233cd7d" containerID="1a1a87745437860ac1e22bf1d7217912ac33c81bb6ffb45b8ddceecce6c970d2" exitCode=0 Feb 28 14:04:03 crc kubenswrapper[4897]: I0228 14:04:03.003033 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538124-s5qrp" event={"ID":"f6297be3-6c7c-40c6-823e-ab3e4233cd7d","Type":"ContainerDied","Data":"1a1a87745437860ac1e22bf1d7217912ac33c81bb6ffb45b8ddceecce6c970d2"} Feb 28 14:04:04 crc kubenswrapper[4897]: I0228 14:04:04.482247 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538124-s5qrp" Feb 28 14:04:04 crc kubenswrapper[4897]: I0228 14:04:04.619934 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65j2d\" (UniqueName: \"kubernetes.io/projected/f6297be3-6c7c-40c6-823e-ab3e4233cd7d-kube-api-access-65j2d\") pod \"f6297be3-6c7c-40c6-823e-ab3e4233cd7d\" (UID: \"f6297be3-6c7c-40c6-823e-ab3e4233cd7d\") " Feb 28 14:04:04 crc kubenswrapper[4897]: I0228 14:04:04.625921 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6297be3-6c7c-40c6-823e-ab3e4233cd7d-kube-api-access-65j2d" (OuterVolumeSpecName: "kube-api-access-65j2d") pod "f6297be3-6c7c-40c6-823e-ab3e4233cd7d" (UID: "f6297be3-6c7c-40c6-823e-ab3e4233cd7d"). InnerVolumeSpecName "kube-api-access-65j2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:04:04 crc kubenswrapper[4897]: I0228 14:04:04.723284 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65j2d\" (UniqueName: \"kubernetes.io/projected/f6297be3-6c7c-40c6-823e-ab3e4233cd7d-kube-api-access-65j2d\") on node \"crc\" DevicePath \"\"" Feb 28 14:04:05 crc kubenswrapper[4897]: I0228 14:04:05.031635 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qmq64" event={"ID":"35894d31-dc84-4b14-9a5a-08e0bc50ea11","Type":"ContainerStarted","Data":"eecbab544b99670a28fbc23563955f83e547b1af40415cf03fe81c717c036dfa"} Feb 28 14:04:05 crc kubenswrapper[4897]: I0228 14:04:05.034543 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538124-s5qrp" event={"ID":"f6297be3-6c7c-40c6-823e-ab3e4233cd7d","Type":"ContainerDied","Data":"b5f33d754717be155e191a859c490a3836913ab6229520cff669958868c578f6"} Feb 28 14:04:05 crc kubenswrapper[4897]: I0228 14:04:05.034590 4897 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="b5f33d754717be155e191a859c490a3836913ab6229520cff669958868c578f6" Feb 28 14:04:05 crc kubenswrapper[4897]: I0228 14:04:05.034673 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538124-s5qrp" Feb 28 14:04:05 crc kubenswrapper[4897]: I0228 14:04:05.604592 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538118-p7rgq"] Feb 28 14:04:05 crc kubenswrapper[4897]: I0228 14:04:05.615460 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538118-p7rgq"] Feb 28 14:04:06 crc kubenswrapper[4897]: I0228 14:04:06.050530 4897 generic.go:334] "Generic (PLEG): container finished" podID="35894d31-dc84-4b14-9a5a-08e0bc50ea11" containerID="eecbab544b99670a28fbc23563955f83e547b1af40415cf03fe81c717c036dfa" exitCode=0 Feb 28 14:04:06 crc kubenswrapper[4897]: I0228 14:04:06.050628 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qmq64" event={"ID":"35894d31-dc84-4b14-9a5a-08e0bc50ea11","Type":"ContainerDied","Data":"eecbab544b99670a28fbc23563955f83e547b1af40415cf03fe81c717c036dfa"} Feb 28 14:04:06 crc kubenswrapper[4897]: I0228 14:04:06.480062 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fe6b94f-0c7a-4e12-8e80-36f817c2063b" path="/var/lib/kubelet/pods/4fe6b94f-0c7a-4e12-8e80-36f817c2063b/volumes" Feb 28 14:04:07 crc kubenswrapper[4897]: I0228 14:04:07.065045 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qmq64" event={"ID":"35894d31-dc84-4b14-9a5a-08e0bc50ea11","Type":"ContainerStarted","Data":"6a8db35ae3933930f29fd1c59cba975cfd3d459a94ecbcca05d678bafdea7054"} Feb 28 14:04:10 crc kubenswrapper[4897]: I0228 14:04:10.290949 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qmq64" Feb 28 14:04:10 crc 
kubenswrapper[4897]: I0228 14:04:10.291620 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qmq64" Feb 28 14:04:10 crc kubenswrapper[4897]: I0228 14:04:10.387103 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qmq64" Feb 28 14:04:10 crc kubenswrapper[4897]: I0228 14:04:10.416739 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qmq64" podStartSLOduration=6.337002611 podStartE2EDuration="1m41.416714982s" podCreationTimestamp="2026-02-28 14:02:29 +0000 UTC" firstStartedPulling="2026-02-28 14:02:31.43855734 +0000 UTC m=+2765.680878037" lastFinishedPulling="2026-02-28 14:04:06.518269711 +0000 UTC m=+2860.760590408" observedRunningTime="2026-02-28 14:04:07.09222156 +0000 UTC m=+2861.334542227" watchObservedRunningTime="2026-02-28 14:04:10.416714982 +0000 UTC m=+2864.659035659" Feb 28 14:04:11 crc kubenswrapper[4897]: I0228 14:04:11.178864 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qmq64" Feb 28 14:04:11 crc kubenswrapper[4897]: I0228 14:04:11.234796 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qmq64"] Feb 28 14:04:13 crc kubenswrapper[4897]: I0228 14:04:13.131762 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qmq64" podUID="35894d31-dc84-4b14-9a5a-08e0bc50ea11" containerName="registry-server" containerID="cri-o://6a8db35ae3933930f29fd1c59cba975cfd3d459a94ecbcca05d678bafdea7054" gracePeriod=2 Feb 28 14:04:14 crc kubenswrapper[4897]: I0228 14:04:14.145994 4897 generic.go:334] "Generic (PLEG): container finished" podID="35894d31-dc84-4b14-9a5a-08e0bc50ea11" containerID="6a8db35ae3933930f29fd1c59cba975cfd3d459a94ecbcca05d678bafdea7054" exitCode=0 Feb 28 
14:04:14 crc kubenswrapper[4897]: I0228 14:04:14.146321 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qmq64" event={"ID":"35894d31-dc84-4b14-9a5a-08e0bc50ea11","Type":"ContainerDied","Data":"6a8db35ae3933930f29fd1c59cba975cfd3d459a94ecbcca05d678bafdea7054"} Feb 28 14:04:14 crc kubenswrapper[4897]: I0228 14:04:14.146423 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qmq64" event={"ID":"35894d31-dc84-4b14-9a5a-08e0bc50ea11","Type":"ContainerDied","Data":"133df5054e1cbe553935df7091451be01ccc1747ceedcc21811c65947562450d"} Feb 28 14:04:14 crc kubenswrapper[4897]: I0228 14:04:14.146445 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="133df5054e1cbe553935df7091451be01ccc1747ceedcc21811c65947562450d" Feb 28 14:04:14 crc kubenswrapper[4897]: I0228 14:04:14.210996 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qmq64" Feb 28 14:04:14 crc kubenswrapper[4897]: I0228 14:04:14.336547 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35894d31-dc84-4b14-9a5a-08e0bc50ea11-catalog-content\") pod \"35894d31-dc84-4b14-9a5a-08e0bc50ea11\" (UID: \"35894d31-dc84-4b14-9a5a-08e0bc50ea11\") " Feb 28 14:04:14 crc kubenswrapper[4897]: I0228 14:04:14.336613 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35894d31-dc84-4b14-9a5a-08e0bc50ea11-utilities\") pod \"35894d31-dc84-4b14-9a5a-08e0bc50ea11\" (UID: \"35894d31-dc84-4b14-9a5a-08e0bc50ea11\") " Feb 28 14:04:14 crc kubenswrapper[4897]: I0228 14:04:14.336714 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t994g\" (UniqueName: 
\"kubernetes.io/projected/35894d31-dc84-4b14-9a5a-08e0bc50ea11-kube-api-access-t994g\") pod \"35894d31-dc84-4b14-9a5a-08e0bc50ea11\" (UID: \"35894d31-dc84-4b14-9a5a-08e0bc50ea11\") " Feb 28 14:04:14 crc kubenswrapper[4897]: I0228 14:04:14.337477 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35894d31-dc84-4b14-9a5a-08e0bc50ea11-utilities" (OuterVolumeSpecName: "utilities") pod "35894d31-dc84-4b14-9a5a-08e0bc50ea11" (UID: "35894d31-dc84-4b14-9a5a-08e0bc50ea11"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:04:14 crc kubenswrapper[4897]: I0228 14:04:14.342399 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35894d31-dc84-4b14-9a5a-08e0bc50ea11-kube-api-access-t994g" (OuterVolumeSpecName: "kube-api-access-t994g") pod "35894d31-dc84-4b14-9a5a-08e0bc50ea11" (UID: "35894d31-dc84-4b14-9a5a-08e0bc50ea11"). InnerVolumeSpecName "kube-api-access-t994g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:04:14 crc kubenswrapper[4897]: I0228 14:04:14.391996 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35894d31-dc84-4b14-9a5a-08e0bc50ea11-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "35894d31-dc84-4b14-9a5a-08e0bc50ea11" (UID: "35894d31-dc84-4b14-9a5a-08e0bc50ea11"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:04:14 crc kubenswrapper[4897]: I0228 14:04:14.440362 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t994g\" (UniqueName: \"kubernetes.io/projected/35894d31-dc84-4b14-9a5a-08e0bc50ea11-kube-api-access-t994g\") on node \"crc\" DevicePath \"\"" Feb 28 14:04:14 crc kubenswrapper[4897]: I0228 14:04:14.440431 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35894d31-dc84-4b14-9a5a-08e0bc50ea11-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 14:04:14 crc kubenswrapper[4897]: I0228 14:04:14.440453 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35894d31-dc84-4b14-9a5a-08e0bc50ea11-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 14:04:15 crc kubenswrapper[4897]: I0228 14:04:15.158195 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qmq64" Feb 28 14:04:15 crc kubenswrapper[4897]: I0228 14:04:15.192165 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qmq64"] Feb 28 14:04:15 crc kubenswrapper[4897]: I0228 14:04:15.206810 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qmq64"] Feb 28 14:04:16 crc kubenswrapper[4897]: I0228 14:04:16.487182 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35894d31-dc84-4b14-9a5a-08e0bc50ea11" path="/var/lib/kubelet/pods/35894d31-dc84-4b14-9a5a-08e0bc50ea11/volumes" Feb 28 14:04:46 crc kubenswrapper[4897]: I0228 14:04:46.948691 4897 scope.go:117] "RemoveContainer" containerID="bbfd4c048c57b7c8a87ca20f062dc93c7e3959adcac959b20578b1df4cb9b8ff" Feb 28 14:05:03 crc kubenswrapper[4897]: I0228 14:05:03.371420 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 14:05:03 crc kubenswrapper[4897]: I0228 14:05:03.372204 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 14:05:33 crc kubenswrapper[4897]: I0228 14:05:33.370840 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 14:05:33 crc kubenswrapper[4897]: I0228 14:05:33.371468 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 14:05:51 crc kubenswrapper[4897]: I0228 14:05:51.059586 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fb4zm"] Feb 28 14:05:51 crc kubenswrapper[4897]: E0228 14:05:51.061595 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35894d31-dc84-4b14-9a5a-08e0bc50ea11" containerName="registry-server" Feb 28 14:05:51 crc kubenswrapper[4897]: I0228 14:05:51.061801 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="35894d31-dc84-4b14-9a5a-08e0bc50ea11" containerName="registry-server" Feb 28 14:05:51 crc kubenswrapper[4897]: E0228 14:05:51.061831 4897 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35894d31-dc84-4b14-9a5a-08e0bc50ea11" containerName="extract-utilities" Feb 28 14:05:51 crc kubenswrapper[4897]: I0228 14:05:51.061966 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="35894d31-dc84-4b14-9a5a-08e0bc50ea11" containerName="extract-utilities" Feb 28 14:05:51 crc kubenswrapper[4897]: E0228 14:05:51.061994 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35894d31-dc84-4b14-9a5a-08e0bc50ea11" containerName="extract-content" Feb 28 14:05:51 crc kubenswrapper[4897]: I0228 14:05:51.062002 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="35894d31-dc84-4b14-9a5a-08e0bc50ea11" containerName="extract-content" Feb 28 14:05:51 crc kubenswrapper[4897]: E0228 14:05:51.062122 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6297be3-6c7c-40c6-823e-ab3e4233cd7d" containerName="oc" Feb 28 14:05:51 crc kubenswrapper[4897]: I0228 14:05:51.062131 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6297be3-6c7c-40c6-823e-ab3e4233cd7d" containerName="oc" Feb 28 14:05:51 crc kubenswrapper[4897]: I0228 14:05:51.063420 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6297be3-6c7c-40c6-823e-ab3e4233cd7d" containerName="oc" Feb 28 14:05:51 crc kubenswrapper[4897]: I0228 14:05:51.063445 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="35894d31-dc84-4b14-9a5a-08e0bc50ea11" containerName="registry-server" Feb 28 14:05:51 crc kubenswrapper[4897]: I0228 14:05:51.065160 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fb4zm" Feb 28 14:05:51 crc kubenswrapper[4897]: I0228 14:05:51.077949 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fb4zm"] Feb 28 14:05:51 crc kubenswrapper[4897]: I0228 14:05:51.152396 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a44271b7-1430-4d6d-a551-8b66131ac38a-catalog-content\") pod \"certified-operators-fb4zm\" (UID: \"a44271b7-1430-4d6d-a551-8b66131ac38a\") " pod="openshift-marketplace/certified-operators-fb4zm" Feb 28 14:05:51 crc kubenswrapper[4897]: I0228 14:05:51.152492 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a44271b7-1430-4d6d-a551-8b66131ac38a-utilities\") pod \"certified-operators-fb4zm\" (UID: \"a44271b7-1430-4d6d-a551-8b66131ac38a\") " pod="openshift-marketplace/certified-operators-fb4zm" Feb 28 14:05:51 crc kubenswrapper[4897]: I0228 14:05:51.152685 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj9ht\" (UniqueName: \"kubernetes.io/projected/a44271b7-1430-4d6d-a551-8b66131ac38a-kube-api-access-kj9ht\") pod \"certified-operators-fb4zm\" (UID: \"a44271b7-1430-4d6d-a551-8b66131ac38a\") " pod="openshift-marketplace/certified-operators-fb4zm" Feb 28 14:05:51 crc kubenswrapper[4897]: I0228 14:05:51.255155 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj9ht\" (UniqueName: \"kubernetes.io/projected/a44271b7-1430-4d6d-a551-8b66131ac38a-kube-api-access-kj9ht\") pod \"certified-operators-fb4zm\" (UID: \"a44271b7-1430-4d6d-a551-8b66131ac38a\") " pod="openshift-marketplace/certified-operators-fb4zm" Feb 28 14:05:51 crc kubenswrapper[4897]: I0228 14:05:51.255284 4897 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a44271b7-1430-4d6d-a551-8b66131ac38a-catalog-content\") pod \"certified-operators-fb4zm\" (UID: \"a44271b7-1430-4d6d-a551-8b66131ac38a\") " pod="openshift-marketplace/certified-operators-fb4zm" Feb 28 14:05:51 crc kubenswrapper[4897]: I0228 14:05:51.255346 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a44271b7-1430-4d6d-a551-8b66131ac38a-utilities\") pod \"certified-operators-fb4zm\" (UID: \"a44271b7-1430-4d6d-a551-8b66131ac38a\") " pod="openshift-marketplace/certified-operators-fb4zm" Feb 28 14:05:51 crc kubenswrapper[4897]: I0228 14:05:51.255902 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a44271b7-1430-4d6d-a551-8b66131ac38a-catalog-content\") pod \"certified-operators-fb4zm\" (UID: \"a44271b7-1430-4d6d-a551-8b66131ac38a\") " pod="openshift-marketplace/certified-operators-fb4zm" Feb 28 14:05:51 crc kubenswrapper[4897]: I0228 14:05:51.255913 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a44271b7-1430-4d6d-a551-8b66131ac38a-utilities\") pod \"certified-operators-fb4zm\" (UID: \"a44271b7-1430-4d6d-a551-8b66131ac38a\") " pod="openshift-marketplace/certified-operators-fb4zm" Feb 28 14:05:51 crc kubenswrapper[4897]: I0228 14:05:51.275879 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj9ht\" (UniqueName: \"kubernetes.io/projected/a44271b7-1430-4d6d-a551-8b66131ac38a-kube-api-access-kj9ht\") pod \"certified-operators-fb4zm\" (UID: \"a44271b7-1430-4d6d-a551-8b66131ac38a\") " pod="openshift-marketplace/certified-operators-fb4zm" Feb 28 14:05:51 crc kubenswrapper[4897]: I0228 14:05:51.399761 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fb4zm" Feb 28 14:05:51 crc kubenswrapper[4897]: I0228 14:05:51.963864 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fb4zm"] Feb 28 14:05:52 crc kubenswrapper[4897]: I0228 14:05:52.342851 4897 generic.go:334] "Generic (PLEG): container finished" podID="a44271b7-1430-4d6d-a551-8b66131ac38a" containerID="13c94cc413e11d0e37ec037d8543938ca931388afd8fb69accc307d62a567470" exitCode=0 Feb 28 14:05:52 crc kubenswrapper[4897]: I0228 14:05:52.342934 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fb4zm" event={"ID":"a44271b7-1430-4d6d-a551-8b66131ac38a","Type":"ContainerDied","Data":"13c94cc413e11d0e37ec037d8543938ca931388afd8fb69accc307d62a567470"} Feb 28 14:05:52 crc kubenswrapper[4897]: I0228 14:05:52.343333 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fb4zm" event={"ID":"a44271b7-1430-4d6d-a551-8b66131ac38a","Type":"ContainerStarted","Data":"83f1dfc62240b4897564f33cec754e66e9eec6b0df73ba590960874470ac8488"} Feb 28 14:05:54 crc kubenswrapper[4897]: I0228 14:05:54.368481 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fb4zm" event={"ID":"a44271b7-1430-4d6d-a551-8b66131ac38a","Type":"ContainerStarted","Data":"32716f4dd43ed7ba81c4b160ae53c5f1de9dc2e5d535f61b2ff9e4f9dc28e81f"} Feb 28 14:05:56 crc kubenswrapper[4897]: I0228 14:05:56.395587 4897 generic.go:334] "Generic (PLEG): container finished" podID="a44271b7-1430-4d6d-a551-8b66131ac38a" containerID="32716f4dd43ed7ba81c4b160ae53c5f1de9dc2e5d535f61b2ff9e4f9dc28e81f" exitCode=0 Feb 28 14:05:56 crc kubenswrapper[4897]: I0228 14:05:56.395644 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fb4zm" 
event={"ID":"a44271b7-1430-4d6d-a551-8b66131ac38a","Type":"ContainerDied","Data":"32716f4dd43ed7ba81c4b160ae53c5f1de9dc2e5d535f61b2ff9e4f9dc28e81f"} Feb 28 14:05:57 crc kubenswrapper[4897]: I0228 14:05:57.416030 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fb4zm" event={"ID":"a44271b7-1430-4d6d-a551-8b66131ac38a","Type":"ContainerStarted","Data":"fffa43aac581bccf2408668376309b94b642fade3c1a98933f2d3e7f0f66f85b"} Feb 28 14:05:57 crc kubenswrapper[4897]: I0228 14:05:57.448775 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fb4zm" podStartSLOduration=2.953419867 podStartE2EDuration="7.44875378s" podCreationTimestamp="2026-02-28 14:05:50 +0000 UTC" firstStartedPulling="2026-02-28 14:05:52.345018226 +0000 UTC m=+2966.587338923" lastFinishedPulling="2026-02-28 14:05:56.840352169 +0000 UTC m=+2971.082672836" observedRunningTime="2026-02-28 14:05:57.441202304 +0000 UTC m=+2971.683522991" watchObservedRunningTime="2026-02-28 14:05:57.44875378 +0000 UTC m=+2971.691074447" Feb 28 14:06:00 crc kubenswrapper[4897]: I0228 14:06:00.159796 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538126-tjh6m"] Feb 28 14:06:00 crc kubenswrapper[4897]: I0228 14:06:00.162585 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538126-tjh6m" Feb 28 14:06:00 crc kubenswrapper[4897]: I0228 14:06:00.164949 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 14:06:00 crc kubenswrapper[4897]: I0228 14:06:00.165013 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 14:06:00 crc kubenswrapper[4897]: I0228 14:06:00.168819 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 14:06:00 crc kubenswrapper[4897]: I0228 14:06:00.169801 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538126-tjh6m"] Feb 28 14:06:00 crc kubenswrapper[4897]: I0228 14:06:00.246097 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6mwz\" (UniqueName: \"kubernetes.io/projected/60495189-1bea-4438-ad69-f56dd7caa7ac-kube-api-access-p6mwz\") pod \"auto-csr-approver-29538126-tjh6m\" (UID: \"60495189-1bea-4438-ad69-f56dd7caa7ac\") " pod="openshift-infra/auto-csr-approver-29538126-tjh6m" Feb 28 14:06:00 crc kubenswrapper[4897]: I0228 14:06:00.348049 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6mwz\" (UniqueName: \"kubernetes.io/projected/60495189-1bea-4438-ad69-f56dd7caa7ac-kube-api-access-p6mwz\") pod \"auto-csr-approver-29538126-tjh6m\" (UID: \"60495189-1bea-4438-ad69-f56dd7caa7ac\") " pod="openshift-infra/auto-csr-approver-29538126-tjh6m" Feb 28 14:06:00 crc kubenswrapper[4897]: I0228 14:06:00.370647 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6mwz\" (UniqueName: \"kubernetes.io/projected/60495189-1bea-4438-ad69-f56dd7caa7ac-kube-api-access-p6mwz\") pod \"auto-csr-approver-29538126-tjh6m\" (UID: \"60495189-1bea-4438-ad69-f56dd7caa7ac\") " 
pod="openshift-infra/auto-csr-approver-29538126-tjh6m" Feb 28 14:06:00 crc kubenswrapper[4897]: I0228 14:06:00.483008 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538126-tjh6m" Feb 28 14:06:00 crc kubenswrapper[4897]: I0228 14:06:00.963037 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538126-tjh6m"] Feb 28 14:06:00 crc kubenswrapper[4897]: W0228 14:06:00.965861 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60495189_1bea_4438_ad69_f56dd7caa7ac.slice/crio-31f0163550eb3d955f168345fd822c427bf53e004192fe6d1f9a657ccf69b599 WatchSource:0}: Error finding container 31f0163550eb3d955f168345fd822c427bf53e004192fe6d1f9a657ccf69b599: Status 404 returned error can't find the container with id 31f0163550eb3d955f168345fd822c427bf53e004192fe6d1f9a657ccf69b599 Feb 28 14:06:01 crc kubenswrapper[4897]: I0228 14:06:01.400423 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fb4zm" Feb 28 14:06:01 crc kubenswrapper[4897]: I0228 14:06:01.400797 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fb4zm" Feb 28 14:06:01 crc kubenswrapper[4897]: I0228 14:06:01.463851 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538126-tjh6m" event={"ID":"60495189-1bea-4438-ad69-f56dd7caa7ac","Type":"ContainerStarted","Data":"31f0163550eb3d955f168345fd822c427bf53e004192fe6d1f9a657ccf69b599"} Feb 28 14:06:01 crc kubenswrapper[4897]: I0228 14:06:01.470599 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fb4zm" Feb 28 14:06:03 crc kubenswrapper[4897]: I0228 14:06:03.370983 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 14:06:03 crc kubenswrapper[4897]: I0228 14:06:03.371415 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 14:06:03 crc kubenswrapper[4897]: I0228 14:06:03.371496 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brq22" Feb 28 14:06:03 crc kubenswrapper[4897]: I0228 14:06:03.372622 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a565d5be8c3d16a7dd1744beb9c89934975eb5a241878dfe53fef3d98e1be2d2"} pod="openshift-machine-config-operator/machine-config-daemon-brq22" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 14:06:03 crc kubenswrapper[4897]: I0228 14:06:03.372728 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" containerID="cri-o://a565d5be8c3d16a7dd1744beb9c89934975eb5a241878dfe53fef3d98e1be2d2" gracePeriod=600 Feb 28 14:06:03 crc kubenswrapper[4897]: E0228 14:06:03.503572 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:06:04 crc kubenswrapper[4897]: I0228 14:06:04.506652 4897 generic.go:334] "Generic (PLEG): container finished" podID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerID="a565d5be8c3d16a7dd1744beb9c89934975eb5a241878dfe53fef3d98e1be2d2" exitCode=0 Feb 28 14:06:04 crc kubenswrapper[4897]: I0228 14:06:04.506720 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerDied","Data":"a565d5be8c3d16a7dd1744beb9c89934975eb5a241878dfe53fef3d98e1be2d2"} Feb 28 14:06:04 crc kubenswrapper[4897]: I0228 14:06:04.506776 4897 scope.go:117] "RemoveContainer" containerID="302c29decadd860cf90d5f15ef4f4562333e667b99af9bcb674b496f4e17ed16" Feb 28 14:06:04 crc kubenswrapper[4897]: I0228 14:06:04.507701 4897 scope.go:117] "RemoveContainer" containerID="a565d5be8c3d16a7dd1744beb9c89934975eb5a241878dfe53fef3d98e1be2d2" Feb 28 14:06:04 crc kubenswrapper[4897]: E0228 14:06:04.508221 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:06:11 crc kubenswrapper[4897]: I0228 14:06:11.460036 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fb4zm" Feb 28 14:06:11 crc kubenswrapper[4897]: I0228 14:06:11.514450 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-fb4zm"] Feb 28 14:06:11 crc kubenswrapper[4897]: I0228 14:06:11.609170 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fb4zm" podUID="a44271b7-1430-4d6d-a551-8b66131ac38a" containerName="registry-server" containerID="cri-o://fffa43aac581bccf2408668376309b94b642fade3c1a98933f2d3e7f0f66f85b" gracePeriod=2 Feb 28 14:06:12 crc kubenswrapper[4897]: I0228 14:06:12.114097 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fb4zm" Feb 28 14:06:12 crc kubenswrapper[4897]: I0228 14:06:12.122285 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kj9ht\" (UniqueName: \"kubernetes.io/projected/a44271b7-1430-4d6d-a551-8b66131ac38a-kube-api-access-kj9ht\") pod \"a44271b7-1430-4d6d-a551-8b66131ac38a\" (UID: \"a44271b7-1430-4d6d-a551-8b66131ac38a\") " Feb 28 14:06:12 crc kubenswrapper[4897]: I0228 14:06:12.122362 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a44271b7-1430-4d6d-a551-8b66131ac38a-utilities\") pod \"a44271b7-1430-4d6d-a551-8b66131ac38a\" (UID: \"a44271b7-1430-4d6d-a551-8b66131ac38a\") " Feb 28 14:06:12 crc kubenswrapper[4897]: I0228 14:06:12.122469 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a44271b7-1430-4d6d-a551-8b66131ac38a-catalog-content\") pod \"a44271b7-1430-4d6d-a551-8b66131ac38a\" (UID: \"a44271b7-1430-4d6d-a551-8b66131ac38a\") " Feb 28 14:06:12 crc kubenswrapper[4897]: I0228 14:06:12.123604 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a44271b7-1430-4d6d-a551-8b66131ac38a-utilities" (OuterVolumeSpecName: "utilities") pod "a44271b7-1430-4d6d-a551-8b66131ac38a" (UID: 
"a44271b7-1430-4d6d-a551-8b66131ac38a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:06:12 crc kubenswrapper[4897]: I0228 14:06:12.136803 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a44271b7-1430-4d6d-a551-8b66131ac38a-kube-api-access-kj9ht" (OuterVolumeSpecName: "kube-api-access-kj9ht") pod "a44271b7-1430-4d6d-a551-8b66131ac38a" (UID: "a44271b7-1430-4d6d-a551-8b66131ac38a"). InnerVolumeSpecName "kube-api-access-kj9ht". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:06:12 crc kubenswrapper[4897]: I0228 14:06:12.203400 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a44271b7-1430-4d6d-a551-8b66131ac38a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a44271b7-1430-4d6d-a551-8b66131ac38a" (UID: "a44271b7-1430-4d6d-a551-8b66131ac38a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:06:12 crc kubenswrapper[4897]: I0228 14:06:12.224068 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kj9ht\" (UniqueName: \"kubernetes.io/projected/a44271b7-1430-4d6d-a551-8b66131ac38a-kube-api-access-kj9ht\") on node \"crc\" DevicePath \"\"" Feb 28 14:06:12 crc kubenswrapper[4897]: I0228 14:06:12.224099 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a44271b7-1430-4d6d-a551-8b66131ac38a-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 14:06:12 crc kubenswrapper[4897]: I0228 14:06:12.224109 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a44271b7-1430-4d6d-a551-8b66131ac38a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 14:06:12 crc kubenswrapper[4897]: I0228 14:06:12.621814 4897 generic.go:334] "Generic (PLEG): container finished" 
podID="a44271b7-1430-4d6d-a551-8b66131ac38a" containerID="fffa43aac581bccf2408668376309b94b642fade3c1a98933f2d3e7f0f66f85b" exitCode=0 Feb 28 14:06:12 crc kubenswrapper[4897]: I0228 14:06:12.621893 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fb4zm" Feb 28 14:06:12 crc kubenswrapper[4897]: I0228 14:06:12.621964 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fb4zm" event={"ID":"a44271b7-1430-4d6d-a551-8b66131ac38a","Type":"ContainerDied","Data":"fffa43aac581bccf2408668376309b94b642fade3c1a98933f2d3e7f0f66f85b"} Feb 28 14:06:12 crc kubenswrapper[4897]: I0228 14:06:12.623008 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fb4zm" event={"ID":"a44271b7-1430-4d6d-a551-8b66131ac38a","Type":"ContainerDied","Data":"83f1dfc62240b4897564f33cec754e66e9eec6b0df73ba590960874470ac8488"} Feb 28 14:06:12 crc kubenswrapper[4897]: I0228 14:06:12.623048 4897 scope.go:117] "RemoveContainer" containerID="fffa43aac581bccf2408668376309b94b642fade3c1a98933f2d3e7f0f66f85b" Feb 28 14:06:12 crc kubenswrapper[4897]: I0228 14:06:12.652981 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fb4zm"] Feb 28 14:06:12 crc kubenswrapper[4897]: I0228 14:06:12.659961 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fb4zm"] Feb 28 14:06:12 crc kubenswrapper[4897]: I0228 14:06:12.663166 4897 scope.go:117] "RemoveContainer" containerID="32716f4dd43ed7ba81c4b160ae53c5f1de9dc2e5d535f61b2ff9e4f9dc28e81f" Feb 28 14:06:12 crc kubenswrapper[4897]: I0228 14:06:12.702575 4897 scope.go:117] "RemoveContainer" containerID="13c94cc413e11d0e37ec037d8543938ca931388afd8fb69accc307d62a567470" Feb 28 14:06:12 crc kubenswrapper[4897]: I0228 14:06:12.760029 4897 scope.go:117] "RemoveContainer" 
containerID="fffa43aac581bccf2408668376309b94b642fade3c1a98933f2d3e7f0f66f85b" Feb 28 14:06:12 crc kubenswrapper[4897]: E0228 14:06:12.763031 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fffa43aac581bccf2408668376309b94b642fade3c1a98933f2d3e7f0f66f85b\": container with ID starting with fffa43aac581bccf2408668376309b94b642fade3c1a98933f2d3e7f0f66f85b not found: ID does not exist" containerID="fffa43aac581bccf2408668376309b94b642fade3c1a98933f2d3e7f0f66f85b" Feb 28 14:06:12 crc kubenswrapper[4897]: I0228 14:06:12.763067 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fffa43aac581bccf2408668376309b94b642fade3c1a98933f2d3e7f0f66f85b"} err="failed to get container status \"fffa43aac581bccf2408668376309b94b642fade3c1a98933f2d3e7f0f66f85b\": rpc error: code = NotFound desc = could not find container \"fffa43aac581bccf2408668376309b94b642fade3c1a98933f2d3e7f0f66f85b\": container with ID starting with fffa43aac581bccf2408668376309b94b642fade3c1a98933f2d3e7f0f66f85b not found: ID does not exist" Feb 28 14:06:12 crc kubenswrapper[4897]: I0228 14:06:12.763092 4897 scope.go:117] "RemoveContainer" containerID="32716f4dd43ed7ba81c4b160ae53c5f1de9dc2e5d535f61b2ff9e4f9dc28e81f" Feb 28 14:06:12 crc kubenswrapper[4897]: E0228 14:06:12.763615 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32716f4dd43ed7ba81c4b160ae53c5f1de9dc2e5d535f61b2ff9e4f9dc28e81f\": container with ID starting with 32716f4dd43ed7ba81c4b160ae53c5f1de9dc2e5d535f61b2ff9e4f9dc28e81f not found: ID does not exist" containerID="32716f4dd43ed7ba81c4b160ae53c5f1de9dc2e5d535f61b2ff9e4f9dc28e81f" Feb 28 14:06:12 crc kubenswrapper[4897]: I0228 14:06:12.763809 4897 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"32716f4dd43ed7ba81c4b160ae53c5f1de9dc2e5d535f61b2ff9e4f9dc28e81f"} err="failed to get container status \"32716f4dd43ed7ba81c4b160ae53c5f1de9dc2e5d535f61b2ff9e4f9dc28e81f\": rpc error: code = NotFound desc = could not find container \"32716f4dd43ed7ba81c4b160ae53c5f1de9dc2e5d535f61b2ff9e4f9dc28e81f\": container with ID starting with 32716f4dd43ed7ba81c4b160ae53c5f1de9dc2e5d535f61b2ff9e4f9dc28e81f not found: ID does not exist" Feb 28 14:06:12 crc kubenswrapper[4897]: I0228 14:06:12.763955 4897 scope.go:117] "RemoveContainer" containerID="13c94cc413e11d0e37ec037d8543938ca931388afd8fb69accc307d62a567470" Feb 28 14:06:12 crc kubenswrapper[4897]: E0228 14:06:12.764651 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13c94cc413e11d0e37ec037d8543938ca931388afd8fb69accc307d62a567470\": container with ID starting with 13c94cc413e11d0e37ec037d8543938ca931388afd8fb69accc307d62a567470 not found: ID does not exist" containerID="13c94cc413e11d0e37ec037d8543938ca931388afd8fb69accc307d62a567470" Feb 28 14:06:12 crc kubenswrapper[4897]: I0228 14:06:12.764803 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13c94cc413e11d0e37ec037d8543938ca931388afd8fb69accc307d62a567470"} err="failed to get container status \"13c94cc413e11d0e37ec037d8543938ca931388afd8fb69accc307d62a567470\": rpc error: code = NotFound desc = could not find container \"13c94cc413e11d0e37ec037d8543938ca931388afd8fb69accc307d62a567470\": container with ID starting with 13c94cc413e11d0e37ec037d8543938ca931388afd8fb69accc307d62a567470 not found: ID does not exist" Feb 28 14:06:14 crc kubenswrapper[4897]: I0228 14:06:14.480683 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a44271b7-1430-4d6d-a551-8b66131ac38a" path="/var/lib/kubelet/pods/a44271b7-1430-4d6d-a551-8b66131ac38a/volumes" Feb 28 14:06:19 crc kubenswrapper[4897]: I0228 
14:06:19.456180 4897 scope.go:117] "RemoveContainer" containerID="a565d5be8c3d16a7dd1744beb9c89934975eb5a241878dfe53fef3d98e1be2d2" Feb 28 14:06:19 crc kubenswrapper[4897]: E0228 14:06:19.456978 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:06:23 crc kubenswrapper[4897]: I0228 14:06:23.743533 4897 generic.go:334] "Generic (PLEG): container finished" podID="1fc98763-e64a-41e1-a4ff-0c72faa961fe" containerID="120043b24cf10d52306e419c0d874b3357ab4666715ab9edc78ec480c69f8b82" exitCode=0 Feb 28 14:06:23 crc kubenswrapper[4897]: I0228 14:06:23.743592 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724" event={"ID":"1fc98763-e64a-41e1-a4ff-0c72faa961fe","Type":"ContainerDied","Data":"120043b24cf10d52306e419c0d874b3357ab4666715ab9edc78ec480c69f8b82"} Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.254143 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724" Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.313297 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-migration-ssh-key-0\") pod \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\" (UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.313443 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-inventory\") pod \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\" (UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.313481 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-cell1-compute-config-2\") pod \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\" (UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.313516 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-combined-ca-bundle\") pod \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\" (UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.313568 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-cell1-compute-config-1\") pod \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\" (UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.313673 4897 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-cell1-compute-config-3\") pod \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\" (UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.313843 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-cell1-compute-config-0\") pod \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\" (UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.313933 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-ssh-key-openstack-edpm-ipam\") pod \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\" (UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.314189 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzk2k\" (UniqueName: \"kubernetes.io/projected/1fc98763-e64a-41e1-a4ff-0c72faa961fe-kube-api-access-qzk2k\") pod \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\" (UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.314376 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-extra-config-0\") pod \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\" (UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.314605 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-migration-ssh-key-1\") pod \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\" (UID: \"1fc98763-e64a-41e1-a4ff-0c72faa961fe\") " Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.328871 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fc98763-e64a-41e1-a4ff-0c72faa961fe-kube-api-access-qzk2k" (OuterVolumeSpecName: "kube-api-access-qzk2k") pod "1fc98763-e64a-41e1-a4ff-0c72faa961fe" (UID: "1fc98763-e64a-41e1-a4ff-0c72faa961fe"). InnerVolumeSpecName "kube-api-access-qzk2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.351541 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "1fc98763-e64a-41e1-a4ff-0c72faa961fe" (UID: "1fc98763-e64a-41e1-a4ff-0c72faa961fe"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.363532 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-inventory" (OuterVolumeSpecName: "inventory") pod "1fc98763-e64a-41e1-a4ff-0c72faa961fe" (UID: "1fc98763-e64a-41e1-a4ff-0c72faa961fe"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.371193 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "1fc98763-e64a-41e1-a4ff-0c72faa961fe" (UID: "1fc98763-e64a-41e1-a4ff-0c72faa961fe"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.376949 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "1fc98763-e64a-41e1-a4ff-0c72faa961fe" (UID: "1fc98763-e64a-41e1-a4ff-0c72faa961fe"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.386087 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "1fc98763-e64a-41e1-a4ff-0c72faa961fe" (UID: "1fc98763-e64a-41e1-a4ff-0c72faa961fe"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.387046 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "1fc98763-e64a-41e1-a4ff-0c72faa961fe" (UID: "1fc98763-e64a-41e1-a4ff-0c72faa961fe"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.388600 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "1fc98763-e64a-41e1-a4ff-0c72faa961fe" (UID: "1fc98763-e64a-41e1-a4ff-0c72faa961fe"). InnerVolumeSpecName "nova-cell1-compute-config-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.390387 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1fc98763-e64a-41e1-a4ff-0c72faa961fe" (UID: "1fc98763-e64a-41e1-a4ff-0c72faa961fe"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.400929 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "1fc98763-e64a-41e1-a4ff-0c72faa961fe" (UID: "1fc98763-e64a-41e1-a4ff-0c72faa961fe"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.400935 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "1fc98763-e64a-41e1-a4ff-0c72faa961fe" (UID: "1fc98763-e64a-41e1-a4ff-0c72faa961fe"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.417467 4897 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.417501 4897 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.417514 4897 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.417527 4897 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.417539 4897 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.417550 4897 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.417564 4897 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: 
\"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.417575 4897 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.417586 4897 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1fc98763-e64a-41e1-a4ff-0c72faa961fe-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.417598 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzk2k\" (UniqueName: \"kubernetes.io/projected/1fc98763-e64a-41e1-a4ff-0c72faa961fe-kube-api-access-qzk2k\") on node \"crc\" DevicePath \"\"" Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.417609 4897 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/1fc98763-e64a-41e1-a4ff-0c72faa961fe-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.776014 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724" event={"ID":"1fc98763-e64a-41e1-a4ff-0c72faa961fe","Type":"ContainerDied","Data":"5ee389dd3c4a2541751de33c67ea9c3790823aa408f595951b24b4b13d85a9e3"} Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.776054 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ee389dd3c4a2541751de33c67ea9c3790823aa408f595951b24b4b13d85a9e3" Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.776108 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-ls724" Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.892285 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb"] Feb 28 14:06:25 crc kubenswrapper[4897]: E0228 14:06:25.893002 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a44271b7-1430-4d6d-a551-8b66131ac38a" containerName="extract-content" Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.893152 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="a44271b7-1430-4d6d-a551-8b66131ac38a" containerName="extract-content" Feb 28 14:06:25 crc kubenswrapper[4897]: E0228 14:06:25.893271 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a44271b7-1430-4d6d-a551-8b66131ac38a" containerName="registry-server" Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.893360 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="a44271b7-1430-4d6d-a551-8b66131ac38a" containerName="registry-server" Feb 28 14:06:25 crc kubenswrapper[4897]: E0228 14:06:25.893446 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fc98763-e64a-41e1-a4ff-0c72faa961fe" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.893528 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fc98763-e64a-41e1-a4ff-0c72faa961fe" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 28 14:06:25 crc kubenswrapper[4897]: E0228 14:06:25.893605 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a44271b7-1430-4d6d-a551-8b66131ac38a" containerName="extract-utilities" Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.893676 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="a44271b7-1430-4d6d-a551-8b66131ac38a" containerName="extract-utilities" Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.893989 4897 
memory_manager.go:354] "RemoveStaleState removing state" podUID="1fc98763-e64a-41e1-a4ff-0c72faa961fe" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.894081 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="a44271b7-1430-4d6d-a551-8b66131ac38a" containerName="registry-server" Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.894967 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb" Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.896989 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.898019 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.898792 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.899863 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhs8l" Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.900114 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 14:06:25 crc kubenswrapper[4897]: I0228 14:06:25.928790 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb"] Feb 28 14:06:26 crc kubenswrapper[4897]: I0228 14:06:26.030900 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8356fe56-9405-43be-8d6e-3d71c9906864-ceilometer-compute-config-data-0\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb\" (UID: \"8356fe56-9405-43be-8d6e-3d71c9906864\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb" Feb 28 14:06:26 crc kubenswrapper[4897]: I0228 14:06:26.031001 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8356fe56-9405-43be-8d6e-3d71c9906864-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb\" (UID: \"8356fe56-9405-43be-8d6e-3d71c9906864\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb" Feb 28 14:06:26 crc kubenswrapper[4897]: I0228 14:06:26.031073 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8356fe56-9405-43be-8d6e-3d71c9906864-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb\" (UID: \"8356fe56-9405-43be-8d6e-3d71c9906864\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb" Feb 28 14:06:26 crc kubenswrapper[4897]: I0228 14:06:26.031113 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8356fe56-9405-43be-8d6e-3d71c9906864-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb\" (UID: \"8356fe56-9405-43be-8d6e-3d71c9906864\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb" Feb 28 14:06:26 crc kubenswrapper[4897]: I0228 14:06:26.031144 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8356fe56-9405-43be-8d6e-3d71c9906864-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb\" (UID: \"8356fe56-9405-43be-8d6e-3d71c9906864\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb" Feb 28 14:06:26 crc kubenswrapper[4897]: I0228 14:06:26.031175 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8356fe56-9405-43be-8d6e-3d71c9906864-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb\" (UID: \"8356fe56-9405-43be-8d6e-3d71c9906864\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb" Feb 28 14:06:26 crc kubenswrapper[4897]: I0228 14:06:26.031229 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc4tk\" (UniqueName: \"kubernetes.io/projected/8356fe56-9405-43be-8d6e-3d71c9906864-kube-api-access-bc4tk\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb\" (UID: \"8356fe56-9405-43be-8d6e-3d71c9906864\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb" Feb 28 14:06:26 crc kubenswrapper[4897]: I0228 14:06:26.133239 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8356fe56-9405-43be-8d6e-3d71c9906864-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb\" (UID: \"8356fe56-9405-43be-8d6e-3d71c9906864\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb" Feb 28 14:06:26 crc kubenswrapper[4897]: I0228 14:06:26.133524 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8356fe56-9405-43be-8d6e-3d71c9906864-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb\" (UID: \"8356fe56-9405-43be-8d6e-3d71c9906864\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb" Feb 28 14:06:26 crc kubenswrapper[4897]: I0228 14:06:26.133695 4897 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8356fe56-9405-43be-8d6e-3d71c9906864-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb\" (UID: \"8356fe56-9405-43be-8d6e-3d71c9906864\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb" Feb 28 14:06:26 crc kubenswrapper[4897]: I0228 14:06:26.133825 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8356fe56-9405-43be-8d6e-3d71c9906864-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb\" (UID: \"8356fe56-9405-43be-8d6e-3d71c9906864\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb" Feb 28 14:06:26 crc kubenswrapper[4897]: I0228 14:06:26.133956 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8356fe56-9405-43be-8d6e-3d71c9906864-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb\" (UID: \"8356fe56-9405-43be-8d6e-3d71c9906864\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb" Feb 28 14:06:26 crc kubenswrapper[4897]: I0228 14:06:26.134115 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc4tk\" (UniqueName: \"kubernetes.io/projected/8356fe56-9405-43be-8d6e-3d71c9906864-kube-api-access-bc4tk\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb\" (UID: \"8356fe56-9405-43be-8d6e-3d71c9906864\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb" Feb 28 14:06:26 crc kubenswrapper[4897]: I0228 14:06:26.134242 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/8356fe56-9405-43be-8d6e-3d71c9906864-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb\" (UID: \"8356fe56-9405-43be-8d6e-3d71c9906864\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb" Feb 28 14:06:26 crc kubenswrapper[4897]: I0228 14:06:26.138391 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8356fe56-9405-43be-8d6e-3d71c9906864-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb\" (UID: \"8356fe56-9405-43be-8d6e-3d71c9906864\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb" Feb 28 14:06:26 crc kubenswrapper[4897]: I0228 14:06:26.138479 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8356fe56-9405-43be-8d6e-3d71c9906864-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb\" (UID: \"8356fe56-9405-43be-8d6e-3d71c9906864\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb" Feb 28 14:06:26 crc kubenswrapper[4897]: I0228 14:06:26.139043 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8356fe56-9405-43be-8d6e-3d71c9906864-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb\" (UID: \"8356fe56-9405-43be-8d6e-3d71c9906864\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb" Feb 28 14:06:26 crc kubenswrapper[4897]: I0228 14:06:26.139298 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8356fe56-9405-43be-8d6e-3d71c9906864-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb\" (UID: \"8356fe56-9405-43be-8d6e-3d71c9906864\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb" Feb 28 14:06:26 crc kubenswrapper[4897]: I0228 14:06:26.148934 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8356fe56-9405-43be-8d6e-3d71c9906864-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb\" (UID: \"8356fe56-9405-43be-8d6e-3d71c9906864\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb" Feb 28 14:06:26 crc kubenswrapper[4897]: I0228 14:06:26.151273 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8356fe56-9405-43be-8d6e-3d71c9906864-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb\" (UID: \"8356fe56-9405-43be-8d6e-3d71c9906864\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb" Feb 28 14:06:26 crc kubenswrapper[4897]: I0228 14:06:26.157452 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc4tk\" (UniqueName: \"kubernetes.io/projected/8356fe56-9405-43be-8d6e-3d71c9906864-kube-api-access-bc4tk\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb\" (UID: \"8356fe56-9405-43be-8d6e-3d71c9906864\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb" Feb 28 14:06:26 crc kubenswrapper[4897]: I0228 14:06:26.215019 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb" Feb 28 14:06:26 crc kubenswrapper[4897]: I0228 14:06:26.814913 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb"] Feb 28 14:06:27 crc kubenswrapper[4897]: I0228 14:06:27.216294 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 14:06:27 crc kubenswrapper[4897]: I0228 14:06:27.797161 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb" event={"ID":"8356fe56-9405-43be-8d6e-3d71c9906864","Type":"ContainerStarted","Data":"b90e8d844bfd7ebbf9f5daed290b161111f2ab6486043d5243a5290bf4b73096"} Feb 28 14:06:27 crc kubenswrapper[4897]: I0228 14:06:27.797215 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb" event={"ID":"8356fe56-9405-43be-8d6e-3d71c9906864","Type":"ContainerStarted","Data":"467328c74c25084a11bb6644ca6dda34c8dc35625173f187bc5ad1ba4fe0a32d"} Feb 28 14:06:27 crc kubenswrapper[4897]: I0228 14:06:27.821974 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb" podStartSLOduration=2.42696429 podStartE2EDuration="2.821957487s" podCreationTimestamp="2026-02-28 14:06:25 +0000 UTC" firstStartedPulling="2026-02-28 14:06:26.81809 +0000 UTC m=+3001.060410657" lastFinishedPulling="2026-02-28 14:06:27.213083167 +0000 UTC m=+3001.455403854" observedRunningTime="2026-02-28 14:06:27.817937473 +0000 UTC m=+3002.060258130" watchObservedRunningTime="2026-02-28 14:06:27.821957487 +0000 UTC m=+3002.064278144" Feb 28 14:06:30 crc kubenswrapper[4897]: I0228 14:06:30.456252 4897 scope.go:117] "RemoveContainer" containerID="a565d5be8c3d16a7dd1744beb9c89934975eb5a241878dfe53fef3d98e1be2d2" Feb 28 14:06:30 crc kubenswrapper[4897]: E0228 
14:06:30.456814 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:06:43 crc kubenswrapper[4897]: I0228 14:06:43.456592 4897 scope.go:117] "RemoveContainer" containerID="a565d5be8c3d16a7dd1744beb9c89934975eb5a241878dfe53fef3d98e1be2d2" Feb 28 14:06:43 crc kubenswrapper[4897]: E0228 14:06:43.458491 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:06:58 crc kubenswrapper[4897]: I0228 14:06:58.456928 4897 scope.go:117] "RemoveContainer" containerID="a565d5be8c3d16a7dd1744beb9c89934975eb5a241878dfe53fef3d98e1be2d2" Feb 28 14:06:58 crc kubenswrapper[4897]: E0228 14:06:58.457806 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:07:02 crc kubenswrapper[4897]: E0228 14:07:02.245558 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading 
signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 28 14:07:02 crc kubenswrapper[4897]: E0228 14:07:02.246147 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 14:07:02 crc kubenswrapper[4897]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 28 14:07:02 crc kubenswrapper[4897]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p6mwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29538126-tjh6m_openshift-infra(60495189-1bea-4438-ad69-f56dd7caa7ac): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 28 14:07:02 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 14:07:02 crc kubenswrapper[4897]: E0228 14:07:02.247363 4897 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29538126-tjh6m" podUID="60495189-1bea-4438-ad69-f56dd7caa7ac" Feb 28 14:07:03 crc kubenswrapper[4897]: E0228 14:07:03.254022 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538126-tjh6m" podUID="60495189-1bea-4438-ad69-f56dd7caa7ac" Feb 28 14:07:09 crc kubenswrapper[4897]: I0228 14:07:09.457122 4897 scope.go:117] "RemoveContainer" containerID="a565d5be8c3d16a7dd1744beb9c89934975eb5a241878dfe53fef3d98e1be2d2" Feb 28 14:07:09 crc kubenswrapper[4897]: E0228 14:07:09.459027 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:07:16 crc kubenswrapper[4897]: I0228 14:07:16.477548 4897 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 14:07:17 crc kubenswrapper[4897]: E0228 14:07:17.348274 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 28 14:07:17 crc kubenswrapper[4897]: E0228 14:07:17.348833 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 14:07:17 crc kubenswrapper[4897]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 28 14:07:17 crc kubenswrapper[4897]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p6mwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29538126-tjh6m_openshift-infra(60495189-1bea-4438-ad69-f56dd7caa7ac): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 28 14:07:17 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 14:07:17 crc kubenswrapper[4897]: E0228 14:07:17.350745 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29538126-tjh6m" podUID="60495189-1bea-4438-ad69-f56dd7caa7ac" Feb 28 14:07:21 crc kubenswrapper[4897]: I0228 14:07:21.455877 4897 scope.go:117] "RemoveContainer" containerID="a565d5be8c3d16a7dd1744beb9c89934975eb5a241878dfe53fef3d98e1be2d2" Feb 28 14:07:21 crc kubenswrapper[4897]: E0228 14:07:21.456736 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:07:29 crc kubenswrapper[4897]: E0228 14:07:29.461150 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538126-tjh6m" podUID="60495189-1bea-4438-ad69-f56dd7caa7ac" Feb 28 14:07:34 crc kubenswrapper[4897]: I0228 14:07:34.457100 4897 scope.go:117] "RemoveContainer" containerID="a565d5be8c3d16a7dd1744beb9c89934975eb5a241878dfe53fef3d98e1be2d2" Feb 28 14:07:34 crc kubenswrapper[4897]: E0228 14:07:34.458128 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:07:45 crc kubenswrapper[4897]: E0228 14:07:45.415976 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 28 14:07:45 crc kubenswrapper[4897]: E0228 14:07:45.418847 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 14:07:45 crc kubenswrapper[4897]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 28 14:07:45 crc kubenswrapper[4897]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p6mwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29538126-tjh6m_openshift-infra(60495189-1bea-4438-ad69-f56dd7caa7ac): ErrImagePull: copying 
system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 28 14:07:45 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 14:07:45 crc kubenswrapper[4897]: E0228 14:07:45.420509 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29538126-tjh6m" podUID="60495189-1bea-4438-ad69-f56dd7caa7ac" Feb 28 14:07:47 crc kubenswrapper[4897]: I0228 14:07:47.457109 4897 scope.go:117] "RemoveContainer" containerID="a565d5be8c3d16a7dd1744beb9c89934975eb5a241878dfe53fef3d98e1be2d2" Feb 28 14:07:47 crc kubenswrapper[4897]: E0228 14:07:47.458030 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:07:48 crc kubenswrapper[4897]: I0228 14:07:48.845653 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wh49x"] Feb 28 14:07:48 crc kubenswrapper[4897]: I0228 14:07:48.848793 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wh49x" Feb 28 14:07:48 crc kubenswrapper[4897]: I0228 14:07:48.867726 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wh49x"] Feb 28 14:07:48 crc kubenswrapper[4897]: I0228 14:07:48.972299 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3eccae89-322c-4866-a4cb-044537aaa45a-utilities\") pod \"redhat-marketplace-wh49x\" (UID: \"3eccae89-322c-4866-a4cb-044537aaa45a\") " pod="openshift-marketplace/redhat-marketplace-wh49x" Feb 28 14:07:48 crc kubenswrapper[4897]: I0228 14:07:48.972612 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q58x\" (UniqueName: \"kubernetes.io/projected/3eccae89-322c-4866-a4cb-044537aaa45a-kube-api-access-6q58x\") pod \"redhat-marketplace-wh49x\" (UID: \"3eccae89-322c-4866-a4cb-044537aaa45a\") " pod="openshift-marketplace/redhat-marketplace-wh49x" Feb 28 14:07:48 crc kubenswrapper[4897]: I0228 14:07:48.972642 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3eccae89-322c-4866-a4cb-044537aaa45a-catalog-content\") pod \"redhat-marketplace-wh49x\" (UID: \"3eccae89-322c-4866-a4cb-044537aaa45a\") " pod="openshift-marketplace/redhat-marketplace-wh49x" Feb 28 14:07:49 crc kubenswrapper[4897]: I0228 14:07:49.074715 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q58x\" (UniqueName: \"kubernetes.io/projected/3eccae89-322c-4866-a4cb-044537aaa45a-kube-api-access-6q58x\") pod \"redhat-marketplace-wh49x\" (UID: \"3eccae89-322c-4866-a4cb-044537aaa45a\") " pod="openshift-marketplace/redhat-marketplace-wh49x" Feb 28 14:07:49 crc kubenswrapper[4897]: I0228 14:07:49.074775 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3eccae89-322c-4866-a4cb-044537aaa45a-catalog-content\") pod \"redhat-marketplace-wh49x\" (UID: \"3eccae89-322c-4866-a4cb-044537aaa45a\") " pod="openshift-marketplace/redhat-marketplace-wh49x" Feb 28 14:07:49 crc kubenswrapper[4897]: I0228 14:07:49.074921 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3eccae89-322c-4866-a4cb-044537aaa45a-utilities\") pod \"redhat-marketplace-wh49x\" (UID: \"3eccae89-322c-4866-a4cb-044537aaa45a\") " pod="openshift-marketplace/redhat-marketplace-wh49x" Feb 28 14:07:49 crc kubenswrapper[4897]: I0228 14:07:49.075319 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3eccae89-322c-4866-a4cb-044537aaa45a-catalog-content\") pod \"redhat-marketplace-wh49x\" (UID: \"3eccae89-322c-4866-a4cb-044537aaa45a\") " pod="openshift-marketplace/redhat-marketplace-wh49x" Feb 28 14:07:49 crc kubenswrapper[4897]: I0228 14:07:49.075389 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3eccae89-322c-4866-a4cb-044537aaa45a-utilities\") pod \"redhat-marketplace-wh49x\" (UID: \"3eccae89-322c-4866-a4cb-044537aaa45a\") " pod="openshift-marketplace/redhat-marketplace-wh49x" Feb 28 14:07:49 crc kubenswrapper[4897]: I0228 14:07:49.101988 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q58x\" (UniqueName: \"kubernetes.io/projected/3eccae89-322c-4866-a4cb-044537aaa45a-kube-api-access-6q58x\") pod \"redhat-marketplace-wh49x\" (UID: \"3eccae89-322c-4866-a4cb-044537aaa45a\") " pod="openshift-marketplace/redhat-marketplace-wh49x" Feb 28 14:07:49 crc kubenswrapper[4897]: I0228 14:07:49.182025 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wh49x" Feb 28 14:07:49 crc kubenswrapper[4897]: I0228 14:07:49.641012 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wh49x"] Feb 28 14:07:50 crc kubenswrapper[4897]: I0228 14:07:50.424793 4897 generic.go:334] "Generic (PLEG): container finished" podID="3eccae89-322c-4866-a4cb-044537aaa45a" containerID="c417b1559c7d9155f1596ac97346c1798a78e017e1e4bef59f7ec6af91b2571c" exitCode=0 Feb 28 14:07:50 crc kubenswrapper[4897]: I0228 14:07:50.424854 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wh49x" event={"ID":"3eccae89-322c-4866-a4cb-044537aaa45a","Type":"ContainerDied","Data":"c417b1559c7d9155f1596ac97346c1798a78e017e1e4bef59f7ec6af91b2571c"} Feb 28 14:07:50 crc kubenswrapper[4897]: I0228 14:07:50.425954 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wh49x" event={"ID":"3eccae89-322c-4866-a4cb-044537aaa45a","Type":"ContainerStarted","Data":"99657566a42a49f97dd2c5c9f58b755ab69a990cbe8d3439043bc9a05966dc25"} Feb 28 14:07:50 crc kubenswrapper[4897]: E0228 14:07:50.999197 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=4c1235c462902350224ff1dfc90bd9f47b4fe1b4d05c3d5fc542c5c8e38ee80a/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 28 14:07:50 crc kubenswrapper[4897]: E0228 14:07:50.999539 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog 
--cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6q58x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-wh49x_openshift-marketplace(3eccae89-322c-4866-a4cb-044537aaa45a): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=4c1235c462902350224ff1dfc90bd9f47b4fe1b4d05c3d5fc542c5c8e38ee80a/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 14:07:51 crc kubenswrapper[4897]: E0228 14:07:51.000870 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest 
list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=4c1235c462902350224ff1dfc90bd9f47b4fe1b4d05c3d5fc542c5c8e38ee80a/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-wh49x" podUID="3eccae89-322c-4866-a4cb-044537aaa45a" Feb 28 14:07:51 crc kubenswrapper[4897]: E0228 14:07:51.439899 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wh49x" podUID="3eccae89-322c-4866-a4cb-044537aaa45a" Feb 28 14:07:59 crc kubenswrapper[4897]: E0228 14:07:59.461658 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538126-tjh6m" podUID="60495189-1bea-4438-ad69-f56dd7caa7ac" Feb 28 14:08:00 crc kubenswrapper[4897]: I0228 14:08:00.171410 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538128-6p2kf"] Feb 28 14:08:00 crc kubenswrapper[4897]: I0228 14:08:00.174599 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538128-6p2kf" Feb 28 14:08:00 crc kubenswrapper[4897]: I0228 14:08:00.191561 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538128-6p2kf"] Feb 28 14:08:00 crc kubenswrapper[4897]: I0228 14:08:00.331562 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm629\" (UniqueName: \"kubernetes.io/projected/9c071db2-5764-4d17-a5cb-1f0f7f54c4fb-kube-api-access-tm629\") pod \"auto-csr-approver-29538128-6p2kf\" (UID: \"9c071db2-5764-4d17-a5cb-1f0f7f54c4fb\") " pod="openshift-infra/auto-csr-approver-29538128-6p2kf" Feb 28 14:08:00 crc kubenswrapper[4897]: I0228 14:08:00.433242 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm629\" (UniqueName: \"kubernetes.io/projected/9c071db2-5764-4d17-a5cb-1f0f7f54c4fb-kube-api-access-tm629\") pod \"auto-csr-approver-29538128-6p2kf\" (UID: \"9c071db2-5764-4d17-a5cb-1f0f7f54c4fb\") " pod="openshift-infra/auto-csr-approver-29538128-6p2kf" Feb 28 14:08:00 crc kubenswrapper[4897]: I0228 14:08:00.505028 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm629\" (UniqueName: \"kubernetes.io/projected/9c071db2-5764-4d17-a5cb-1f0f7f54c4fb-kube-api-access-tm629\") pod \"auto-csr-approver-29538128-6p2kf\" (UID: \"9c071db2-5764-4d17-a5cb-1f0f7f54c4fb\") " pod="openshift-infra/auto-csr-approver-29538128-6p2kf" Feb 28 14:08:00 crc kubenswrapper[4897]: I0228 14:08:00.803782 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538128-6p2kf" Feb 28 14:08:01 crc kubenswrapper[4897]: I0228 14:08:01.143368 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538128-6p2kf"] Feb 28 14:08:01 crc kubenswrapper[4897]: I0228 14:08:01.457412 4897 scope.go:117] "RemoveContainer" containerID="a565d5be8c3d16a7dd1744beb9c89934975eb5a241878dfe53fef3d98e1be2d2" Feb 28 14:08:01 crc kubenswrapper[4897]: E0228 14:08:01.458044 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:08:01 crc kubenswrapper[4897]: I0228 14:08:01.583473 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538128-6p2kf" event={"ID":"9c071db2-5764-4d17-a5cb-1f0f7f54c4fb","Type":"ContainerStarted","Data":"447707be3ef9fd8deedbfd13c8a7f40cefe532b85d128c8281bf4a4ad0101d6a"} Feb 28 14:08:03 crc kubenswrapper[4897]: I0228 14:08:03.623257 4897 generic.go:334] "Generic (PLEG): container finished" podID="9c071db2-5764-4d17-a5cb-1f0f7f54c4fb" containerID="af106e6247fae6eaf4c646f0231c60ae49083f9e3c388a9176bf7fae4482142f" exitCode=0 Feb 28 14:08:03 crc kubenswrapper[4897]: I0228 14:08:03.623534 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538128-6p2kf" event={"ID":"9c071db2-5764-4d17-a5cb-1f0f7f54c4fb","Type":"ContainerDied","Data":"af106e6247fae6eaf4c646f0231c60ae49083f9e3c388a9176bf7fae4482142f"} Feb 28 14:08:04 crc kubenswrapper[4897]: E0228 14:08:04.021369 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system 
image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=4c1235c462902350224ff1dfc90bd9f47b4fe1b4d05c3d5fc542c5c8e38ee80a/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 28 14:08:04 crc kubenswrapper[4897]: E0228 14:08:04.021572 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6q58x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,
} start failed in pod redhat-marketplace-wh49x_openshift-marketplace(3eccae89-322c-4866-a4cb-044537aaa45a): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=4c1235c462902350224ff1dfc90bd9f47b4fe1b4d05c3d5fc542c5c8e38ee80a/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 14:08:04 crc kubenswrapper[4897]: E0228 14:08:04.022818 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=4c1235c462902350224ff1dfc90bd9f47b4fe1b4d05c3d5fc542c5c8e38ee80a/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-wh49x" podUID="3eccae89-322c-4866-a4cb-044537aaa45a" Feb 28 14:08:05 crc kubenswrapper[4897]: I0228 14:08:05.074278 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538128-6p2kf" Feb 28 14:08:05 crc kubenswrapper[4897]: I0228 14:08:05.166180 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tm629\" (UniqueName: \"kubernetes.io/projected/9c071db2-5764-4d17-a5cb-1f0f7f54c4fb-kube-api-access-tm629\") pod \"9c071db2-5764-4d17-a5cb-1f0f7f54c4fb\" (UID: \"9c071db2-5764-4d17-a5cb-1f0f7f54c4fb\") " Feb 28 14:08:05 crc kubenswrapper[4897]: I0228 14:08:05.171231 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c071db2-5764-4d17-a5cb-1f0f7f54c4fb-kube-api-access-tm629" (OuterVolumeSpecName: "kube-api-access-tm629") pod "9c071db2-5764-4d17-a5cb-1f0f7f54c4fb" (UID: "9c071db2-5764-4d17-a5cb-1f0f7f54c4fb"). InnerVolumeSpecName "kube-api-access-tm629". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:08:05 crc kubenswrapper[4897]: I0228 14:08:05.269974 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tm629\" (UniqueName: \"kubernetes.io/projected/9c071db2-5764-4d17-a5cb-1f0f7f54c4fb-kube-api-access-tm629\") on node \"crc\" DevicePath \"\"" Feb 28 14:08:05 crc kubenswrapper[4897]: I0228 14:08:05.653905 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538128-6p2kf" event={"ID":"9c071db2-5764-4d17-a5cb-1f0f7f54c4fb","Type":"ContainerDied","Data":"447707be3ef9fd8deedbfd13c8a7f40cefe532b85d128c8281bf4a4ad0101d6a"} Feb 28 14:08:05 crc kubenswrapper[4897]: I0228 14:08:05.654243 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="447707be3ef9fd8deedbfd13c8a7f40cefe532b85d128c8281bf4a4ad0101d6a" Feb 28 14:08:05 crc kubenswrapper[4897]: I0228 14:08:05.654013 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538128-6p2kf" Feb 28 14:08:06 crc kubenswrapper[4897]: I0228 14:08:06.170428 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538120-qpdgq"] Feb 28 14:08:06 crc kubenswrapper[4897]: I0228 14:08:06.185323 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538120-qpdgq"] Feb 28 14:08:06 crc kubenswrapper[4897]: I0228 14:08:06.477491 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d8f3a5c-cce6-4eea-b2de-293d5f8d9288" path="/var/lib/kubelet/pods/7d8f3a5c-cce6-4eea-b2de-293d5f8d9288/volumes" Feb 28 14:08:12 crc kubenswrapper[4897]: I0228 14:08:12.456961 4897 scope.go:117] "RemoveContainer" containerID="a565d5be8c3d16a7dd1744beb9c89934975eb5a241878dfe53fef3d98e1be2d2" Feb 28 14:08:12 crc kubenswrapper[4897]: E0228 14:08:12.459791 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:08:12 crc kubenswrapper[4897]: E0228 14:08:12.461630 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538126-tjh6m" podUID="60495189-1bea-4438-ad69-f56dd7caa7ac" Feb 28 14:08:16 crc kubenswrapper[4897]: E0228 14:08:16.484564 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wh49x" podUID="3eccae89-322c-4866-a4cb-044537aaa45a" Feb 28 14:08:23 crc kubenswrapper[4897]: I0228 14:08:23.456853 4897 scope.go:117] "RemoveContainer" containerID="a565d5be8c3d16a7dd1744beb9c89934975eb5a241878dfe53fef3d98e1be2d2" Feb 28 14:08:23 crc kubenswrapper[4897]: E0228 14:08:23.457625 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:08:24 crc kubenswrapper[4897]: E0228 14:08:24.459024 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling 
image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538126-tjh6m" podUID="60495189-1bea-4438-ad69-f56dd7caa7ac" Feb 28 14:08:32 crc kubenswrapper[4897]: E0228 14:08:32.090486 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=4c1235c462902350224ff1dfc90bd9f47b4fe1b4d05c3d5fc542c5c8e38ee80a/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 28 14:08:32 crc kubenswrapper[4897]: E0228 14:08:32.091558 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6q58x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-wh49x_openshift-marketplace(3eccae89-322c-4866-a4cb-044537aaa45a): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=4c1235c462902350224ff1dfc90bd9f47b4fe1b4d05c3d5fc542c5c8e38ee80a/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 14:08:32 crc kubenswrapper[4897]: E0228 14:08:32.092865 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=4c1235c462902350224ff1dfc90bd9f47b4fe1b4d05c3d5fc542c5c8e38ee80a/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-wh49x" podUID="3eccae89-322c-4866-a4cb-044537aaa45a" Feb 28 14:08:36 crc kubenswrapper[4897]: I0228 14:08:36.477399 4897 scope.go:117] "RemoveContainer" containerID="a565d5be8c3d16a7dd1744beb9c89934975eb5a241878dfe53fef3d98e1be2d2" Feb 28 14:08:36 crc kubenswrapper[4897]: E0228 14:08:36.477992 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:08:36 crc kubenswrapper[4897]: I0228 14:08:36.994319 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538126-tjh6m" event={"ID":"60495189-1bea-4438-ad69-f56dd7caa7ac","Type":"ContainerStarted","Data":"f72fec8556de747b4d21b389e5f069a8648f1d514159d0b3f73d99a62834e132"} Feb 28 14:08:37 crc kubenswrapper[4897]: I0228 14:08:37.014884 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29538126-tjh6m" podStartSLOduration=1.405555252 podStartE2EDuration="2m37.01486118s" podCreationTimestamp="2026-02-28 14:06:00 +0000 UTC" firstStartedPulling="2026-02-28 14:06:00.970110355 +0000 UTC m=+2975.212431012" lastFinishedPulling="2026-02-28 14:08:36.579416273 +0000 UTC m=+3130.821736940" observedRunningTime="2026-02-28 14:08:37.007680186 +0000 UTC m=+3131.250000883" watchObservedRunningTime="2026-02-28 14:08:37.01486118 +0000 UTC m=+3131.257181877" Feb 28 14:08:38 crc kubenswrapper[4897]: 
I0228 14:08:38.010127 4897 generic.go:334] "Generic (PLEG): container finished" podID="60495189-1bea-4438-ad69-f56dd7caa7ac" containerID="f72fec8556de747b4d21b389e5f069a8648f1d514159d0b3f73d99a62834e132" exitCode=0 Feb 28 14:08:38 crc kubenswrapper[4897]: I0228 14:08:38.010241 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538126-tjh6m" event={"ID":"60495189-1bea-4438-ad69-f56dd7caa7ac","Type":"ContainerDied","Data":"f72fec8556de747b4d21b389e5f069a8648f1d514159d0b3f73d99a62834e132"} Feb 28 14:08:39 crc kubenswrapper[4897]: I0228 14:08:39.424207 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538126-tjh6m" Feb 28 14:08:39 crc kubenswrapper[4897]: I0228 14:08:39.590678 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6mwz\" (UniqueName: \"kubernetes.io/projected/60495189-1bea-4438-ad69-f56dd7caa7ac-kube-api-access-p6mwz\") pod \"60495189-1bea-4438-ad69-f56dd7caa7ac\" (UID: \"60495189-1bea-4438-ad69-f56dd7caa7ac\") " Feb 28 14:08:39 crc kubenswrapper[4897]: I0228 14:08:39.599441 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60495189-1bea-4438-ad69-f56dd7caa7ac-kube-api-access-p6mwz" (OuterVolumeSpecName: "kube-api-access-p6mwz") pod "60495189-1bea-4438-ad69-f56dd7caa7ac" (UID: "60495189-1bea-4438-ad69-f56dd7caa7ac"). InnerVolumeSpecName "kube-api-access-p6mwz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:08:39 crc kubenswrapper[4897]: I0228 14:08:39.693751 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6mwz\" (UniqueName: \"kubernetes.io/projected/60495189-1bea-4438-ad69-f56dd7caa7ac-kube-api-access-p6mwz\") on node \"crc\" DevicePath \"\"" Feb 28 14:08:40 crc kubenswrapper[4897]: I0228 14:08:40.034114 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538126-tjh6m" event={"ID":"60495189-1bea-4438-ad69-f56dd7caa7ac","Type":"ContainerDied","Data":"31f0163550eb3d955f168345fd822c427bf53e004192fe6d1f9a657ccf69b599"} Feb 28 14:08:40 crc kubenswrapper[4897]: I0228 14:08:40.034192 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31f0163550eb3d955f168345fd822c427bf53e004192fe6d1f9a657ccf69b599" Feb 28 14:08:40 crc kubenswrapper[4897]: I0228 14:08:40.034277 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538126-tjh6m" Feb 28 14:08:40 crc kubenswrapper[4897]: I0228 14:08:40.111108 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538122-wsj4k"] Feb 28 14:08:40 crc kubenswrapper[4897]: I0228 14:08:40.124579 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538122-wsj4k"] Feb 28 14:08:40 crc kubenswrapper[4897]: I0228 14:08:40.476015 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd80f42b-c46b-4599-9fe2-00993454a32c" path="/var/lib/kubelet/pods/fd80f42b-c46b-4599-9fe2-00993454a32c/volumes" Feb 28 14:08:45 crc kubenswrapper[4897]: E0228 14:08:45.458708 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-marketplace-wh49x" podUID="3eccae89-322c-4866-a4cb-044537aaa45a" Feb 28 14:08:46 crc kubenswrapper[4897]: I0228 14:08:46.110744 4897 generic.go:334] "Generic (PLEG): container finished" podID="8356fe56-9405-43be-8d6e-3d71c9906864" containerID="b90e8d844bfd7ebbf9f5daed290b161111f2ab6486043d5243a5290bf4b73096" exitCode=0 Feb 28 14:08:46 crc kubenswrapper[4897]: I0228 14:08:46.110814 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb" event={"ID":"8356fe56-9405-43be-8d6e-3d71c9906864","Type":"ContainerDied","Data":"b90e8d844bfd7ebbf9f5daed290b161111f2ab6486043d5243a5290bf4b73096"} Feb 28 14:08:47 crc kubenswrapper[4897]: I0228 14:08:47.153022 4897 scope.go:117] "RemoveContainer" containerID="6ac879a950b3e8cc9209504f481d9bef158ec252038ccddeeff6fc6a13e53bfb" Feb 28 14:08:47 crc kubenswrapper[4897]: I0228 14:08:47.231105 4897 scope.go:117] "RemoveContainer" containerID="d53dba889f3efc44406212325d2c49824f5e648ac67ac62d9ff8022a7ac2b54b" Feb 28 14:08:47 crc kubenswrapper[4897]: I0228 14:08:47.302776 4897 scope.go:117] "RemoveContainer" containerID="042673c4cd91c17244c4ee5ceab9ee68184ada512bceb689de9ed271397d4e25" Feb 28 14:08:47 crc kubenswrapper[4897]: I0228 14:08:47.457426 4897 scope.go:117] "RemoveContainer" containerID="a565d5be8c3d16a7dd1744beb9c89934975eb5a241878dfe53fef3d98e1be2d2" Feb 28 14:08:47 crc kubenswrapper[4897]: E0228 14:08:47.457783 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:08:47 crc kubenswrapper[4897]: I0228 14:08:47.640102 4897 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb" Feb 28 14:08:47 crc kubenswrapper[4897]: I0228 14:08:47.771853 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8356fe56-9405-43be-8d6e-3d71c9906864-ceilometer-compute-config-data-2\") pod \"8356fe56-9405-43be-8d6e-3d71c9906864\" (UID: \"8356fe56-9405-43be-8d6e-3d71c9906864\") " Feb 28 14:08:47 crc kubenswrapper[4897]: I0228 14:08:47.771922 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8356fe56-9405-43be-8d6e-3d71c9906864-ssh-key-openstack-edpm-ipam\") pod \"8356fe56-9405-43be-8d6e-3d71c9906864\" (UID: \"8356fe56-9405-43be-8d6e-3d71c9906864\") " Feb 28 14:08:47 crc kubenswrapper[4897]: I0228 14:08:47.772067 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8356fe56-9405-43be-8d6e-3d71c9906864-ceilometer-compute-config-data-0\") pod \"8356fe56-9405-43be-8d6e-3d71c9906864\" (UID: \"8356fe56-9405-43be-8d6e-3d71c9906864\") " Feb 28 14:08:47 crc kubenswrapper[4897]: I0228 14:08:47.772149 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bc4tk\" (UniqueName: \"kubernetes.io/projected/8356fe56-9405-43be-8d6e-3d71c9906864-kube-api-access-bc4tk\") pod \"8356fe56-9405-43be-8d6e-3d71c9906864\" (UID: \"8356fe56-9405-43be-8d6e-3d71c9906864\") " Feb 28 14:08:47 crc kubenswrapper[4897]: I0228 14:08:47.772181 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8356fe56-9405-43be-8d6e-3d71c9906864-ceilometer-compute-config-data-1\") pod \"8356fe56-9405-43be-8d6e-3d71c9906864\" (UID: 
\"8356fe56-9405-43be-8d6e-3d71c9906864\") " Feb 28 14:08:47 crc kubenswrapper[4897]: I0228 14:08:47.772233 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8356fe56-9405-43be-8d6e-3d71c9906864-telemetry-combined-ca-bundle\") pod \"8356fe56-9405-43be-8d6e-3d71c9906864\" (UID: \"8356fe56-9405-43be-8d6e-3d71c9906864\") " Feb 28 14:08:47 crc kubenswrapper[4897]: I0228 14:08:47.772283 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8356fe56-9405-43be-8d6e-3d71c9906864-inventory\") pod \"8356fe56-9405-43be-8d6e-3d71c9906864\" (UID: \"8356fe56-9405-43be-8d6e-3d71c9906864\") " Feb 28 14:08:47 crc kubenswrapper[4897]: I0228 14:08:47.779846 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8356fe56-9405-43be-8d6e-3d71c9906864-kube-api-access-bc4tk" (OuterVolumeSpecName: "kube-api-access-bc4tk") pod "8356fe56-9405-43be-8d6e-3d71c9906864" (UID: "8356fe56-9405-43be-8d6e-3d71c9906864"). InnerVolumeSpecName "kube-api-access-bc4tk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:08:47 crc kubenswrapper[4897]: I0228 14:08:47.780183 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8356fe56-9405-43be-8d6e-3d71c9906864-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "8356fe56-9405-43be-8d6e-3d71c9906864" (UID: "8356fe56-9405-43be-8d6e-3d71c9906864"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 14:08:47 crc kubenswrapper[4897]: I0228 14:08:47.815213 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8356fe56-9405-43be-8d6e-3d71c9906864-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8356fe56-9405-43be-8d6e-3d71c9906864" (UID: "8356fe56-9405-43be-8d6e-3d71c9906864"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 14:08:47 crc kubenswrapper[4897]: I0228 14:08:47.817069 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8356fe56-9405-43be-8d6e-3d71c9906864-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "8356fe56-9405-43be-8d6e-3d71c9906864" (UID: "8356fe56-9405-43be-8d6e-3d71c9906864"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 14:08:47 crc kubenswrapper[4897]: I0228 14:08:47.826806 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8356fe56-9405-43be-8d6e-3d71c9906864-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "8356fe56-9405-43be-8d6e-3d71c9906864" (UID: "8356fe56-9405-43be-8d6e-3d71c9906864"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 14:08:47 crc kubenswrapper[4897]: I0228 14:08:47.827431 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8356fe56-9405-43be-8d6e-3d71c9906864-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "8356fe56-9405-43be-8d6e-3d71c9906864" (UID: "8356fe56-9405-43be-8d6e-3d71c9906864"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 14:08:47 crc kubenswrapper[4897]: I0228 14:08:47.835966 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8356fe56-9405-43be-8d6e-3d71c9906864-inventory" (OuterVolumeSpecName: "inventory") pod "8356fe56-9405-43be-8d6e-3d71c9906864" (UID: "8356fe56-9405-43be-8d6e-3d71c9906864"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 14:08:47 crc kubenswrapper[4897]: I0228 14:08:47.875239 4897 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8356fe56-9405-43be-8d6e-3d71c9906864-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 28 14:08:47 crc kubenswrapper[4897]: I0228 14:08:47.875289 4897 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8356fe56-9405-43be-8d6e-3d71c9906864-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 28 14:08:47 crc kubenswrapper[4897]: I0228 14:08:47.875335 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bc4tk\" (UniqueName: \"kubernetes.io/projected/8356fe56-9405-43be-8d6e-3d71c9906864-kube-api-access-bc4tk\") on node \"crc\" DevicePath \"\"" Feb 28 14:08:47 crc kubenswrapper[4897]: I0228 14:08:47.875357 4897 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8356fe56-9405-43be-8d6e-3d71c9906864-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 14:08:47 crc kubenswrapper[4897]: I0228 14:08:47.875381 4897 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8356fe56-9405-43be-8d6e-3d71c9906864-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 14:08:47 crc kubenswrapper[4897]: I0228 14:08:47.875405 4897 reconciler_common.go:293] 
"Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8356fe56-9405-43be-8d6e-3d71c9906864-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 28 14:08:47 crc kubenswrapper[4897]: I0228 14:08:47.875430 4897 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8356fe56-9405-43be-8d6e-3d71c9906864-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 14:08:48 crc kubenswrapper[4897]: I0228 14:08:48.142891 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb" Feb 28 14:08:48 crc kubenswrapper[4897]: I0228 14:08:48.142936 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb" event={"ID":"8356fe56-9405-43be-8d6e-3d71c9906864","Type":"ContainerDied","Data":"467328c74c25084a11bb6644ca6dda34c8dc35625173f187bc5ad1ba4fe0a32d"} Feb 28 14:08:48 crc kubenswrapper[4897]: I0228 14:08:48.143002 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="467328c74c25084a11bb6644ca6dda34c8dc35625173f187bc5ad1ba4fe0a32d" Feb 28 14:08:58 crc kubenswrapper[4897]: I0228 14:08:58.456954 4897 scope.go:117] "RemoveContainer" containerID="a565d5be8c3d16a7dd1744beb9c89934975eb5a241878dfe53fef3d98e1be2d2" Feb 28 14:08:58 crc kubenswrapper[4897]: E0228 14:08:58.458059 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:08:59 crc kubenswrapper[4897]: E0228 14:08:59.459812 
4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wh49x" podUID="3eccae89-322c-4866-a4cb-044537aaa45a" Feb 28 14:09:09 crc kubenswrapper[4897]: I0228 14:09:09.456871 4897 scope.go:117] "RemoveContainer" containerID="a565d5be8c3d16a7dd1744beb9c89934975eb5a241878dfe53fef3d98e1be2d2" Feb 28 14:09:09 crc kubenswrapper[4897]: E0228 14:09:09.457775 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:09:11 crc kubenswrapper[4897]: E0228 14:09:11.460229 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wh49x" podUID="3eccae89-322c-4866-a4cb-044537aaa45a" Feb 28 14:09:21 crc kubenswrapper[4897]: I0228 14:09:21.457694 4897 scope.go:117] "RemoveContainer" containerID="a565d5be8c3d16a7dd1744beb9c89934975eb5a241878dfe53fef3d98e1be2d2" Feb 28 14:09:21 crc kubenswrapper[4897]: E0228 14:09:21.458970 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:09:21 crc kubenswrapper[4897]: I0228 14:09:21.992448 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Feb 28 14:09:21 crc kubenswrapper[4897]: E0228 14:09:21.992908 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60495189-1bea-4438-ad69-f56dd7caa7ac" containerName="oc" Feb 28 14:09:21 crc kubenswrapper[4897]: I0228 14:09:21.992933 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="60495189-1bea-4438-ad69-f56dd7caa7ac" containerName="oc" Feb 28 14:09:21 crc kubenswrapper[4897]: E0228 14:09:21.992982 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c071db2-5764-4d17-a5cb-1f0f7f54c4fb" containerName="oc" Feb 28 14:09:21 crc kubenswrapper[4897]: I0228 14:09:21.992991 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c071db2-5764-4d17-a5cb-1f0f7f54c4fb" containerName="oc" Feb 28 14:09:21 crc kubenswrapper[4897]: E0228 14:09:21.993008 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8356fe56-9405-43be-8d6e-3d71c9906864" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 28 14:09:21 crc kubenswrapper[4897]: I0228 14:09:21.993020 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="8356fe56-9405-43be-8d6e-3d71c9906864" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 28 14:09:21 crc kubenswrapper[4897]: I0228 14:09:21.993230 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="60495189-1bea-4438-ad69-f56dd7caa7ac" containerName="oc" Feb 28 14:09:21 crc kubenswrapper[4897]: I0228 14:09:21.993272 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="8356fe56-9405-43be-8d6e-3d71c9906864" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 28 14:09:21 crc kubenswrapper[4897]: I0228 14:09:21.993288 4897 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="9c071db2-5764-4d17-a5cb-1f0f7f54c4fb" containerName="oc" Feb 28 14:09:21 crc kubenswrapper[4897]: I0228 14:09:21.994642 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Feb 28 14:09:21 crc kubenswrapper[4897]: I0228 14:09:21.996630 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.029574 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.060994 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/35d2e345-c465-43d1-a9e2-0592960bc377-etc-nvme\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.061089 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/35d2e345-c465-43d1-a9e2-0592960bc377-run\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.061229 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/35d2e345-c465-43d1-a9e2-0592960bc377-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.061265 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35d2e345-c465-43d1-a9e2-0592960bc377-scripts\") pod \"cinder-backup-0\" (UID: 
\"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.061292 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35d2e345-c465-43d1-a9e2-0592960bc377-lib-modules\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.061388 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scl6q\" (UniqueName: \"kubernetes.io/projected/35d2e345-c465-43d1-a9e2-0592960bc377-kube-api-access-scl6q\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.061424 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/35d2e345-c465-43d1-a9e2-0592960bc377-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.061478 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/35d2e345-c465-43d1-a9e2-0592960bc377-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.061512 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/35d2e345-c465-43d1-a9e2-0592960bc377-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 
14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.061725 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35d2e345-c465-43d1-a9e2-0592960bc377-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.061816 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/35d2e345-c465-43d1-a9e2-0592960bc377-config-data-custom\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.061923 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35d2e345-c465-43d1-a9e2-0592960bc377-config-data\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.061973 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/35d2e345-c465-43d1-a9e2-0592960bc377-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.062001 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/35d2e345-c465-43d1-a9e2-0592960bc377-sys\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.062024 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/35d2e345-c465-43d1-a9e2-0592960bc377-dev\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.074919 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-0"] Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.077042 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.079735 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-config-data" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.113630 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.159551 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.162734 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.163245 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/622d265c-1cb2-47ac-b31e-5d226545d4de-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.163296 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mt9z\" (UniqueName: \"kubernetes.io/projected/622d265c-1cb2-47ac-b31e-5d226545d4de-kube-api-access-5mt9z\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.163343 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/35d2e345-c465-43d1-a9e2-0592960bc377-etc-nvme\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.163368 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/622d265c-1cb2-47ac-b31e-5d226545d4de-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.163390 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/35d2e345-c465-43d1-a9e2-0592960bc377-run\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.163420 
4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/622d265c-1cb2-47ac-b31e-5d226545d4de-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.163447 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/35d2e345-c465-43d1-a9e2-0592960bc377-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.163467 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35d2e345-c465-43d1-a9e2-0592960bc377-scripts\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.163482 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35d2e345-c465-43d1-a9e2-0592960bc377-lib-modules\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.163501 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/622d265c-1cb2-47ac-b31e-5d226545d4de-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.163529 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scl6q\" (UniqueName: 
\"kubernetes.io/projected/35d2e345-c465-43d1-a9e2-0592960bc377-kube-api-access-scl6q\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.163545 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/35d2e345-c465-43d1-a9e2-0592960bc377-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.163562 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/622d265c-1cb2-47ac-b31e-5d226545d4de-sys\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.163581 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/622d265c-1cb2-47ac-b31e-5d226545d4de-run\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.163599 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/35d2e345-c465-43d1-a9e2-0592960bc377-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.163618 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/35d2e345-c465-43d1-a9e2-0592960bc377-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " 
pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.163633 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/622d265c-1cb2-47ac-b31e-5d226545d4de-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.163660 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35d2e345-c465-43d1-a9e2-0592960bc377-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.163674 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/622d265c-1cb2-47ac-b31e-5d226545d4de-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.163693 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/622d265c-1cb2-47ac-b31e-5d226545d4de-dev\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.163718 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/35d2e345-c465-43d1-a9e2-0592960bc377-config-data-custom\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.163744 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/622d265c-1cb2-47ac-b31e-5d226545d4de-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.163829 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/35d2e345-c465-43d1-a9e2-0592960bc377-etc-nvme\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.163929 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/622d265c-1cb2-47ac-b31e-5d226545d4de-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.164166 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/35d2e345-c465-43d1-a9e2-0592960bc377-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.164197 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/35d2e345-c465-43d1-a9e2-0592960bc377-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.164422 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/35d2e345-c465-43d1-a9e2-0592960bc377-var-locks-cinder\") pod \"cinder-backup-0\" (UID: 
\"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.165122 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35d2e345-c465-43d1-a9e2-0592960bc377-lib-modules\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.165192 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35d2e345-c465-43d1-a9e2-0592960bc377-config-data\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.165207 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/35d2e345-c465-43d1-a9e2-0592960bc377-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.165299 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/35d2e345-c465-43d1-a9e2-0592960bc377-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.165401 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/35d2e345-c465-43d1-a9e2-0592960bc377-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.165433 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"sys\" (UniqueName: \"kubernetes.io/host-path/35d2e345-c465-43d1-a9e2-0592960bc377-sys\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.165452 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/35d2e345-c465-43d1-a9e2-0592960bc377-sys\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.165480 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/35d2e345-c465-43d1-a9e2-0592960bc377-run\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.165506 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/35d2e345-c465-43d1-a9e2-0592960bc377-dev\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.165552 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/622d265c-1cb2-47ac-b31e-5d226545d4de-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.165599 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/622d265c-1cb2-47ac-b31e-5d226545d4de-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 
14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.165607 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/35d2e345-c465-43d1-a9e2-0592960bc377-dev\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.165645 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/622d265c-1cb2-47ac-b31e-5d226545d4de-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.167531 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-2-config-data" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.170011 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35d2e345-c465-43d1-a9e2-0592960bc377-scripts\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.170200 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35d2e345-c465-43d1-a9e2-0592960bc377-config-data\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.170610 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/35d2e345-c465-43d1-a9e2-0592960bc377-config-data-custom\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.179306 
4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.185983 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35d2e345-c465-43d1-a9e2-0592960bc377-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.188360 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scl6q\" (UniqueName: \"kubernetes.io/projected/35d2e345-c465-43d1-a9e2-0592960bc377-kube-api-access-scl6q\") pod \"cinder-backup-0\" (UID: \"35d2e345-c465-43d1-a9e2-0592960bc377\") " pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.267614 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/622d265c-1cb2-47ac-b31e-5d226545d4de-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.267684 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/622d265c-1cb2-47ac-b31e-5d226545d4de-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.267722 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mt9z\" (UniqueName: \"kubernetes.io/projected/622d265c-1cb2-47ac-b31e-5d226545d4de-kube-api-access-5mt9z\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 
14:09:22.267743 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/622d265c-1cb2-47ac-b31e-5d226545d4de-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.267756 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6236a51d-66cb-4285-bc2b-767cf39c989a-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.267839 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6236a51d-66cb-4285-bc2b-767cf39c989a-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.267887 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/6236a51d-66cb-4285-bc2b-767cf39c989a-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.267917 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/622d265c-1cb2-47ac-b31e-5d226545d4de-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.267937 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6236a51d-66cb-4285-bc2b-767cf39c989a-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.267954 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6236a51d-66cb-4285-bc2b-767cf39c989a-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.268000 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6236a51d-66cb-4285-bc2b-767cf39c989a-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.268074 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/622d265c-1cb2-47ac-b31e-5d226545d4de-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.268152 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6236a51d-66cb-4285-bc2b-767cf39c989a-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.268186 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/622d265c-1cb2-47ac-b31e-5d226545d4de-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.268598 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/6236a51d-66cb-4285-bc2b-767cf39c989a-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.268709 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/622d265c-1cb2-47ac-b31e-5d226545d4de-sys\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.268794 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/622d265c-1cb2-47ac-b31e-5d226545d4de-run\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.268854 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6236a51d-66cb-4285-bc2b-767cf39c989a-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.268882 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/622d265c-1cb2-47ac-b31e-5d226545d4de-sys\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " 
pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.268894 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6236a51d-66cb-4285-bc2b-767cf39c989a-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.268930 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/622d265c-1cb2-47ac-b31e-5d226545d4de-run\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.268956 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/622d265c-1cb2-47ac-b31e-5d226545d4de-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.269085 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/622d265c-1cb2-47ac-b31e-5d226545d4de-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.269089 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6236a51d-66cb-4285-bc2b-767cf39c989a-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.269217 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/622d265c-1cb2-47ac-b31e-5d226545d4de-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.269283 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/622d265c-1cb2-47ac-b31e-5d226545d4de-dev\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.269359 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6236a51d-66cb-4285-bc2b-767cf39c989a-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.269426 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgsxv\" (UniqueName: \"kubernetes.io/projected/6236a51d-66cb-4285-bc2b-767cf39c989a-kube-api-access-zgsxv\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.269510 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/622d265c-1cb2-47ac-b31e-5d226545d4de-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.269594 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/622d265c-1cb2-47ac-b31e-5d226545d4de-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.269634 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/622d265c-1cb2-47ac-b31e-5d226545d4de-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.269654 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/622d265c-1cb2-47ac-b31e-5d226545d4de-dev\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.269757 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6236a51d-66cb-4285-bc2b-767cf39c989a-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.269808 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6236a51d-66cb-4285-bc2b-767cf39c989a-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.269870 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/622d265c-1cb2-47ac-b31e-5d226545d4de-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " 
pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.269924 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/622d265c-1cb2-47ac-b31e-5d226545d4de-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.269752 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/622d265c-1cb2-47ac-b31e-5d226545d4de-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.270172 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/622d265c-1cb2-47ac-b31e-5d226545d4de-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.269833 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/622d265c-1cb2-47ac-b31e-5d226545d4de-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.270212 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/622d265c-1cb2-47ac-b31e-5d226545d4de-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.271731 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/622d265c-1cb2-47ac-b31e-5d226545d4de-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.272342 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/622d265c-1cb2-47ac-b31e-5d226545d4de-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.273225 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/622d265c-1cb2-47ac-b31e-5d226545d4de-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.273669 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/622d265c-1cb2-47ac-b31e-5d226545d4de-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.284012 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mt9z\" (UniqueName: \"kubernetes.io/projected/622d265c-1cb2-47ac-b31e-5d226545d4de-kube-api-access-5mt9z\") pod \"cinder-volume-nfs-0\" (UID: \"622d265c-1cb2-47ac-b31e-5d226545d4de\") " pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.348644 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.372336 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6236a51d-66cb-4285-bc2b-767cf39c989a-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.372418 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6236a51d-66cb-4285-bc2b-767cf39c989a-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.372450 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/6236a51d-66cb-4285-bc2b-767cf39c989a-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.372478 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6236a51d-66cb-4285-bc2b-767cf39c989a-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.372494 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6236a51d-66cb-4285-bc2b-767cf39c989a-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.372517 4897 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6236a51d-66cb-4285-bc2b-767cf39c989a-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.372555 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6236a51d-66cb-4285-bc2b-767cf39c989a-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.372577 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgsxv\" (UniqueName: \"kubernetes.io/projected/6236a51d-66cb-4285-bc2b-767cf39c989a-kube-api-access-zgsxv\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.372618 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6236a51d-66cb-4285-bc2b-767cf39c989a-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.372640 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6236a51d-66cb-4285-bc2b-767cf39c989a-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.372680 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/6236a51d-66cb-4285-bc2b-767cf39c989a-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.372695 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6236a51d-66cb-4285-bc2b-767cf39c989a-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.372720 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/6236a51d-66cb-4285-bc2b-767cf39c989a-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.372737 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6236a51d-66cb-4285-bc2b-767cf39c989a-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.372750 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6236a51d-66cb-4285-bc2b-767cf39c989a-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.372828 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6236a51d-66cb-4285-bc2b-767cf39c989a-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 
crc kubenswrapper[4897]: I0228 14:09:22.373475 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6236a51d-66cb-4285-bc2b-767cf39c989a-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.373513 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6236a51d-66cb-4285-bc2b-767cf39c989a-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.373641 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6236a51d-66cb-4285-bc2b-767cf39c989a-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.373690 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6236a51d-66cb-4285-bc2b-767cf39c989a-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.373713 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6236a51d-66cb-4285-bc2b-767cf39c989a-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.373743 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: 
\"kubernetes.io/host-path/6236a51d-66cb-4285-bc2b-767cf39c989a-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.373765 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6236a51d-66cb-4285-bc2b-767cf39c989a-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.373801 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6236a51d-66cb-4285-bc2b-767cf39c989a-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.374044 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/6236a51d-66cb-4285-bc2b-767cf39c989a-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.376809 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6236a51d-66cb-4285-bc2b-767cf39c989a-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.377112 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6236a51d-66cb-4285-bc2b-767cf39c989a-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 
14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.378085 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6236a51d-66cb-4285-bc2b-767cf39c989a-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.378798 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6236a51d-66cb-4285-bc2b-767cf39c989a-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.396994 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgsxv\" (UniqueName: \"kubernetes.io/projected/6236a51d-66cb-4285-bc2b-767cf39c989a-kube-api-access-zgsxv\") pod \"cinder-volume-nfs-2-0\" (UID: \"6236a51d-66cb-4285-bc2b-767cf39c989a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.398804 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.437753 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:22 crc kubenswrapper[4897]: I0228 14:09:22.895366 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Feb 28 14:09:23 crc kubenswrapper[4897]: I0228 14:09:23.064712 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Feb 28 14:09:23 crc kubenswrapper[4897]: I0228 14:09:23.772205 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wh49x" event={"ID":"3eccae89-322c-4866-a4cb-044537aaa45a","Type":"ContainerStarted","Data":"d36d7e9bfe400be9deb1e0600ba507a7a48621b9c134bba582b6dfe2ca29d69d"} Feb 28 14:09:23 crc kubenswrapper[4897]: I0228 14:09:23.774609 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"6236a51d-66cb-4285-bc2b-767cf39c989a","Type":"ContainerStarted","Data":"1de1925637d4b6183ec3b51ec9a7cf221c54120a7c6a411e4570d018719a92ab"} Feb 28 14:09:23 crc kubenswrapper[4897]: I0228 14:09:23.774719 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"6236a51d-66cb-4285-bc2b-767cf39c989a","Type":"ContainerStarted","Data":"f8973f0fde9a7d20f2ac4d09bb79113a5e38aba504fa74d9539a564d89ebb28f"} Feb 28 14:09:23 crc kubenswrapper[4897]: I0228 14:09:23.774797 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"6236a51d-66cb-4285-bc2b-767cf39c989a","Type":"ContainerStarted","Data":"6b4227f2371b3eb6844dcd3a55f72897f6d272438553eaeac834583b8433c398"} Feb 28 14:09:23 crc kubenswrapper[4897]: I0228 14:09:23.779072 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"35d2e345-c465-43d1-a9e2-0592960bc377","Type":"ContainerStarted","Data":"ebf9859c096ecf76d82f404da5f07f157578ff93e4f50e03e59f232eb1afc036"} Feb 28 14:09:23 crc kubenswrapper[4897]: I0228 14:09:23.779129 4897 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"35d2e345-c465-43d1-a9e2-0592960bc377","Type":"ContainerStarted","Data":"71c9a8eaea14f66242956484ae80131a36e3a5a38f02a077aaef3da75cd468b9"} Feb 28 14:09:23 crc kubenswrapper[4897]: I0228 14:09:23.779143 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"35d2e345-c465-43d1-a9e2-0592960bc377","Type":"ContainerStarted","Data":"240d768680425311091f3e03da16cf9416147737cba60c427e871720eae1459e"} Feb 28 14:09:23 crc kubenswrapper[4897]: I0228 14:09:23.819941 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=2.535041675 podStartE2EDuration="2.819920478s" podCreationTimestamp="2026-02-28 14:09:21 +0000 UTC" firstStartedPulling="2026-02-28 14:09:22.900852933 +0000 UTC m=+3177.143173590" lastFinishedPulling="2026-02-28 14:09:23.185731726 +0000 UTC m=+3177.428052393" observedRunningTime="2026-02-28 14:09:23.809301026 +0000 UTC m=+3178.051621693" watchObservedRunningTime="2026-02-28 14:09:23.819920478 +0000 UTC m=+3178.062241145" Feb 28 14:09:23 crc kubenswrapper[4897]: I0228 14:09:23.847803 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-2-0" podStartSLOduration=1.684829266 podStartE2EDuration="1.847777141s" podCreationTimestamp="2026-02-28 14:09:22 +0000 UTC" firstStartedPulling="2026-02-28 14:09:23.067564585 +0000 UTC m=+3177.309885252" lastFinishedPulling="2026-02-28 14:09:23.23051247 +0000 UTC m=+3177.472833127" observedRunningTime="2026-02-28 14:09:23.838748334 +0000 UTC m=+3178.081068991" watchObservedRunningTime="2026-02-28 14:09:23.847777141 +0000 UTC m=+3178.090097798" Feb 28 14:09:24 crc kubenswrapper[4897]: I0228 14:09:24.145120 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Feb 28 14:09:24 crc kubenswrapper[4897]: I0228 14:09:24.793693 4897 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"622d265c-1cb2-47ac-b31e-5d226545d4de","Type":"ContainerStarted","Data":"ac950009bd4fa3b215bb87341b82167c53beea972454da4ea3d3a2e6fc55c95e"} Feb 28 14:09:24 crc kubenswrapper[4897]: I0228 14:09:24.793929 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"622d265c-1cb2-47ac-b31e-5d226545d4de","Type":"ContainerStarted","Data":"9d1300c17c988155dfd58405c4c727236d13f2c722d6283b7166b9b6b89004a8"} Feb 28 14:09:24 crc kubenswrapper[4897]: I0228 14:09:24.793942 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"622d265c-1cb2-47ac-b31e-5d226545d4de","Type":"ContainerStarted","Data":"929d0fdc612bb062bfc01a6407bd21d6126a509b19e2c9b4839c0c8a19f9e5d0"} Feb 28 14:09:25 crc kubenswrapper[4897]: I0228 14:09:25.805951 4897 generic.go:334] "Generic (PLEG): container finished" podID="3eccae89-322c-4866-a4cb-044537aaa45a" containerID="d36d7e9bfe400be9deb1e0600ba507a7a48621b9c134bba582b6dfe2ca29d69d" exitCode=0 Feb 28 14:09:25 crc kubenswrapper[4897]: I0228 14:09:25.806048 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wh49x" event={"ID":"3eccae89-322c-4866-a4cb-044537aaa45a","Type":"ContainerDied","Data":"d36d7e9bfe400be9deb1e0600ba507a7a48621b9c134bba582b6dfe2ca29d69d"} Feb 28 14:09:25 crc kubenswrapper[4897]: I0228 14:09:25.835699 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-0" podStartSLOduration=3.835668894 podStartE2EDuration="3.835668894s" podCreationTimestamp="2026-02-28 14:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 14:09:25.832387061 +0000 UTC m=+3180.074707748" watchObservedRunningTime="2026-02-28 14:09:25.835668894 +0000 UTC m=+3180.077989581" Feb 28 
14:09:26 crc kubenswrapper[4897]: I0228 14:09:26.818075 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wh49x" event={"ID":"3eccae89-322c-4866-a4cb-044537aaa45a","Type":"ContainerStarted","Data":"bc9bdccbc8a59dcf7814336f89afb9ab5017e20abf5ed7117d2575190ad1d1cb"} Feb 28 14:09:26 crc kubenswrapper[4897]: I0228 14:09:26.847511 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wh49x" podStartSLOduration=3.056571247 podStartE2EDuration="1m38.847492778s" podCreationTimestamp="2026-02-28 14:07:48 +0000 UTC" firstStartedPulling="2026-02-28 14:07:50.427961135 +0000 UTC m=+3084.670281832" lastFinishedPulling="2026-02-28 14:09:26.218882696 +0000 UTC m=+3180.461203363" observedRunningTime="2026-02-28 14:09:26.841814787 +0000 UTC m=+3181.084135494" watchObservedRunningTime="2026-02-28 14:09:26.847492778 +0000 UTC m=+3181.089813445" Feb 28 14:09:27 crc kubenswrapper[4897]: I0228 14:09:27.349467 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Feb 28 14:09:27 crc kubenswrapper[4897]: I0228 14:09:27.399905 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:27 crc kubenswrapper[4897]: I0228 14:09:27.438434 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:29 crc kubenswrapper[4897]: I0228 14:09:29.183026 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wh49x" Feb 28 14:09:29 crc kubenswrapper[4897]: I0228 14:09:29.183972 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wh49x" Feb 28 14:09:30 crc kubenswrapper[4897]: I0228 14:09:30.238814 4897 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-marketplace-wh49x" podUID="3eccae89-322c-4866-a4cb-044537aaa45a" containerName="registry-server" probeResult="failure" output=< Feb 28 14:09:30 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 28 14:09:30 crc kubenswrapper[4897]: > Feb 28 14:09:32 crc kubenswrapper[4897]: I0228 14:09:32.456429 4897 scope.go:117] "RemoveContainer" containerID="a565d5be8c3d16a7dd1744beb9c89934975eb5a241878dfe53fef3d98e1be2d2" Feb 28 14:09:32 crc kubenswrapper[4897]: E0228 14:09:32.457870 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:09:32 crc kubenswrapper[4897]: I0228 14:09:32.630646 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-2-0" Feb 28 14:09:32 crc kubenswrapper[4897]: I0228 14:09:32.638907 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-0" Feb 28 14:09:32 crc kubenswrapper[4897]: I0228 14:09:32.685607 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Feb 28 14:09:39 crc kubenswrapper[4897]: I0228 14:09:39.251301 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wh49x" Feb 28 14:09:39 crc kubenswrapper[4897]: I0228 14:09:39.339129 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wh49x" Feb 28 14:09:39 crc kubenswrapper[4897]: I0228 14:09:39.509611 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-wh49x"] Feb 28 14:09:41 crc kubenswrapper[4897]: I0228 14:09:41.002563 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wh49x" podUID="3eccae89-322c-4866-a4cb-044537aaa45a" containerName="registry-server" containerID="cri-o://bc9bdccbc8a59dcf7814336f89afb9ab5017e20abf5ed7117d2575190ad1d1cb" gracePeriod=2 Feb 28 14:09:41 crc kubenswrapper[4897]: I0228 14:09:41.630743 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wh49x" Feb 28 14:09:41 crc kubenswrapper[4897]: I0228 14:09:41.699578 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3eccae89-322c-4866-a4cb-044537aaa45a-utilities\") pod \"3eccae89-322c-4866-a4cb-044537aaa45a\" (UID: \"3eccae89-322c-4866-a4cb-044537aaa45a\") " Feb 28 14:09:41 crc kubenswrapper[4897]: I0228 14:09:41.699694 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6q58x\" (UniqueName: \"kubernetes.io/projected/3eccae89-322c-4866-a4cb-044537aaa45a-kube-api-access-6q58x\") pod \"3eccae89-322c-4866-a4cb-044537aaa45a\" (UID: \"3eccae89-322c-4866-a4cb-044537aaa45a\") " Feb 28 14:09:41 crc kubenswrapper[4897]: I0228 14:09:41.699728 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3eccae89-322c-4866-a4cb-044537aaa45a-catalog-content\") pod \"3eccae89-322c-4866-a4cb-044537aaa45a\" (UID: \"3eccae89-322c-4866-a4cb-044537aaa45a\") " Feb 28 14:09:41 crc kubenswrapper[4897]: I0228 14:09:41.700391 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3eccae89-322c-4866-a4cb-044537aaa45a-utilities" (OuterVolumeSpecName: "utilities") pod "3eccae89-322c-4866-a4cb-044537aaa45a" (UID: 
"3eccae89-322c-4866-a4cb-044537aaa45a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:09:41 crc kubenswrapper[4897]: I0228 14:09:41.709248 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3eccae89-322c-4866-a4cb-044537aaa45a-kube-api-access-6q58x" (OuterVolumeSpecName: "kube-api-access-6q58x") pod "3eccae89-322c-4866-a4cb-044537aaa45a" (UID: "3eccae89-322c-4866-a4cb-044537aaa45a"). InnerVolumeSpecName "kube-api-access-6q58x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:09:41 crc kubenswrapper[4897]: I0228 14:09:41.745355 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3eccae89-322c-4866-a4cb-044537aaa45a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3eccae89-322c-4866-a4cb-044537aaa45a" (UID: "3eccae89-322c-4866-a4cb-044537aaa45a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:09:41 crc kubenswrapper[4897]: I0228 14:09:41.801329 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6q58x\" (UniqueName: \"kubernetes.io/projected/3eccae89-322c-4866-a4cb-044537aaa45a-kube-api-access-6q58x\") on node \"crc\" DevicePath \"\"" Feb 28 14:09:41 crc kubenswrapper[4897]: I0228 14:09:41.801369 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3eccae89-322c-4866-a4cb-044537aaa45a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 14:09:41 crc kubenswrapper[4897]: I0228 14:09:41.801382 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3eccae89-322c-4866-a4cb-044537aaa45a-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 14:09:42 crc kubenswrapper[4897]: I0228 14:09:42.019837 4897 generic.go:334] "Generic (PLEG): container finished" 
podID="3eccae89-322c-4866-a4cb-044537aaa45a" containerID="bc9bdccbc8a59dcf7814336f89afb9ab5017e20abf5ed7117d2575190ad1d1cb" exitCode=0 Feb 28 14:09:42 crc kubenswrapper[4897]: I0228 14:09:42.019925 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wh49x" event={"ID":"3eccae89-322c-4866-a4cb-044537aaa45a","Type":"ContainerDied","Data":"bc9bdccbc8a59dcf7814336f89afb9ab5017e20abf5ed7117d2575190ad1d1cb"} Feb 28 14:09:42 crc kubenswrapper[4897]: I0228 14:09:42.019950 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wh49x" Feb 28 14:09:42 crc kubenswrapper[4897]: I0228 14:09:42.019990 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wh49x" event={"ID":"3eccae89-322c-4866-a4cb-044537aaa45a","Type":"ContainerDied","Data":"99657566a42a49f97dd2c5c9f58b755ab69a990cbe8d3439043bc9a05966dc25"} Feb 28 14:09:42 crc kubenswrapper[4897]: I0228 14:09:42.020031 4897 scope.go:117] "RemoveContainer" containerID="bc9bdccbc8a59dcf7814336f89afb9ab5017e20abf5ed7117d2575190ad1d1cb" Feb 28 14:09:42 crc kubenswrapper[4897]: I0228 14:09:42.068978 4897 scope.go:117] "RemoveContainer" containerID="d36d7e9bfe400be9deb1e0600ba507a7a48621b9c134bba582b6dfe2ca29d69d" Feb 28 14:09:42 crc kubenswrapper[4897]: I0228 14:09:42.078597 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wh49x"] Feb 28 14:09:42 crc kubenswrapper[4897]: I0228 14:09:42.091862 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wh49x"] Feb 28 14:09:42 crc kubenswrapper[4897]: I0228 14:09:42.105210 4897 scope.go:117] "RemoveContainer" containerID="c417b1559c7d9155f1596ac97346c1798a78e017e1e4bef59f7ec6af91b2571c" Feb 28 14:09:42 crc kubenswrapper[4897]: I0228 14:09:42.183973 4897 scope.go:117] "RemoveContainer" 
containerID="bc9bdccbc8a59dcf7814336f89afb9ab5017e20abf5ed7117d2575190ad1d1cb" Feb 28 14:09:42 crc kubenswrapper[4897]: E0228 14:09:42.185071 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc9bdccbc8a59dcf7814336f89afb9ab5017e20abf5ed7117d2575190ad1d1cb\": container with ID starting with bc9bdccbc8a59dcf7814336f89afb9ab5017e20abf5ed7117d2575190ad1d1cb not found: ID does not exist" containerID="bc9bdccbc8a59dcf7814336f89afb9ab5017e20abf5ed7117d2575190ad1d1cb" Feb 28 14:09:42 crc kubenswrapper[4897]: I0228 14:09:42.185111 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc9bdccbc8a59dcf7814336f89afb9ab5017e20abf5ed7117d2575190ad1d1cb"} err="failed to get container status \"bc9bdccbc8a59dcf7814336f89afb9ab5017e20abf5ed7117d2575190ad1d1cb\": rpc error: code = NotFound desc = could not find container \"bc9bdccbc8a59dcf7814336f89afb9ab5017e20abf5ed7117d2575190ad1d1cb\": container with ID starting with bc9bdccbc8a59dcf7814336f89afb9ab5017e20abf5ed7117d2575190ad1d1cb not found: ID does not exist" Feb 28 14:09:42 crc kubenswrapper[4897]: I0228 14:09:42.185137 4897 scope.go:117] "RemoveContainer" containerID="d36d7e9bfe400be9deb1e0600ba507a7a48621b9c134bba582b6dfe2ca29d69d" Feb 28 14:09:42 crc kubenswrapper[4897]: E0228 14:09:42.185549 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d36d7e9bfe400be9deb1e0600ba507a7a48621b9c134bba582b6dfe2ca29d69d\": container with ID starting with d36d7e9bfe400be9deb1e0600ba507a7a48621b9c134bba582b6dfe2ca29d69d not found: ID does not exist" containerID="d36d7e9bfe400be9deb1e0600ba507a7a48621b9c134bba582b6dfe2ca29d69d" Feb 28 14:09:42 crc kubenswrapper[4897]: I0228 14:09:42.185575 4897 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d36d7e9bfe400be9deb1e0600ba507a7a48621b9c134bba582b6dfe2ca29d69d"} err="failed to get container status \"d36d7e9bfe400be9deb1e0600ba507a7a48621b9c134bba582b6dfe2ca29d69d\": rpc error: code = NotFound desc = could not find container \"d36d7e9bfe400be9deb1e0600ba507a7a48621b9c134bba582b6dfe2ca29d69d\": container with ID starting with d36d7e9bfe400be9deb1e0600ba507a7a48621b9c134bba582b6dfe2ca29d69d not found: ID does not exist" Feb 28 14:09:42 crc kubenswrapper[4897]: I0228 14:09:42.185594 4897 scope.go:117] "RemoveContainer" containerID="c417b1559c7d9155f1596ac97346c1798a78e017e1e4bef59f7ec6af91b2571c" Feb 28 14:09:42 crc kubenswrapper[4897]: E0228 14:09:42.187224 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c417b1559c7d9155f1596ac97346c1798a78e017e1e4bef59f7ec6af91b2571c\": container with ID starting with c417b1559c7d9155f1596ac97346c1798a78e017e1e4bef59f7ec6af91b2571c not found: ID does not exist" containerID="c417b1559c7d9155f1596ac97346c1798a78e017e1e4bef59f7ec6af91b2571c" Feb 28 14:09:42 crc kubenswrapper[4897]: I0228 14:09:42.187285 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c417b1559c7d9155f1596ac97346c1798a78e017e1e4bef59f7ec6af91b2571c"} err="failed to get container status \"c417b1559c7d9155f1596ac97346c1798a78e017e1e4bef59f7ec6af91b2571c\": rpc error: code = NotFound desc = could not find container \"c417b1559c7d9155f1596ac97346c1798a78e017e1e4bef59f7ec6af91b2571c\": container with ID starting with c417b1559c7d9155f1596ac97346c1798a78e017e1e4bef59f7ec6af91b2571c not found: ID does not exist" Feb 28 14:09:42 crc kubenswrapper[4897]: I0228 14:09:42.508589 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3eccae89-322c-4866-a4cb-044537aaa45a" path="/var/lib/kubelet/pods/3eccae89-322c-4866-a4cb-044537aaa45a/volumes" Feb 28 14:09:45 crc kubenswrapper[4897]: I0228 
14:09:45.456902 4897 scope.go:117] "RemoveContainer" containerID="a565d5be8c3d16a7dd1744beb9c89934975eb5a241878dfe53fef3d98e1be2d2" Feb 28 14:09:45 crc kubenswrapper[4897]: E0228 14:09:45.458085 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:10:00 crc kubenswrapper[4897]: I0228 14:10:00.177446 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538130-jm5gv"] Feb 28 14:10:00 crc kubenswrapper[4897]: E0228 14:10:00.178563 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eccae89-322c-4866-a4cb-044537aaa45a" containerName="registry-server" Feb 28 14:10:00 crc kubenswrapper[4897]: I0228 14:10:00.178585 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eccae89-322c-4866-a4cb-044537aaa45a" containerName="registry-server" Feb 28 14:10:00 crc kubenswrapper[4897]: E0228 14:10:00.178632 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eccae89-322c-4866-a4cb-044537aaa45a" containerName="extract-utilities" Feb 28 14:10:00 crc kubenswrapper[4897]: I0228 14:10:00.178645 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eccae89-322c-4866-a4cb-044537aaa45a" containerName="extract-utilities" Feb 28 14:10:00 crc kubenswrapper[4897]: E0228 14:10:00.178709 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eccae89-322c-4866-a4cb-044537aaa45a" containerName="extract-content" Feb 28 14:10:00 crc kubenswrapper[4897]: I0228 14:10:00.178723 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eccae89-322c-4866-a4cb-044537aaa45a" containerName="extract-content" Feb 28 
14:10:00 crc kubenswrapper[4897]: I0228 14:10:00.179083 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eccae89-322c-4866-a4cb-044537aaa45a" containerName="registry-server" Feb 28 14:10:00 crc kubenswrapper[4897]: I0228 14:10:00.180115 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538130-jm5gv" Feb 28 14:10:00 crc kubenswrapper[4897]: I0228 14:10:00.184978 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 14:10:00 crc kubenswrapper[4897]: I0228 14:10:00.185482 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 14:10:00 crc kubenswrapper[4897]: I0228 14:10:00.185553 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 14:10:00 crc kubenswrapper[4897]: I0228 14:10:00.190815 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538130-jm5gv"] Feb 28 14:10:00 crc kubenswrapper[4897]: I0228 14:10:00.259158 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxv4x\" (UniqueName: \"kubernetes.io/projected/606897b8-41d5-4034-92c2-bf0c7423d0ac-kube-api-access-qxv4x\") pod \"auto-csr-approver-29538130-jm5gv\" (UID: \"606897b8-41d5-4034-92c2-bf0c7423d0ac\") " pod="openshift-infra/auto-csr-approver-29538130-jm5gv" Feb 28 14:10:00 crc kubenswrapper[4897]: I0228 14:10:00.364537 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxv4x\" (UniqueName: \"kubernetes.io/projected/606897b8-41d5-4034-92c2-bf0c7423d0ac-kube-api-access-qxv4x\") pod \"auto-csr-approver-29538130-jm5gv\" (UID: \"606897b8-41d5-4034-92c2-bf0c7423d0ac\") " pod="openshift-infra/auto-csr-approver-29538130-jm5gv" Feb 28 14:10:00 crc kubenswrapper[4897]: I0228 
14:10:00.395805 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxv4x\" (UniqueName: \"kubernetes.io/projected/606897b8-41d5-4034-92c2-bf0c7423d0ac-kube-api-access-qxv4x\") pod \"auto-csr-approver-29538130-jm5gv\" (UID: \"606897b8-41d5-4034-92c2-bf0c7423d0ac\") " pod="openshift-infra/auto-csr-approver-29538130-jm5gv" Feb 28 14:10:00 crc kubenswrapper[4897]: I0228 14:10:00.457650 4897 scope.go:117] "RemoveContainer" containerID="a565d5be8c3d16a7dd1744beb9c89934975eb5a241878dfe53fef3d98e1be2d2" Feb 28 14:10:00 crc kubenswrapper[4897]: E0228 14:10:00.458100 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:10:00 crc kubenswrapper[4897]: I0228 14:10:00.514959 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538130-jm5gv" Feb 28 14:10:01 crc kubenswrapper[4897]: I0228 14:10:01.087224 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538130-jm5gv"] Feb 28 14:10:01 crc kubenswrapper[4897]: W0228 14:10:01.088571 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod606897b8_41d5_4034_92c2_bf0c7423d0ac.slice/crio-320851d41b834b06062cf12b6ade324718b54ee885cec50c9d43318bf079503a WatchSource:0}: Error finding container 320851d41b834b06062cf12b6ade324718b54ee885cec50c9d43318bf079503a: Status 404 returned error can't find the container with id 320851d41b834b06062cf12b6ade324718b54ee885cec50c9d43318bf079503a Feb 28 14:10:01 crc kubenswrapper[4897]: I0228 14:10:01.275074 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538130-jm5gv" event={"ID":"606897b8-41d5-4034-92c2-bf0c7423d0ac","Type":"ContainerStarted","Data":"320851d41b834b06062cf12b6ade324718b54ee885cec50c9d43318bf079503a"} Feb 28 14:10:03 crc kubenswrapper[4897]: I0228 14:10:03.308253 4897 generic.go:334] "Generic (PLEG): container finished" podID="606897b8-41d5-4034-92c2-bf0c7423d0ac" containerID="cbb827dd29c9c575beeba819dea9626b1c21340186a62028d32cf6f2476dbee4" exitCode=0 Feb 28 14:10:03 crc kubenswrapper[4897]: I0228 14:10:03.308489 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538130-jm5gv" event={"ID":"606897b8-41d5-4034-92c2-bf0c7423d0ac","Type":"ContainerDied","Data":"cbb827dd29c9c575beeba819dea9626b1c21340186a62028d32cf6f2476dbee4"} Feb 28 14:10:04 crc kubenswrapper[4897]: I0228 14:10:04.738345 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538130-jm5gv" Feb 28 14:10:04 crc kubenswrapper[4897]: I0228 14:10:04.808919 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxv4x\" (UniqueName: \"kubernetes.io/projected/606897b8-41d5-4034-92c2-bf0c7423d0ac-kube-api-access-qxv4x\") pod \"606897b8-41d5-4034-92c2-bf0c7423d0ac\" (UID: \"606897b8-41d5-4034-92c2-bf0c7423d0ac\") " Feb 28 14:10:04 crc kubenswrapper[4897]: I0228 14:10:04.819792 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/606897b8-41d5-4034-92c2-bf0c7423d0ac-kube-api-access-qxv4x" (OuterVolumeSpecName: "kube-api-access-qxv4x") pod "606897b8-41d5-4034-92c2-bf0c7423d0ac" (UID: "606897b8-41d5-4034-92c2-bf0c7423d0ac"). InnerVolumeSpecName "kube-api-access-qxv4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:10:04 crc kubenswrapper[4897]: I0228 14:10:04.911996 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxv4x\" (UniqueName: \"kubernetes.io/projected/606897b8-41d5-4034-92c2-bf0c7423d0ac-kube-api-access-qxv4x\") on node \"crc\" DevicePath \"\"" Feb 28 14:10:05 crc kubenswrapper[4897]: I0228 14:10:05.340726 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538130-jm5gv" event={"ID":"606897b8-41d5-4034-92c2-bf0c7423d0ac","Type":"ContainerDied","Data":"320851d41b834b06062cf12b6ade324718b54ee885cec50c9d43318bf079503a"} Feb 28 14:10:05 crc kubenswrapper[4897]: I0228 14:10:05.340787 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="320851d41b834b06062cf12b6ade324718b54ee885cec50c9d43318bf079503a" Feb 28 14:10:05 crc kubenswrapper[4897]: I0228 14:10:05.340790 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538130-jm5gv" Feb 28 14:10:05 crc kubenswrapper[4897]: I0228 14:10:05.841997 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538124-s5qrp"] Feb 28 14:10:05 crc kubenswrapper[4897]: I0228 14:10:05.856345 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538124-s5qrp"] Feb 28 14:10:06 crc kubenswrapper[4897]: I0228 14:10:06.478135 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6297be3-6c7c-40c6-823e-ab3e4233cd7d" path="/var/lib/kubelet/pods/f6297be3-6c7c-40c6-823e-ab3e4233cd7d/volumes" Feb 28 14:10:12 crc kubenswrapper[4897]: I0228 14:10:12.456134 4897 scope.go:117] "RemoveContainer" containerID="a565d5be8c3d16a7dd1744beb9c89934975eb5a241878dfe53fef3d98e1be2d2" Feb 28 14:10:12 crc kubenswrapper[4897]: E0228 14:10:12.457259 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:10:24 crc kubenswrapper[4897]: I0228 14:10:24.456845 4897 scope.go:117] "RemoveContainer" containerID="a565d5be8c3d16a7dd1744beb9c89934975eb5a241878dfe53fef3d98e1be2d2" Feb 28 14:10:24 crc kubenswrapper[4897]: E0228 14:10:24.457893 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" 
podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:10:30 crc kubenswrapper[4897]: I0228 14:10:30.247209 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 28 14:10:30 crc kubenswrapper[4897]: I0228 14:10:30.249479 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="ed5ef2f7-8287-429b-ba57-6ade31e8e43c" containerName="prometheus" containerID="cri-o://b2dd142e69e8ad7b8118666bbfcad750c2fc36286c5a211b62b1c77b5eade357" gracePeriod=600 Feb 28 14:10:30 crc kubenswrapper[4897]: I0228 14:10:30.249577 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="ed5ef2f7-8287-429b-ba57-6ade31e8e43c" containerName="config-reloader" containerID="cri-o://cc477811f2b61ff85c16f720556fb8db4898587dc7e91ea7f258a6eb0e4b4601" gracePeriod=600 Feb 28 14:10:30 crc kubenswrapper[4897]: I0228 14:10:30.249577 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="ed5ef2f7-8287-429b-ba57-6ade31e8e43c" containerName="thanos-sidecar" containerID="cri-o://a365c534d055185eca097a572503a7f4930b332977f59323556a12edf1c1c717" gracePeriod=600 Feb 28 14:10:30 crc kubenswrapper[4897]: I0228 14:10:30.658555 4897 generic.go:334] "Generic (PLEG): container finished" podID="ed5ef2f7-8287-429b-ba57-6ade31e8e43c" containerID="a365c534d055185eca097a572503a7f4930b332977f59323556a12edf1c1c717" exitCode=0 Feb 28 14:10:30 crc kubenswrapper[4897]: I0228 14:10:30.658585 4897 generic.go:334] "Generic (PLEG): container finished" podID="ed5ef2f7-8287-429b-ba57-6ade31e8e43c" containerID="b2dd142e69e8ad7b8118666bbfcad750c2fc36286c5a211b62b1c77b5eade357" exitCode=0 Feb 28 14:10:30 crc kubenswrapper[4897]: I0228 14:10:30.658604 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"ed5ef2f7-8287-429b-ba57-6ade31e8e43c","Type":"ContainerDied","Data":"a365c534d055185eca097a572503a7f4930b332977f59323556a12edf1c1c717"} Feb 28 14:10:30 crc kubenswrapper[4897]: I0228 14:10:30.658632 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ed5ef2f7-8287-429b-ba57-6ade31e8e43c","Type":"ContainerDied","Data":"b2dd142e69e8ad7b8118666bbfcad750c2fc36286c5a211b62b1c77b5eade357"} Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.352527 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.532277 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-thanos-prometheus-http-client-file\") pod \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.533552 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9\") pod \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.533647 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-config\") pod \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.533676 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-tls-assets\") pod \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.533738 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.533827 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-prometheus-metric-storage-rulefiles-1\") pod \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.533849 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgxrr\" (UniqueName: \"kubernetes.io/projected/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-kube-api-access-kgxrr\") pod \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.533913 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-prometheus-metric-storage-rulefiles-2\") pod \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.533948 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-secret-combined-ca-bundle\") pod \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.534020 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-web-config\") pod \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.534049 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.534077 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-config-out\") pod \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.534104 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-prometheus-metric-storage-rulefiles-0\") pod \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.535102 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: 
"prometheus-metric-storage-rulefiles-0") pod "ed5ef2f7-8287-429b-ba57-6ade31e8e43c" (UID: "ed5ef2f7-8287-429b-ba57-6ade31e8e43c"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.535578 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "ed5ef2f7-8287-429b-ba57-6ade31e8e43c" (UID: "ed5ef2f7-8287-429b-ba57-6ade31e8e43c"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.537794 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "ed5ef2f7-8287-429b-ba57-6ade31e8e43c" (UID: "ed5ef2f7-8287-429b-ba57-6ade31e8e43c"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.539510 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "ed5ef2f7-8287-429b-ba57-6ade31e8e43c" (UID: "ed5ef2f7-8287-429b-ba57-6ade31e8e43c"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.542057 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-config" (OuterVolumeSpecName: "config") pod "ed5ef2f7-8287-429b-ba57-6ade31e8e43c" (UID: "ed5ef2f7-8287-429b-ba57-6ade31e8e43c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.544239 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-kube-api-access-kgxrr" (OuterVolumeSpecName: "kube-api-access-kgxrr") pod "ed5ef2f7-8287-429b-ba57-6ade31e8e43c" (UID: "ed5ef2f7-8287-429b-ba57-6ade31e8e43c"). InnerVolumeSpecName "kube-api-access-kgxrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.544291 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "ed5ef2f7-8287-429b-ba57-6ade31e8e43c" (UID: "ed5ef2f7-8287-429b-ba57-6ade31e8e43c"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.544725 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "ed5ef2f7-8287-429b-ba57-6ade31e8e43c" (UID: "ed5ef2f7-8287-429b-ba57-6ade31e8e43c"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.544767 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "ed5ef2f7-8287-429b-ba57-6ade31e8e43c" (UID: "ed5ef2f7-8287-429b-ba57-6ade31e8e43c"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.545361 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-config-out" (OuterVolumeSpecName: "config-out") pod "ed5ef2f7-8287-429b-ba57-6ade31e8e43c" (UID: "ed5ef2f7-8287-429b-ba57-6ade31e8e43c"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.567364 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "ed5ef2f7-8287-429b-ba57-6ade31e8e43c" (UID: "ed5ef2f7-8287-429b-ba57-6ade31e8e43c"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.605948 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "ed5ef2f7-8287-429b-ba57-6ade31e8e43c" (UID: "ed5ef2f7-8287-429b-ba57-6ade31e8e43c"). InnerVolumeSpecName "pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.635581 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-web-config" (OuterVolumeSpecName: "web-config") pod "ed5ef2f7-8287-429b-ba57-6ade31e8e43c" (UID: "ed5ef2f7-8287-429b-ba57-6ade31e8e43c"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.636441 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-web-config\") pod \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\" (UID: \"ed5ef2f7-8287-429b-ba57-6ade31e8e43c\") " Feb 28 14:10:31 crc kubenswrapper[4897]: W0228 14:10:31.636569 4897 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/ed5ef2f7-8287-429b-ba57-6ade31e8e43c/volumes/kubernetes.io~secret/web-config Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.636590 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-web-config" (OuterVolumeSpecName: "web-config") pod "ed5ef2f7-8287-429b-ba57-6ade31e8e43c" (UID: "ed5ef2f7-8287-429b-ba57-6ade31e8e43c"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.637109 4897 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.637145 4897 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9\") on node \"crc\" " Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.637161 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-config\") on node \"crc\" DevicePath \"\"" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.637173 4897 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.637186 4897 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.637201 4897 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.637215 4897 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-kgxrr\" (UniqueName: \"kubernetes.io/projected/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-kube-api-access-kgxrr\") on node \"crc\" DevicePath \"\"" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.637228 4897 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.637240 4897 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.637252 4897 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-web-config\") on node \"crc\" DevicePath \"\"" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.637264 4897 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.637276 4897 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-config-out\") on node \"crc\" DevicePath \"\"" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.637288 4897 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ed5ef2f7-8287-429b-ba57-6ade31e8e43c-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 28 14:10:31 crc kubenswrapper[4897]: 
I0228 14:10:31.671643 4897 generic.go:334] "Generic (PLEG): container finished" podID="ed5ef2f7-8287-429b-ba57-6ade31e8e43c" containerID="cc477811f2b61ff85c16f720556fb8db4898587dc7e91ea7f258a6eb0e4b4601" exitCode=0 Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.671692 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ed5ef2f7-8287-429b-ba57-6ade31e8e43c","Type":"ContainerDied","Data":"cc477811f2b61ff85c16f720556fb8db4898587dc7e91ea7f258a6eb0e4b4601"} Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.671724 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ed5ef2f7-8287-429b-ba57-6ade31e8e43c","Type":"ContainerDied","Data":"67367fc3a6f47595fa668891a2b8427022e55b26c109b9f3e639defeda2919f6"} Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.671743 4897 scope.go:117] "RemoveContainer" containerID="a365c534d055185eca097a572503a7f4930b332977f59323556a12edf1c1c717" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.671967 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.688404 4897 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.688557 4897 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9") on node "crc" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.728065 4897 scope.go:117] "RemoveContainer" containerID="cc477811f2b61ff85c16f720556fb8db4898587dc7e91ea7f258a6eb0e4b4601" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.745135 4897 reconciler_common.go:293] "Volume detached for volume \"pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9\") on node \"crc\" DevicePath \"\"" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.753651 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.759914 4897 scope.go:117] "RemoveContainer" containerID="b2dd142e69e8ad7b8118666bbfcad750c2fc36286c5a211b62b1c77b5eade357" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.761673 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.787739 4897 scope.go:117] "RemoveContainer" containerID="724d8fc7909066e1ff776c2976f3d28052bbf11cfb5bf216d973351d020fc133" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.812218 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 28 14:10:31 crc kubenswrapper[4897]: E0228 14:10:31.812631 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed5ef2f7-8287-429b-ba57-6ade31e8e43c" containerName="prometheus" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.812647 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed5ef2f7-8287-429b-ba57-6ade31e8e43c" 
containerName="prometheus" Feb 28 14:10:31 crc kubenswrapper[4897]: E0228 14:10:31.812678 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed5ef2f7-8287-429b-ba57-6ade31e8e43c" containerName="config-reloader" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.812684 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed5ef2f7-8287-429b-ba57-6ade31e8e43c" containerName="config-reloader" Feb 28 14:10:31 crc kubenswrapper[4897]: E0228 14:10:31.812691 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="606897b8-41d5-4034-92c2-bf0c7423d0ac" containerName="oc" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.812696 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="606897b8-41d5-4034-92c2-bf0c7423d0ac" containerName="oc" Feb 28 14:10:31 crc kubenswrapper[4897]: E0228 14:10:31.812715 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed5ef2f7-8287-429b-ba57-6ade31e8e43c" containerName="thanos-sidecar" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.812720 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed5ef2f7-8287-429b-ba57-6ade31e8e43c" containerName="thanos-sidecar" Feb 28 14:10:31 crc kubenswrapper[4897]: E0228 14:10:31.812734 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed5ef2f7-8287-429b-ba57-6ade31e8e43c" containerName="init-config-reloader" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.812740 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed5ef2f7-8287-429b-ba57-6ade31e8e43c" containerName="init-config-reloader" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.812899 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="606897b8-41d5-4034-92c2-bf0c7423d0ac" containerName="oc" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.812912 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed5ef2f7-8287-429b-ba57-6ade31e8e43c" containerName="thanos-sidecar" Feb 28 
14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.812925 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed5ef2f7-8287-429b-ba57-6ade31e8e43c" containerName="prometheus" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.812943 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed5ef2f7-8287-429b-ba57-6ade31e8e43c" containerName="config-reloader" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.816899 4897 scope.go:117] "RemoveContainer" containerID="a365c534d055185eca097a572503a7f4930b332977f59323556a12edf1c1c717" Feb 28 14:10:31 crc kubenswrapper[4897]: E0228 14:10:31.820455 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a365c534d055185eca097a572503a7f4930b332977f59323556a12edf1c1c717\": container with ID starting with a365c534d055185eca097a572503a7f4930b332977f59323556a12edf1c1c717 not found: ID does not exist" containerID="a365c534d055185eca097a572503a7f4930b332977f59323556a12edf1c1c717" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.820495 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a365c534d055185eca097a572503a7f4930b332977f59323556a12edf1c1c717"} err="failed to get container status \"a365c534d055185eca097a572503a7f4930b332977f59323556a12edf1c1c717\": rpc error: code = NotFound desc = could not find container \"a365c534d055185eca097a572503a7f4930b332977f59323556a12edf1c1c717\": container with ID starting with a365c534d055185eca097a572503a7f4930b332977f59323556a12edf1c1c717 not found: ID does not exist" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.820521 4897 scope.go:117] "RemoveContainer" containerID="cc477811f2b61ff85c16f720556fb8db4898587dc7e91ea7f258a6eb0e4b4601" Feb 28 14:10:31 crc kubenswrapper[4897]: E0228 14:10:31.824436 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"cc477811f2b61ff85c16f720556fb8db4898587dc7e91ea7f258a6eb0e4b4601\": container with ID starting with cc477811f2b61ff85c16f720556fb8db4898587dc7e91ea7f258a6eb0e4b4601 not found: ID does not exist" containerID="cc477811f2b61ff85c16f720556fb8db4898587dc7e91ea7f258a6eb0e4b4601" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.824473 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc477811f2b61ff85c16f720556fb8db4898587dc7e91ea7f258a6eb0e4b4601"} err="failed to get container status \"cc477811f2b61ff85c16f720556fb8db4898587dc7e91ea7f258a6eb0e4b4601\": rpc error: code = NotFound desc = could not find container \"cc477811f2b61ff85c16f720556fb8db4898587dc7e91ea7f258a6eb0e4b4601\": container with ID starting with cc477811f2b61ff85c16f720556fb8db4898587dc7e91ea7f258a6eb0e4b4601 not found: ID does not exist" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.824494 4897 scope.go:117] "RemoveContainer" containerID="b2dd142e69e8ad7b8118666bbfcad750c2fc36286c5a211b62b1c77b5eade357" Feb 28 14:10:31 crc kubenswrapper[4897]: E0228 14:10:31.825839 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2dd142e69e8ad7b8118666bbfcad750c2fc36286c5a211b62b1c77b5eade357\": container with ID starting with b2dd142e69e8ad7b8118666bbfcad750c2fc36286c5a211b62b1c77b5eade357 not found: ID does not exist" containerID="b2dd142e69e8ad7b8118666bbfcad750c2fc36286c5a211b62b1c77b5eade357" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.825858 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2dd142e69e8ad7b8118666bbfcad750c2fc36286c5a211b62b1c77b5eade357"} err="failed to get container status \"b2dd142e69e8ad7b8118666bbfcad750c2fc36286c5a211b62b1c77b5eade357\": rpc error: code = NotFound desc = could not find container \"b2dd142e69e8ad7b8118666bbfcad750c2fc36286c5a211b62b1c77b5eade357\": container with ID 
starting with b2dd142e69e8ad7b8118666bbfcad750c2fc36286c5a211b62b1c77b5eade357 not found: ID does not exist" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.825872 4897 scope.go:117] "RemoveContainer" containerID="724d8fc7909066e1ff776c2976f3d28052bbf11cfb5bf216d973351d020fc133" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.826163 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.829583 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 28 14:10:31 crc kubenswrapper[4897]: E0228 14:10:31.829763 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"724d8fc7909066e1ff776c2976f3d28052bbf11cfb5bf216d973351d020fc133\": container with ID starting with 724d8fc7909066e1ff776c2976f3d28052bbf11cfb5bf216d973351d020fc133 not found: ID does not exist" containerID="724d8fc7909066e1ff776c2976f3d28052bbf11cfb5bf216d973351d020fc133" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.829787 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"724d8fc7909066e1ff776c2976f3d28052bbf11cfb5bf216d973351d020fc133"} err="failed to get container status \"724d8fc7909066e1ff776c2976f3d28052bbf11cfb5bf216d973351d020fc133\": rpc error: code = NotFound desc = could not find container \"724d8fc7909066e1ff776c2976f3d28052bbf11cfb5bf216d973351d020fc133\": container with ID starting with 724d8fc7909066e1ff776c2976f3d28052bbf11cfb5bf216d973351d020fc133 not found: ID does not exist" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.834800 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.834957 4897 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.835053 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.835726 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.835838 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-6zn4s" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.843795 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.852636 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.853023 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.951458 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v6xh\" (UniqueName: \"kubernetes.io/projected/6b56bf6f-f92e-4b96-a449-597cee08338d-kube-api-access-2v6xh\") pod \"prometheus-metric-storage-0\" (UID: \"6b56bf6f-f92e-4b96-a449-597cee08338d\") " pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.951508 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6b56bf6f-f92e-4b96-a449-597cee08338d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"6b56bf6f-f92e-4b96-a449-597cee08338d\") " pod="openstack/prometheus-metric-storage-0" 
Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.951530 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6b56bf6f-f92e-4b96-a449-597cee08338d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"6b56bf6f-f92e-4b96-a449-597cee08338d\") " pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.951563 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6b56bf6f-f92e-4b96-a449-597cee08338d-config\") pod \"prometheus-metric-storage-0\" (UID: \"6b56bf6f-f92e-4b96-a449-597cee08338d\") " pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.951583 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6b56bf6f-f92e-4b96-a449-597cee08338d-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"6b56bf6f-f92e-4b96-a449-597cee08338d\") " pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.951631 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6b56bf6f-f92e-4b96-a449-597cee08338d-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"6b56bf6f-f92e-4b96-a449-597cee08338d\") " pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.951675 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" 
(UniqueName: \"kubernetes.io/secret/6b56bf6f-f92e-4b96-a449-597cee08338d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"6b56bf6f-f92e-4b96-a449-597cee08338d\") " pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.951693 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b56bf6f-f92e-4b96-a449-597cee08338d-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"6b56bf6f-f92e-4b96-a449-597cee08338d\") " pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.951716 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/6b56bf6f-f92e-4b96-a449-597cee08338d-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"6b56bf6f-f92e-4b96-a449-597cee08338d\") " pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.951734 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9\") pod \"prometheus-metric-storage-0\" (UID: \"6b56bf6f-f92e-4b96-a449-597cee08338d\") " pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.951775 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6b56bf6f-f92e-4b96-a449-597cee08338d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"6b56bf6f-f92e-4b96-a449-597cee08338d\") " 
pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.951802 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6b56bf6f-f92e-4b96-a449-597cee08338d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"6b56bf6f-f92e-4b96-a449-597cee08338d\") " pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:31 crc kubenswrapper[4897]: I0228 14:10:31.951821 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/6b56bf6f-f92e-4b96-a449-597cee08338d-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"6b56bf6f-f92e-4b96-a449-597cee08338d\") " pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:32 crc kubenswrapper[4897]: I0228 14:10:32.053336 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6b56bf6f-f92e-4b96-a449-597cee08338d-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"6b56bf6f-f92e-4b96-a449-597cee08338d\") " pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:32 crc kubenswrapper[4897]: I0228 14:10:32.053410 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6b56bf6f-f92e-4b96-a449-597cee08338d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"6b56bf6f-f92e-4b96-a449-597cee08338d\") " pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:32 crc kubenswrapper[4897]: I0228 14:10:32.053428 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6b56bf6f-f92e-4b96-a449-597cee08338d-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"6b56bf6f-f92e-4b96-a449-597cee08338d\") " pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:32 crc kubenswrapper[4897]: I0228 14:10:32.053455 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/6b56bf6f-f92e-4b96-a449-597cee08338d-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"6b56bf6f-f92e-4b96-a449-597cee08338d\") " pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:32 crc kubenswrapper[4897]: I0228 14:10:32.053476 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9\") pod \"prometheus-metric-storage-0\" (UID: \"6b56bf6f-f92e-4b96-a449-597cee08338d\") " pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:32 crc kubenswrapper[4897]: I0228 14:10:32.053519 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6b56bf6f-f92e-4b96-a449-597cee08338d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"6b56bf6f-f92e-4b96-a449-597cee08338d\") " pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:32 crc kubenswrapper[4897]: I0228 14:10:32.053553 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6b56bf6f-f92e-4b96-a449-597cee08338d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"6b56bf6f-f92e-4b96-a449-597cee08338d\") " pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:32 crc kubenswrapper[4897]: I0228 14:10:32.053575 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/6b56bf6f-f92e-4b96-a449-597cee08338d-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"6b56bf6f-f92e-4b96-a449-597cee08338d\") " pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:32 crc kubenswrapper[4897]: I0228 14:10:32.053616 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v6xh\" (UniqueName: \"kubernetes.io/projected/6b56bf6f-f92e-4b96-a449-597cee08338d-kube-api-access-2v6xh\") pod \"prometheus-metric-storage-0\" (UID: \"6b56bf6f-f92e-4b96-a449-597cee08338d\") " pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:32 crc kubenswrapper[4897]: I0228 14:10:32.053633 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6b56bf6f-f92e-4b96-a449-597cee08338d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"6b56bf6f-f92e-4b96-a449-597cee08338d\") " pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:32 crc kubenswrapper[4897]: I0228 14:10:32.053650 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6b56bf6f-f92e-4b96-a449-597cee08338d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"6b56bf6f-f92e-4b96-a449-597cee08338d\") " pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:32 crc kubenswrapper[4897]: I0228 14:10:32.053678 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6b56bf6f-f92e-4b96-a449-597cee08338d-config\") pod \"prometheus-metric-storage-0\" (UID: \"6b56bf6f-f92e-4b96-a449-597cee08338d\") " pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:32 crc kubenswrapper[4897]: I0228 14:10:32.053699 4897 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6b56bf6f-f92e-4b96-a449-597cee08338d-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"6b56bf6f-f92e-4b96-a449-597cee08338d\") " pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:32 crc kubenswrapper[4897]: I0228 14:10:32.054425 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6b56bf6f-f92e-4b96-a449-597cee08338d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"6b56bf6f-f92e-4b96-a449-597cee08338d\") " pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:32 crc kubenswrapper[4897]: I0228 14:10:32.054779 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/6b56bf6f-f92e-4b96-a449-597cee08338d-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"6b56bf6f-f92e-4b96-a449-597cee08338d\") " pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:32 crc kubenswrapper[4897]: I0228 14:10:32.055080 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/6b56bf6f-f92e-4b96-a449-597cee08338d-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"6b56bf6f-f92e-4b96-a449-597cee08338d\") " pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:32 crc kubenswrapper[4897]: I0228 14:10:32.056916 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6b56bf6f-f92e-4b96-a449-597cee08338d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"6b56bf6f-f92e-4b96-a449-597cee08338d\") " 
pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:32 crc kubenswrapper[4897]: I0228 14:10:32.057209 4897 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 28 14:10:32 crc kubenswrapper[4897]: I0228 14:10:32.057234 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9\") pod \"prometheus-metric-storage-0\" (UID: \"6b56bf6f-f92e-4b96-a449-597cee08338d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/de9fbdfeb629ec9e72fb17ffcc3a651e10bfb0662587d0069f50b747406f5447/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:32 crc kubenswrapper[4897]: I0228 14:10:32.057282 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b56bf6f-f92e-4b96-a449-597cee08338d-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"6b56bf6f-f92e-4b96-a449-597cee08338d\") " pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:32 crc kubenswrapper[4897]: I0228 14:10:32.059692 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6b56bf6f-f92e-4b96-a449-597cee08338d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"6b56bf6f-f92e-4b96-a449-597cee08338d\") " pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:32 crc kubenswrapper[4897]: I0228 14:10:32.059777 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6b56bf6f-f92e-4b96-a449-597cee08338d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"6b56bf6f-f92e-4b96-a449-597cee08338d\") " pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:32 crc 
kubenswrapper[4897]: I0228 14:10:32.060266 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6b56bf6f-f92e-4b96-a449-597cee08338d-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"6b56bf6f-f92e-4b96-a449-597cee08338d\") " pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:32 crc kubenswrapper[4897]: I0228 14:10:32.061086 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6b56bf6f-f92e-4b96-a449-597cee08338d-config\") pod \"prometheus-metric-storage-0\" (UID: \"6b56bf6f-f92e-4b96-a449-597cee08338d\") " pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:32 crc kubenswrapper[4897]: I0228 14:10:32.062375 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6b56bf6f-f92e-4b96-a449-597cee08338d-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"6b56bf6f-f92e-4b96-a449-597cee08338d\") " pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:32 crc kubenswrapper[4897]: I0228 14:10:32.064734 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6b56bf6f-f92e-4b96-a449-597cee08338d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"6b56bf6f-f92e-4b96-a449-597cee08338d\") " pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:32 crc kubenswrapper[4897]: I0228 14:10:32.073487 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v6xh\" (UniqueName: \"kubernetes.io/projected/6b56bf6f-f92e-4b96-a449-597cee08338d-kube-api-access-2v6xh\") pod \"prometheus-metric-storage-0\" (UID: 
\"6b56bf6f-f92e-4b96-a449-597cee08338d\") " pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:32 crc kubenswrapper[4897]: I0228 14:10:32.107691 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d71e802d-2856-4fde-a3c2-17d9a7bcbfe9\") pod \"prometheus-metric-storage-0\" (UID: \"6b56bf6f-f92e-4b96-a449-597cee08338d\") " pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:32 crc kubenswrapper[4897]: I0228 14:10:32.188815 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:32 crc kubenswrapper[4897]: I0228 14:10:32.471526 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed5ef2f7-8287-429b-ba57-6ade31e8e43c" path="/var/lib/kubelet/pods/ed5ef2f7-8287-429b-ba57-6ade31e8e43c/volumes" Feb 28 14:10:32 crc kubenswrapper[4897]: I0228 14:10:32.646075 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 28 14:10:32 crc kubenswrapper[4897]: I0228 14:10:32.724274 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6b56bf6f-f92e-4b96-a449-597cee08338d","Type":"ContainerStarted","Data":"1d46e79bd7eb5c9d3157c06e069a0ff35ffb3fd56fb351a20df7df62bacc279c"} Feb 28 14:10:36 crc kubenswrapper[4897]: I0228 14:10:36.769329 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6b56bf6f-f92e-4b96-a449-597cee08338d","Type":"ContainerStarted","Data":"0aef589fbb3b934295f3d8dd77e4423a3184d4f54e4612d4f849f95f9cee0beb"} Feb 28 14:10:38 crc kubenswrapper[4897]: I0228 14:10:38.462868 4897 scope.go:117] "RemoveContainer" containerID="a565d5be8c3d16a7dd1744beb9c89934975eb5a241878dfe53fef3d98e1be2d2" Feb 28 14:10:38 crc kubenswrapper[4897]: E0228 14:10:38.463813 4897 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:10:45 crc kubenswrapper[4897]: I0228 14:10:45.869750 4897 generic.go:334] "Generic (PLEG): container finished" podID="6b56bf6f-f92e-4b96-a449-597cee08338d" containerID="0aef589fbb3b934295f3d8dd77e4423a3184d4f54e4612d4f849f95f9cee0beb" exitCode=0 Feb 28 14:10:45 crc kubenswrapper[4897]: I0228 14:10:45.869862 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6b56bf6f-f92e-4b96-a449-597cee08338d","Type":"ContainerDied","Data":"0aef589fbb3b934295f3d8dd77e4423a3184d4f54e4612d4f849f95f9cee0beb"} Feb 28 14:10:46 crc kubenswrapper[4897]: I0228 14:10:46.895147 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6b56bf6f-f92e-4b96-a449-597cee08338d","Type":"ContainerStarted","Data":"c4b0bf1c6863c58a7aaec13091676ea3fa4e595b847a31647f8efb1d46f248c7"} Feb 28 14:10:47 crc kubenswrapper[4897]: I0228 14:10:47.485827 4897 scope.go:117] "RemoveContainer" containerID="eecbab544b99670a28fbc23563955f83e547b1af40415cf03fe81c717c036dfa" Feb 28 14:10:47 crc kubenswrapper[4897]: I0228 14:10:47.536208 4897 scope.go:117] "RemoveContainer" containerID="1a1a87745437860ac1e22bf1d7217912ac33c81bb6ffb45b8ddceecce6c970d2" Feb 28 14:10:47 crc kubenswrapper[4897]: I0228 14:10:47.620590 4897 scope.go:117] "RemoveContainer" containerID="6a8db35ae3933930f29fd1c59cba975cfd3d459a94ecbcca05d678bafdea7054" Feb 28 14:10:49 crc kubenswrapper[4897]: I0228 14:10:49.928522 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/prometheus-metric-storage-0" event={"ID":"6b56bf6f-f92e-4b96-a449-597cee08338d","Type":"ContainerStarted","Data":"fb53c037d36f470079d04239dd00fcfe44ba176cc721b83dfb26d074787142a9"} Feb 28 14:10:50 crc kubenswrapper[4897]: I0228 14:10:50.940793 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6b56bf6f-f92e-4b96-a449-597cee08338d","Type":"ContainerStarted","Data":"b7b079c683856fd672d3b881acabe673711be18ac2c28b0b3c1b225c50733abf"} Feb 28 14:10:50 crc kubenswrapper[4897]: I0228 14:10:50.995829 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=19.995808036 podStartE2EDuration="19.995808036s" podCreationTimestamp="2026-02-28 14:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 14:10:50.989047984 +0000 UTC m=+3265.231368641" watchObservedRunningTime="2026-02-28 14:10:50.995808036 +0000 UTC m=+3265.238128693" Feb 28 14:10:52 crc kubenswrapper[4897]: I0228 14:10:52.189396 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 28 14:10:52 crc kubenswrapper[4897]: I0228 14:10:52.457189 4897 scope.go:117] "RemoveContainer" containerID="a565d5be8c3d16a7dd1744beb9c89934975eb5a241878dfe53fef3d98e1be2d2" Feb 28 14:10:52 crc kubenswrapper[4897]: E0228 14:10:52.457677 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:11:02 crc kubenswrapper[4897]: I0228 14:11:02.189371 4897 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 28 14:11:02 crc kubenswrapper[4897]: I0228 14:11:02.194983 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 28 14:11:03 crc kubenswrapper[4897]: I0228 14:11:03.086452 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 28 14:11:05 crc kubenswrapper[4897]: I0228 14:11:05.456952 4897 scope.go:117] "RemoveContainer" containerID="a565d5be8c3d16a7dd1744beb9c89934975eb5a241878dfe53fef3d98e1be2d2" Feb 28 14:11:06 crc kubenswrapper[4897]: I0228 14:11:06.122252 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerStarted","Data":"46622fdc81b8121032a4f67fbad65c2518e46caf160555ea308231319df04528"} Feb 28 14:11:07 crc kubenswrapper[4897]: I0228 14:11:07.200254 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 28 14:11:07 crc kubenswrapper[4897]: I0228 14:11:07.203572 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 28 14:11:07 crc kubenswrapper[4897]: I0228 14:11:07.205340 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 28 14:11:07 crc kubenswrapper[4897]: I0228 14:11:07.206002 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-9dtkj" Feb 28 14:11:07 crc kubenswrapper[4897]: I0228 14:11:07.206295 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 28 14:11:07 crc kubenswrapper[4897]: I0228 14:11:07.209466 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 28 14:11:07 crc kubenswrapper[4897]: I0228 14:11:07.213340 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 28 14:11:07 crc kubenswrapper[4897]: I0228 14:11:07.298595 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/49f3154b-02e1-4da4-a498-58e7280a8a64-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"49f3154b-02e1-4da4-a498-58e7280a8a64\") " pod="openstack/tempest-tests-tempest" Feb 28 14:11:07 crc kubenswrapper[4897]: I0228 14:11:07.298714 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"49f3154b-02e1-4da4-a498-58e7280a8a64\") " pod="openstack/tempest-tests-tempest" Feb 28 14:11:07 crc kubenswrapper[4897]: I0228 14:11:07.298746 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/49f3154b-02e1-4da4-a498-58e7280a8a64-test-operator-ephemeral-temporary\") pod 
\"tempest-tests-tempest\" (UID: \"49f3154b-02e1-4da4-a498-58e7280a8a64\") " pod="openstack/tempest-tests-tempest" Feb 28 14:11:07 crc kubenswrapper[4897]: I0228 14:11:07.298766 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t99ks\" (UniqueName: \"kubernetes.io/projected/49f3154b-02e1-4da4-a498-58e7280a8a64-kube-api-access-t99ks\") pod \"tempest-tests-tempest\" (UID: \"49f3154b-02e1-4da4-a498-58e7280a8a64\") " pod="openstack/tempest-tests-tempest" Feb 28 14:11:07 crc kubenswrapper[4897]: I0228 14:11:07.298854 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/49f3154b-02e1-4da4-a498-58e7280a8a64-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"49f3154b-02e1-4da4-a498-58e7280a8a64\") " pod="openstack/tempest-tests-tempest" Feb 28 14:11:07 crc kubenswrapper[4897]: I0228 14:11:07.298872 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/49f3154b-02e1-4da4-a498-58e7280a8a64-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"49f3154b-02e1-4da4-a498-58e7280a8a64\") " pod="openstack/tempest-tests-tempest" Feb 28 14:11:07 crc kubenswrapper[4897]: I0228 14:11:07.298891 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/49f3154b-02e1-4da4-a498-58e7280a8a64-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"49f3154b-02e1-4da4-a498-58e7280a8a64\") " pod="openstack/tempest-tests-tempest" Feb 28 14:11:07 crc kubenswrapper[4897]: I0228 14:11:07.299066 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/49f3154b-02e1-4da4-a498-58e7280a8a64-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"49f3154b-02e1-4da4-a498-58e7280a8a64\") " pod="openstack/tempest-tests-tempest" Feb 28 14:11:07 crc kubenswrapper[4897]: I0228 14:11:07.299282 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/49f3154b-02e1-4da4-a498-58e7280a8a64-config-data\") pod \"tempest-tests-tempest\" (UID: \"49f3154b-02e1-4da4-a498-58e7280a8a64\") " pod="openstack/tempest-tests-tempest" Feb 28 14:11:07 crc kubenswrapper[4897]: I0228 14:11:07.400908 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/49f3154b-02e1-4da4-a498-58e7280a8a64-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"49f3154b-02e1-4da4-a498-58e7280a8a64\") " pod="openstack/tempest-tests-tempest" Feb 28 14:11:07 crc kubenswrapper[4897]: I0228 14:11:07.400952 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/49f3154b-02e1-4da4-a498-58e7280a8a64-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"49f3154b-02e1-4da4-a498-58e7280a8a64\") " pod="openstack/tempest-tests-tempest" Feb 28 14:11:07 crc kubenswrapper[4897]: I0228 14:11:07.400977 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/49f3154b-02e1-4da4-a498-58e7280a8a64-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"49f3154b-02e1-4da4-a498-58e7280a8a64\") " pod="openstack/tempest-tests-tempest" Feb 28 14:11:07 crc kubenswrapper[4897]: I0228 14:11:07.401009 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/49f3154b-02e1-4da4-a498-58e7280a8a64-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"49f3154b-02e1-4da4-a498-58e7280a8a64\") " pod="openstack/tempest-tests-tempest" Feb 28 14:11:07 crc kubenswrapper[4897]: I0228 14:11:07.401069 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/49f3154b-02e1-4da4-a498-58e7280a8a64-config-data\") pod \"tempest-tests-tempest\" (UID: \"49f3154b-02e1-4da4-a498-58e7280a8a64\") " pod="openstack/tempest-tests-tempest" Feb 28 14:11:07 crc kubenswrapper[4897]: I0228 14:11:07.401115 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/49f3154b-02e1-4da4-a498-58e7280a8a64-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"49f3154b-02e1-4da4-a498-58e7280a8a64\") " pod="openstack/tempest-tests-tempest" Feb 28 14:11:07 crc kubenswrapper[4897]: I0228 14:11:07.401177 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"49f3154b-02e1-4da4-a498-58e7280a8a64\") " pod="openstack/tempest-tests-tempest" Feb 28 14:11:07 crc kubenswrapper[4897]: I0228 14:11:07.401205 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/49f3154b-02e1-4da4-a498-58e7280a8a64-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"49f3154b-02e1-4da4-a498-58e7280a8a64\") " pod="openstack/tempest-tests-tempest" Feb 28 14:11:07 crc kubenswrapper[4897]: I0228 14:11:07.401228 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t99ks\" (UniqueName: \"kubernetes.io/projected/49f3154b-02e1-4da4-a498-58e7280a8a64-kube-api-access-t99ks\") pod \"tempest-tests-tempest\" 
(UID: \"49f3154b-02e1-4da4-a498-58e7280a8a64\") " pod="openstack/tempest-tests-tempest" Feb 28 14:11:07 crc kubenswrapper[4897]: I0228 14:11:07.401776 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/49f3154b-02e1-4da4-a498-58e7280a8a64-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"49f3154b-02e1-4da4-a498-58e7280a8a64\") " pod="openstack/tempest-tests-tempest" Feb 28 14:11:07 crc kubenswrapper[4897]: I0228 14:11:07.402126 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/49f3154b-02e1-4da4-a498-58e7280a8a64-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"49f3154b-02e1-4da4-a498-58e7280a8a64\") " pod="openstack/tempest-tests-tempest" Feb 28 14:11:07 crc kubenswrapper[4897]: I0228 14:11:07.402366 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/49f3154b-02e1-4da4-a498-58e7280a8a64-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"49f3154b-02e1-4da4-a498-58e7280a8a64\") " pod="openstack/tempest-tests-tempest" Feb 28 14:11:07 crc kubenswrapper[4897]: I0228 14:11:07.402455 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"49f3154b-02e1-4da4-a498-58e7280a8a64\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/tempest-tests-tempest" Feb 28 14:11:07 crc kubenswrapper[4897]: I0228 14:11:07.402523 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/49f3154b-02e1-4da4-a498-58e7280a8a64-config-data\") pod \"tempest-tests-tempest\" (UID: \"49f3154b-02e1-4da4-a498-58e7280a8a64\") " 
pod="openstack/tempest-tests-tempest" Feb 28 14:11:07 crc kubenswrapper[4897]: I0228 14:11:07.409175 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/49f3154b-02e1-4da4-a498-58e7280a8a64-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"49f3154b-02e1-4da4-a498-58e7280a8a64\") " pod="openstack/tempest-tests-tempest" Feb 28 14:11:07 crc kubenswrapper[4897]: I0228 14:11:07.418425 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/49f3154b-02e1-4da4-a498-58e7280a8a64-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"49f3154b-02e1-4da4-a498-58e7280a8a64\") " pod="openstack/tempest-tests-tempest" Feb 28 14:11:07 crc kubenswrapper[4897]: I0228 14:11:07.419147 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/49f3154b-02e1-4da4-a498-58e7280a8a64-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"49f3154b-02e1-4da4-a498-58e7280a8a64\") " pod="openstack/tempest-tests-tempest" Feb 28 14:11:07 crc kubenswrapper[4897]: I0228 14:11:07.419748 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t99ks\" (UniqueName: \"kubernetes.io/projected/49f3154b-02e1-4da4-a498-58e7280a8a64-kube-api-access-t99ks\") pod \"tempest-tests-tempest\" (UID: \"49f3154b-02e1-4da4-a498-58e7280a8a64\") " pod="openstack/tempest-tests-tempest" Feb 28 14:11:07 crc kubenswrapper[4897]: I0228 14:11:07.451671 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"49f3154b-02e1-4da4-a498-58e7280a8a64\") " pod="openstack/tempest-tests-tempest" Feb 28 14:11:07 crc kubenswrapper[4897]: I0228 14:11:07.545885 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 28 14:11:08 crc kubenswrapper[4897]: I0228 14:11:08.085398 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 28 14:11:08 crc kubenswrapper[4897]: I0228 14:11:08.147476 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"49f3154b-02e1-4da4-a498-58e7280a8a64","Type":"ContainerStarted","Data":"9daf828feb63a9aebe4e3c35bb09466bbd1ecb566d0f634928759f33c6d872ed"} Feb 28 14:11:20 crc kubenswrapper[4897]: I0228 14:11:20.283271 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"49f3154b-02e1-4da4-a498-58e7280a8a64","Type":"ContainerStarted","Data":"4d12dbf6d72f4df26c5b26963b5aea69bfa544ba18bb59a6296a21341be84847"} Feb 28 14:11:20 crc kubenswrapper[4897]: I0228 14:11:20.309383 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.526793234 podStartE2EDuration="14.30936307s" podCreationTimestamp="2026-02-28 14:11:06 +0000 UTC" firstStartedPulling="2026-02-28 14:11:08.085771519 +0000 UTC m=+3282.328092186" lastFinishedPulling="2026-02-28 14:11:18.868341365 +0000 UTC m=+3293.110662022" observedRunningTime="2026-02-28 14:11:20.303979927 +0000 UTC m=+3294.546300624" watchObservedRunningTime="2026-02-28 14:11:20.30936307 +0000 UTC m=+3294.551683737" Feb 28 14:12:00 crc kubenswrapper[4897]: I0228 14:12:00.227922 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538132-kwg2w"] Feb 28 14:12:00 crc kubenswrapper[4897]: I0228 14:12:00.233275 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538132-kwg2w" Feb 28 14:12:00 crc kubenswrapper[4897]: I0228 14:12:00.243796 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 14:12:00 crc kubenswrapper[4897]: I0228 14:12:00.245953 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 14:12:00 crc kubenswrapper[4897]: I0228 14:12:00.246185 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 14:12:00 crc kubenswrapper[4897]: I0228 14:12:00.249249 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538132-kwg2w"] Feb 28 14:12:00 crc kubenswrapper[4897]: I0228 14:12:00.414547 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kgq2\" (UniqueName: \"kubernetes.io/projected/31c297b7-8688-439d-b935-90faa8af4f55-kube-api-access-7kgq2\") pod \"auto-csr-approver-29538132-kwg2w\" (UID: \"31c297b7-8688-439d-b935-90faa8af4f55\") " pod="openshift-infra/auto-csr-approver-29538132-kwg2w" Feb 28 14:12:00 crc kubenswrapper[4897]: I0228 14:12:00.516051 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kgq2\" (UniqueName: \"kubernetes.io/projected/31c297b7-8688-439d-b935-90faa8af4f55-kube-api-access-7kgq2\") pod \"auto-csr-approver-29538132-kwg2w\" (UID: \"31c297b7-8688-439d-b935-90faa8af4f55\") " pod="openshift-infra/auto-csr-approver-29538132-kwg2w" Feb 28 14:12:00 crc kubenswrapper[4897]: I0228 14:12:00.548042 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kgq2\" (UniqueName: \"kubernetes.io/projected/31c297b7-8688-439d-b935-90faa8af4f55-kube-api-access-7kgq2\") pod \"auto-csr-approver-29538132-kwg2w\" (UID: \"31c297b7-8688-439d-b935-90faa8af4f55\") " 
pod="openshift-infra/auto-csr-approver-29538132-kwg2w" Feb 28 14:12:00 crc kubenswrapper[4897]: I0228 14:12:00.558473 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538132-kwg2w" Feb 28 14:12:01 crc kubenswrapper[4897]: I0228 14:12:01.003570 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538132-kwg2w"] Feb 28 14:12:01 crc kubenswrapper[4897]: I0228 14:12:01.777041 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538132-kwg2w" event={"ID":"31c297b7-8688-439d-b935-90faa8af4f55","Type":"ContainerStarted","Data":"a7a77038baaf8336556dcfee2c3a89995b287220174049f84a9ac29d9b4b9df4"} Feb 28 14:12:02 crc kubenswrapper[4897]: I0228 14:12:02.806441 4897 generic.go:334] "Generic (PLEG): container finished" podID="31c297b7-8688-439d-b935-90faa8af4f55" containerID="d8946e594a54d3af87bd4c7d1abca42856763cf3240b9f89764bbd4ff09a0f70" exitCode=0 Feb 28 14:12:02 crc kubenswrapper[4897]: I0228 14:12:02.806629 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538132-kwg2w" event={"ID":"31c297b7-8688-439d-b935-90faa8af4f55","Type":"ContainerDied","Data":"d8946e594a54d3af87bd4c7d1abca42856763cf3240b9f89764bbd4ff09a0f70"} Feb 28 14:12:04 crc kubenswrapper[4897]: I0228 14:12:04.271936 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538132-kwg2w" Feb 28 14:12:04 crc kubenswrapper[4897]: I0228 14:12:04.321872 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kgq2\" (UniqueName: \"kubernetes.io/projected/31c297b7-8688-439d-b935-90faa8af4f55-kube-api-access-7kgq2\") pod \"31c297b7-8688-439d-b935-90faa8af4f55\" (UID: \"31c297b7-8688-439d-b935-90faa8af4f55\") " Feb 28 14:12:04 crc kubenswrapper[4897]: I0228 14:12:04.328304 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31c297b7-8688-439d-b935-90faa8af4f55-kube-api-access-7kgq2" (OuterVolumeSpecName: "kube-api-access-7kgq2") pod "31c297b7-8688-439d-b935-90faa8af4f55" (UID: "31c297b7-8688-439d-b935-90faa8af4f55"). InnerVolumeSpecName "kube-api-access-7kgq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:12:04 crc kubenswrapper[4897]: I0228 14:12:04.425284 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kgq2\" (UniqueName: \"kubernetes.io/projected/31c297b7-8688-439d-b935-90faa8af4f55-kube-api-access-7kgq2\") on node \"crc\" DevicePath \"\"" Feb 28 14:12:04 crc kubenswrapper[4897]: I0228 14:12:04.824979 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538132-kwg2w" event={"ID":"31c297b7-8688-439d-b935-90faa8af4f55","Type":"ContainerDied","Data":"a7a77038baaf8336556dcfee2c3a89995b287220174049f84a9ac29d9b4b9df4"} Feb 28 14:12:04 crc kubenswrapper[4897]: I0228 14:12:04.825015 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7a77038baaf8336556dcfee2c3a89995b287220174049f84a9ac29d9b4b9df4" Feb 28 14:12:04 crc kubenswrapper[4897]: I0228 14:12:04.825102 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538132-kwg2w" Feb 28 14:12:05 crc kubenswrapper[4897]: I0228 14:12:05.369751 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538126-tjh6m"] Feb 28 14:12:05 crc kubenswrapper[4897]: I0228 14:12:05.387403 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538126-tjh6m"] Feb 28 14:12:06 crc kubenswrapper[4897]: I0228 14:12:06.478459 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60495189-1bea-4438-ad69-f56dd7caa7ac" path="/var/lib/kubelet/pods/60495189-1bea-4438-ad69-f56dd7caa7ac/volumes" Feb 28 14:12:29 crc kubenswrapper[4897]: I0228 14:12:29.271901 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f8qxx"] Feb 28 14:12:29 crc kubenswrapper[4897]: E0228 14:12:29.273040 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31c297b7-8688-439d-b935-90faa8af4f55" containerName="oc" Feb 28 14:12:29 crc kubenswrapper[4897]: I0228 14:12:29.273076 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="31c297b7-8688-439d-b935-90faa8af4f55" containerName="oc" Feb 28 14:12:29 crc kubenswrapper[4897]: I0228 14:12:29.273524 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="31c297b7-8688-439d-b935-90faa8af4f55" containerName="oc" Feb 28 14:12:29 crc kubenswrapper[4897]: I0228 14:12:29.275587 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f8qxx" Feb 28 14:12:29 crc kubenswrapper[4897]: I0228 14:12:29.285594 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f8qxx"] Feb 28 14:12:29 crc kubenswrapper[4897]: I0228 14:12:29.322495 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/497308c2-3fca-4543-a22d-4ac840155887-catalog-content\") pod \"redhat-operators-f8qxx\" (UID: \"497308c2-3fca-4543-a22d-4ac840155887\") " pod="openshift-marketplace/redhat-operators-f8qxx" Feb 28 14:12:29 crc kubenswrapper[4897]: I0228 14:12:29.322533 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmmzr\" (UniqueName: \"kubernetes.io/projected/497308c2-3fca-4543-a22d-4ac840155887-kube-api-access-zmmzr\") pod \"redhat-operators-f8qxx\" (UID: \"497308c2-3fca-4543-a22d-4ac840155887\") " pod="openshift-marketplace/redhat-operators-f8qxx" Feb 28 14:12:29 crc kubenswrapper[4897]: I0228 14:12:29.322609 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/497308c2-3fca-4543-a22d-4ac840155887-utilities\") pod \"redhat-operators-f8qxx\" (UID: \"497308c2-3fca-4543-a22d-4ac840155887\") " pod="openshift-marketplace/redhat-operators-f8qxx" Feb 28 14:12:29 crc kubenswrapper[4897]: I0228 14:12:29.424899 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmmzr\" (UniqueName: \"kubernetes.io/projected/497308c2-3fca-4543-a22d-4ac840155887-kube-api-access-zmmzr\") pod \"redhat-operators-f8qxx\" (UID: \"497308c2-3fca-4543-a22d-4ac840155887\") " pod="openshift-marketplace/redhat-operators-f8qxx" Feb 28 14:12:29 crc kubenswrapper[4897]: I0228 14:12:29.424965 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/497308c2-3fca-4543-a22d-4ac840155887-catalog-content\") pod \"redhat-operators-f8qxx\" (UID: \"497308c2-3fca-4543-a22d-4ac840155887\") " pod="openshift-marketplace/redhat-operators-f8qxx" Feb 28 14:12:29 crc kubenswrapper[4897]: I0228 14:12:29.425015 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/497308c2-3fca-4543-a22d-4ac840155887-utilities\") pod \"redhat-operators-f8qxx\" (UID: \"497308c2-3fca-4543-a22d-4ac840155887\") " pod="openshift-marketplace/redhat-operators-f8qxx" Feb 28 14:12:29 crc kubenswrapper[4897]: I0228 14:12:29.425490 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/497308c2-3fca-4543-a22d-4ac840155887-catalog-content\") pod \"redhat-operators-f8qxx\" (UID: \"497308c2-3fca-4543-a22d-4ac840155887\") " pod="openshift-marketplace/redhat-operators-f8qxx" Feb 28 14:12:29 crc kubenswrapper[4897]: I0228 14:12:29.425553 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/497308c2-3fca-4543-a22d-4ac840155887-utilities\") pod \"redhat-operators-f8qxx\" (UID: \"497308c2-3fca-4543-a22d-4ac840155887\") " pod="openshift-marketplace/redhat-operators-f8qxx" Feb 28 14:12:29 crc kubenswrapper[4897]: I0228 14:12:29.453055 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmmzr\" (UniqueName: \"kubernetes.io/projected/497308c2-3fca-4543-a22d-4ac840155887-kube-api-access-zmmzr\") pod \"redhat-operators-f8qxx\" (UID: \"497308c2-3fca-4543-a22d-4ac840155887\") " pod="openshift-marketplace/redhat-operators-f8qxx" Feb 28 14:12:29 crc kubenswrapper[4897]: I0228 14:12:29.639957 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f8qxx" Feb 28 14:12:30 crc kubenswrapper[4897]: W0228 14:12:30.142472 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod497308c2_3fca_4543_a22d_4ac840155887.slice/crio-4a8e91f9675338004659108d0a0c932cec9231d18064c10849bdea698affa621 WatchSource:0}: Error finding container 4a8e91f9675338004659108d0a0c932cec9231d18064c10849bdea698affa621: Status 404 returned error can't find the container with id 4a8e91f9675338004659108d0a0c932cec9231d18064c10849bdea698affa621 Feb 28 14:12:30 crc kubenswrapper[4897]: I0228 14:12:30.142686 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f8qxx"] Feb 28 14:12:31 crc kubenswrapper[4897]: I0228 14:12:31.137231 4897 generic.go:334] "Generic (PLEG): container finished" podID="497308c2-3fca-4543-a22d-4ac840155887" containerID="b22acc6ac82910c0aae8403e3c3a359808da9d8a766861f1ab804b5d57adc282" exitCode=0 Feb 28 14:12:31 crc kubenswrapper[4897]: I0228 14:12:31.137356 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f8qxx" event={"ID":"497308c2-3fca-4543-a22d-4ac840155887","Type":"ContainerDied","Data":"b22acc6ac82910c0aae8403e3c3a359808da9d8a766861f1ab804b5d57adc282"} Feb 28 14:12:31 crc kubenswrapper[4897]: I0228 14:12:31.139240 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f8qxx" event={"ID":"497308c2-3fca-4543-a22d-4ac840155887","Type":"ContainerStarted","Data":"4a8e91f9675338004659108d0a0c932cec9231d18064c10849bdea698affa621"} Feb 28 14:12:31 crc kubenswrapper[4897]: I0228 14:12:31.140391 4897 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 14:12:32 crc kubenswrapper[4897]: E0228 14:12:32.096584 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = 
copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 28 14:12:32 crc kubenswrapper[4897]: E0228 14:12:32.096738 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zmmzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolic
y:nil,} start failed in pod redhat-operators-f8qxx_openshift-marketplace(497308c2-3fca-4543-a22d-4ac840155887): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 14:12:32 crc kubenswrapper[4897]: E0228 14:12:32.098041 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-operators-f8qxx" podUID="497308c2-3fca-4543-a22d-4ac840155887" Feb 28 14:12:32 crc kubenswrapper[4897]: E0228 14:12:32.147348 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-f8qxx" podUID="497308c2-3fca-4543-a22d-4ac840155887" Feb 28 14:12:36 crc kubenswrapper[4897]: I0228 14:12:36.261500 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7sk8p"] Feb 28 14:12:36 crc kubenswrapper[4897]: I0228 14:12:36.264618 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7sk8p" Feb 28 14:12:36 crc kubenswrapper[4897]: I0228 14:12:36.283468 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7sk8p"] Feb 28 14:12:36 crc kubenswrapper[4897]: I0228 14:12:36.381896 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n52k9\" (UniqueName: \"kubernetes.io/projected/75d37307-e756-40f6-aa6f-c017f393e1ba-kube-api-access-n52k9\") pod \"community-operators-7sk8p\" (UID: \"75d37307-e756-40f6-aa6f-c017f393e1ba\") " pod="openshift-marketplace/community-operators-7sk8p" Feb 28 14:12:36 crc kubenswrapper[4897]: I0228 14:12:36.382053 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75d37307-e756-40f6-aa6f-c017f393e1ba-utilities\") pod \"community-operators-7sk8p\" (UID: \"75d37307-e756-40f6-aa6f-c017f393e1ba\") " pod="openshift-marketplace/community-operators-7sk8p" Feb 28 14:12:36 crc kubenswrapper[4897]: I0228 14:12:36.382082 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75d37307-e756-40f6-aa6f-c017f393e1ba-catalog-content\") pod \"community-operators-7sk8p\" (UID: \"75d37307-e756-40f6-aa6f-c017f393e1ba\") " pod="openshift-marketplace/community-operators-7sk8p" Feb 28 14:12:36 crc kubenswrapper[4897]: I0228 14:12:36.484208 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75d37307-e756-40f6-aa6f-c017f393e1ba-utilities\") pod \"community-operators-7sk8p\" (UID: \"75d37307-e756-40f6-aa6f-c017f393e1ba\") " pod="openshift-marketplace/community-operators-7sk8p" Feb 28 14:12:36 crc kubenswrapper[4897]: I0228 14:12:36.484261 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75d37307-e756-40f6-aa6f-c017f393e1ba-catalog-content\") pod \"community-operators-7sk8p\" (UID: \"75d37307-e756-40f6-aa6f-c017f393e1ba\") " pod="openshift-marketplace/community-operators-7sk8p" Feb 28 14:12:36 crc kubenswrapper[4897]: I0228 14:12:36.484367 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n52k9\" (UniqueName: \"kubernetes.io/projected/75d37307-e756-40f6-aa6f-c017f393e1ba-kube-api-access-n52k9\") pod \"community-operators-7sk8p\" (UID: \"75d37307-e756-40f6-aa6f-c017f393e1ba\") " pod="openshift-marketplace/community-operators-7sk8p" Feb 28 14:12:36 crc kubenswrapper[4897]: I0228 14:12:36.484868 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75d37307-e756-40f6-aa6f-c017f393e1ba-utilities\") pod \"community-operators-7sk8p\" (UID: \"75d37307-e756-40f6-aa6f-c017f393e1ba\") " pod="openshift-marketplace/community-operators-7sk8p" Feb 28 14:12:36 crc kubenswrapper[4897]: I0228 14:12:36.486663 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75d37307-e756-40f6-aa6f-c017f393e1ba-catalog-content\") pod \"community-operators-7sk8p\" (UID: \"75d37307-e756-40f6-aa6f-c017f393e1ba\") " pod="openshift-marketplace/community-operators-7sk8p" Feb 28 14:12:36 crc kubenswrapper[4897]: I0228 14:12:36.512358 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n52k9\" (UniqueName: \"kubernetes.io/projected/75d37307-e756-40f6-aa6f-c017f393e1ba-kube-api-access-n52k9\") pod \"community-operators-7sk8p\" (UID: \"75d37307-e756-40f6-aa6f-c017f393e1ba\") " pod="openshift-marketplace/community-operators-7sk8p" Feb 28 14:12:36 crc kubenswrapper[4897]: I0228 14:12:36.606583 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7sk8p" Feb 28 14:12:37 crc kubenswrapper[4897]: I0228 14:12:37.150725 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7sk8p"] Feb 28 14:12:37 crc kubenswrapper[4897]: I0228 14:12:37.224067 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sk8p" event={"ID":"75d37307-e756-40f6-aa6f-c017f393e1ba","Type":"ContainerStarted","Data":"0b746f5cd823c9d7c40522ce5686704720df42b935af04b6577355a12929f402"} Feb 28 14:12:38 crc kubenswrapper[4897]: I0228 14:12:38.236607 4897 generic.go:334] "Generic (PLEG): container finished" podID="75d37307-e756-40f6-aa6f-c017f393e1ba" containerID="aba527c21a4f5689d0508bc6d84b927beab2f622d866f7b941290dcd723e185b" exitCode=0 Feb 28 14:12:38 crc kubenswrapper[4897]: I0228 14:12:38.236760 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sk8p" event={"ID":"75d37307-e756-40f6-aa6f-c017f393e1ba","Type":"ContainerDied","Data":"aba527c21a4f5689d0508bc6d84b927beab2f622d866f7b941290dcd723e185b"} Feb 28 14:12:39 crc kubenswrapper[4897]: I0228 14:12:39.250718 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sk8p" event={"ID":"75d37307-e756-40f6-aa6f-c017f393e1ba","Type":"ContainerStarted","Data":"55f4ae1d4ef1b7052f3f070f23b343cee85d1650cc1e2e55fdbc7fb874dcc252"} Feb 28 14:12:41 crc kubenswrapper[4897]: I0228 14:12:41.276438 4897 generic.go:334] "Generic (PLEG): container finished" podID="75d37307-e756-40f6-aa6f-c017f393e1ba" containerID="55f4ae1d4ef1b7052f3f070f23b343cee85d1650cc1e2e55fdbc7fb874dcc252" exitCode=0 Feb 28 14:12:41 crc kubenswrapper[4897]: I0228 14:12:41.276518 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sk8p" 
event={"ID":"75d37307-e756-40f6-aa6f-c017f393e1ba","Type":"ContainerDied","Data":"55f4ae1d4ef1b7052f3f070f23b343cee85d1650cc1e2e55fdbc7fb874dcc252"} Feb 28 14:12:42 crc kubenswrapper[4897]: I0228 14:12:42.295433 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sk8p" event={"ID":"75d37307-e756-40f6-aa6f-c017f393e1ba","Type":"ContainerStarted","Data":"cef5ea0aaf27631137e95338b233ef2fda05b7f5fdac7c9b8eb71e5ae7b997e8"} Feb 28 14:12:42 crc kubenswrapper[4897]: I0228 14:12:42.496251 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7sk8p" podStartSLOduration=2.9845284960000003 podStartE2EDuration="6.496230699s" podCreationTimestamp="2026-02-28 14:12:36 +0000 UTC" firstStartedPulling="2026-02-28 14:12:38.239195602 +0000 UTC m=+3372.481516269" lastFinishedPulling="2026-02-28 14:12:41.750897775 +0000 UTC m=+3375.993218472" observedRunningTime="2026-02-28 14:12:42.337141033 +0000 UTC m=+3376.579461700" watchObservedRunningTime="2026-02-28 14:12:42.496230699 +0000 UTC m=+3376.738551366" Feb 28 14:12:43 crc kubenswrapper[4897]: E0228 14:12:43.170443 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 28 14:12:43 crc kubenswrapper[4897]: E0228 14:12:43.170837 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zmmzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-f8qxx_openshift-marketplace(497308c2-3fca-4543-a22d-4ac840155887): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 14:12:43 crc kubenswrapper[4897]: E0228 14:12:43.172082 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading 
signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-operators-f8qxx" podUID="497308c2-3fca-4543-a22d-4ac840155887" Feb 28 14:12:46 crc kubenswrapper[4897]: I0228 14:12:46.607664 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7sk8p" Feb 28 14:12:46 crc kubenswrapper[4897]: I0228 14:12:46.607916 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7sk8p" Feb 28 14:12:46 crc kubenswrapper[4897]: I0228 14:12:46.660549 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7sk8p" Feb 28 14:12:47 crc kubenswrapper[4897]: I0228 14:12:47.440084 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7sk8p" Feb 28 14:12:47 crc kubenswrapper[4897]: I0228 14:12:47.525221 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7sk8p"] Feb 28 14:12:49 crc kubenswrapper[4897]: I0228 14:12:49.372985 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7sk8p" podUID="75d37307-e756-40f6-aa6f-c017f393e1ba" containerName="registry-server" containerID="cri-o://cef5ea0aaf27631137e95338b233ef2fda05b7f5fdac7c9b8eb71e5ae7b997e8" gracePeriod=2 Feb 28 14:12:49 crc kubenswrapper[4897]: I0228 14:12:49.944442 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7sk8p" Feb 28 14:12:50 crc kubenswrapper[4897]: I0228 14:12:50.103808 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75d37307-e756-40f6-aa6f-c017f393e1ba-utilities\") pod \"75d37307-e756-40f6-aa6f-c017f393e1ba\" (UID: \"75d37307-e756-40f6-aa6f-c017f393e1ba\") " Feb 28 14:12:50 crc kubenswrapper[4897]: I0228 14:12:50.104098 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75d37307-e756-40f6-aa6f-c017f393e1ba-catalog-content\") pod \"75d37307-e756-40f6-aa6f-c017f393e1ba\" (UID: \"75d37307-e756-40f6-aa6f-c017f393e1ba\") " Feb 28 14:12:50 crc kubenswrapper[4897]: I0228 14:12:50.104215 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n52k9\" (UniqueName: \"kubernetes.io/projected/75d37307-e756-40f6-aa6f-c017f393e1ba-kube-api-access-n52k9\") pod \"75d37307-e756-40f6-aa6f-c017f393e1ba\" (UID: \"75d37307-e756-40f6-aa6f-c017f393e1ba\") " Feb 28 14:12:50 crc kubenswrapper[4897]: I0228 14:12:50.106347 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75d37307-e756-40f6-aa6f-c017f393e1ba-utilities" (OuterVolumeSpecName: "utilities") pod "75d37307-e756-40f6-aa6f-c017f393e1ba" (UID: "75d37307-e756-40f6-aa6f-c017f393e1ba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:12:50 crc kubenswrapper[4897]: I0228 14:12:50.114274 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75d37307-e756-40f6-aa6f-c017f393e1ba-kube-api-access-n52k9" (OuterVolumeSpecName: "kube-api-access-n52k9") pod "75d37307-e756-40f6-aa6f-c017f393e1ba" (UID: "75d37307-e756-40f6-aa6f-c017f393e1ba"). InnerVolumeSpecName "kube-api-access-n52k9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:12:50 crc kubenswrapper[4897]: I0228 14:12:50.176795 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75d37307-e756-40f6-aa6f-c017f393e1ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "75d37307-e756-40f6-aa6f-c017f393e1ba" (UID: "75d37307-e756-40f6-aa6f-c017f393e1ba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:12:50 crc kubenswrapper[4897]: I0228 14:12:50.206876 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75d37307-e756-40f6-aa6f-c017f393e1ba-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 14:12:50 crc kubenswrapper[4897]: I0228 14:12:50.206910 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n52k9\" (UniqueName: \"kubernetes.io/projected/75d37307-e756-40f6-aa6f-c017f393e1ba-kube-api-access-n52k9\") on node \"crc\" DevicePath \"\"" Feb 28 14:12:50 crc kubenswrapper[4897]: I0228 14:12:50.206923 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75d37307-e756-40f6-aa6f-c017f393e1ba-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 14:12:50 crc kubenswrapper[4897]: I0228 14:12:50.391070 4897 generic.go:334] "Generic (PLEG): container finished" podID="75d37307-e756-40f6-aa6f-c017f393e1ba" containerID="cef5ea0aaf27631137e95338b233ef2fda05b7f5fdac7c9b8eb71e5ae7b997e8" exitCode=0 Feb 28 14:12:50 crc kubenswrapper[4897]: I0228 14:12:50.391137 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7sk8p" Feb 28 14:12:50 crc kubenswrapper[4897]: I0228 14:12:50.391150 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sk8p" event={"ID":"75d37307-e756-40f6-aa6f-c017f393e1ba","Type":"ContainerDied","Data":"cef5ea0aaf27631137e95338b233ef2fda05b7f5fdac7c9b8eb71e5ae7b997e8"} Feb 28 14:12:50 crc kubenswrapper[4897]: I0228 14:12:50.391391 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sk8p" event={"ID":"75d37307-e756-40f6-aa6f-c017f393e1ba","Type":"ContainerDied","Data":"0b746f5cd823c9d7c40522ce5686704720df42b935af04b6577355a12929f402"} Feb 28 14:12:50 crc kubenswrapper[4897]: I0228 14:12:50.391435 4897 scope.go:117] "RemoveContainer" containerID="cef5ea0aaf27631137e95338b233ef2fda05b7f5fdac7c9b8eb71e5ae7b997e8" Feb 28 14:12:50 crc kubenswrapper[4897]: I0228 14:12:50.438984 4897 scope.go:117] "RemoveContainer" containerID="55f4ae1d4ef1b7052f3f070f23b343cee85d1650cc1e2e55fdbc7fb874dcc252" Feb 28 14:12:50 crc kubenswrapper[4897]: I0228 14:12:50.481957 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7sk8p"] Feb 28 14:12:50 crc kubenswrapper[4897]: I0228 14:12:50.486872 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7sk8p"] Feb 28 14:12:50 crc kubenswrapper[4897]: I0228 14:12:50.495765 4897 scope.go:117] "RemoveContainer" containerID="aba527c21a4f5689d0508bc6d84b927beab2f622d866f7b941290dcd723e185b" Feb 28 14:12:50 crc kubenswrapper[4897]: I0228 14:12:50.547160 4897 scope.go:117] "RemoveContainer" containerID="cef5ea0aaf27631137e95338b233ef2fda05b7f5fdac7c9b8eb71e5ae7b997e8" Feb 28 14:12:50 crc kubenswrapper[4897]: E0228 14:12:50.547762 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"cef5ea0aaf27631137e95338b233ef2fda05b7f5fdac7c9b8eb71e5ae7b997e8\": container with ID starting with cef5ea0aaf27631137e95338b233ef2fda05b7f5fdac7c9b8eb71e5ae7b997e8 not found: ID does not exist" containerID="cef5ea0aaf27631137e95338b233ef2fda05b7f5fdac7c9b8eb71e5ae7b997e8" Feb 28 14:12:50 crc kubenswrapper[4897]: I0228 14:12:50.547811 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cef5ea0aaf27631137e95338b233ef2fda05b7f5fdac7c9b8eb71e5ae7b997e8"} err="failed to get container status \"cef5ea0aaf27631137e95338b233ef2fda05b7f5fdac7c9b8eb71e5ae7b997e8\": rpc error: code = NotFound desc = could not find container \"cef5ea0aaf27631137e95338b233ef2fda05b7f5fdac7c9b8eb71e5ae7b997e8\": container with ID starting with cef5ea0aaf27631137e95338b233ef2fda05b7f5fdac7c9b8eb71e5ae7b997e8 not found: ID does not exist" Feb 28 14:12:50 crc kubenswrapper[4897]: I0228 14:12:50.547845 4897 scope.go:117] "RemoveContainer" containerID="55f4ae1d4ef1b7052f3f070f23b343cee85d1650cc1e2e55fdbc7fb874dcc252" Feb 28 14:12:50 crc kubenswrapper[4897]: E0228 14:12:50.548440 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55f4ae1d4ef1b7052f3f070f23b343cee85d1650cc1e2e55fdbc7fb874dcc252\": container with ID starting with 55f4ae1d4ef1b7052f3f070f23b343cee85d1650cc1e2e55fdbc7fb874dcc252 not found: ID does not exist" containerID="55f4ae1d4ef1b7052f3f070f23b343cee85d1650cc1e2e55fdbc7fb874dcc252" Feb 28 14:12:50 crc kubenswrapper[4897]: I0228 14:12:50.548467 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55f4ae1d4ef1b7052f3f070f23b343cee85d1650cc1e2e55fdbc7fb874dcc252"} err="failed to get container status \"55f4ae1d4ef1b7052f3f070f23b343cee85d1650cc1e2e55fdbc7fb874dcc252\": rpc error: code = NotFound desc = could not find container \"55f4ae1d4ef1b7052f3f070f23b343cee85d1650cc1e2e55fdbc7fb874dcc252\": container with ID 
starting with 55f4ae1d4ef1b7052f3f070f23b343cee85d1650cc1e2e55fdbc7fb874dcc252 not found: ID does not exist" Feb 28 14:12:50 crc kubenswrapper[4897]: I0228 14:12:50.548487 4897 scope.go:117] "RemoveContainer" containerID="aba527c21a4f5689d0508bc6d84b927beab2f622d866f7b941290dcd723e185b" Feb 28 14:12:50 crc kubenswrapper[4897]: E0228 14:12:50.550103 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aba527c21a4f5689d0508bc6d84b927beab2f622d866f7b941290dcd723e185b\": container with ID starting with aba527c21a4f5689d0508bc6d84b927beab2f622d866f7b941290dcd723e185b not found: ID does not exist" containerID="aba527c21a4f5689d0508bc6d84b927beab2f622d866f7b941290dcd723e185b" Feb 28 14:12:50 crc kubenswrapper[4897]: I0228 14:12:50.550130 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aba527c21a4f5689d0508bc6d84b927beab2f622d866f7b941290dcd723e185b"} err="failed to get container status \"aba527c21a4f5689d0508bc6d84b927beab2f622d866f7b941290dcd723e185b\": rpc error: code = NotFound desc = could not find container \"aba527c21a4f5689d0508bc6d84b927beab2f622d866f7b941290dcd723e185b\": container with ID starting with aba527c21a4f5689d0508bc6d84b927beab2f622d866f7b941290dcd723e185b not found: ID does not exist" Feb 28 14:12:52 crc kubenswrapper[4897]: I0228 14:12:52.478456 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75d37307-e756-40f6-aa6f-c017f393e1ba" path="/var/lib/kubelet/pods/75d37307-e756-40f6-aa6f-c017f393e1ba/volumes" Feb 28 14:12:55 crc kubenswrapper[4897]: E0228 14:12:55.459876 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-f8qxx" podUID="497308c2-3fca-4543-a22d-4ac840155887" Feb 28 14:13:07 
crc kubenswrapper[4897]: I0228 14:13:07.632489 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f8qxx" event={"ID":"497308c2-3fca-4543-a22d-4ac840155887","Type":"ContainerStarted","Data":"e3c6002689c6ad6f54bac1242616b5a0d5aa886dd8ce03f62737e94e2b3acccd"} Feb 28 14:13:12 crc kubenswrapper[4897]: I0228 14:13:12.691689 4897 generic.go:334] "Generic (PLEG): container finished" podID="497308c2-3fca-4543-a22d-4ac840155887" containerID="e3c6002689c6ad6f54bac1242616b5a0d5aa886dd8ce03f62737e94e2b3acccd" exitCode=0 Feb 28 14:13:12 crc kubenswrapper[4897]: I0228 14:13:12.691849 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f8qxx" event={"ID":"497308c2-3fca-4543-a22d-4ac840155887","Type":"ContainerDied","Data":"e3c6002689c6ad6f54bac1242616b5a0d5aa886dd8ce03f62737e94e2b3acccd"} Feb 28 14:13:13 crc kubenswrapper[4897]: I0228 14:13:13.704105 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f8qxx" event={"ID":"497308c2-3fca-4543-a22d-4ac840155887","Type":"ContainerStarted","Data":"9ef358b24e4fe44d90c388983877c065fa71d19e82f00afcb8ea55e252e1f8a5"} Feb 28 14:13:13 crc kubenswrapper[4897]: I0228 14:13:13.726075 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f8qxx" podStartSLOduration=2.7742789759999997 podStartE2EDuration="44.726057817s" podCreationTimestamp="2026-02-28 14:12:29 +0000 UTC" firstStartedPulling="2026-02-28 14:12:31.13995881 +0000 UTC m=+3365.382279507" lastFinishedPulling="2026-02-28 14:13:13.091737701 +0000 UTC m=+3407.334058348" observedRunningTime="2026-02-28 14:13:13.721985341 +0000 UTC m=+3407.964306008" watchObservedRunningTime="2026-02-28 14:13:13.726057817 +0000 UTC m=+3407.968378474" Feb 28 14:13:19 crc kubenswrapper[4897]: I0228 14:13:19.640568 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-f8qxx" Feb 28 14:13:19 crc kubenswrapper[4897]: I0228 14:13:19.641239 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f8qxx" Feb 28 14:13:20 crc kubenswrapper[4897]: I0228 14:13:20.704713 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f8qxx" podUID="497308c2-3fca-4543-a22d-4ac840155887" containerName="registry-server" probeResult="failure" output=< Feb 28 14:13:20 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 28 14:13:20 crc kubenswrapper[4897]: > Feb 28 14:13:29 crc kubenswrapper[4897]: I0228 14:13:29.709513 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f8qxx" Feb 28 14:13:29 crc kubenswrapper[4897]: I0228 14:13:29.783343 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f8qxx" Feb 28 14:13:30 crc kubenswrapper[4897]: I0228 14:13:30.499471 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f8qxx"] Feb 28 14:13:30 crc kubenswrapper[4897]: I0228 14:13:30.899146 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f8qxx" podUID="497308c2-3fca-4543-a22d-4ac840155887" containerName="registry-server" containerID="cri-o://9ef358b24e4fe44d90c388983877c065fa71d19e82f00afcb8ea55e252e1f8a5" gracePeriod=2 Feb 28 14:13:31 crc kubenswrapper[4897]: I0228 14:13:31.537304 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f8qxx" Feb 28 14:13:31 crc kubenswrapper[4897]: I0228 14:13:31.550150 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/497308c2-3fca-4543-a22d-4ac840155887-utilities\") pod \"497308c2-3fca-4543-a22d-4ac840155887\" (UID: \"497308c2-3fca-4543-a22d-4ac840155887\") " Feb 28 14:13:31 crc kubenswrapper[4897]: I0228 14:13:31.550338 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmmzr\" (UniqueName: \"kubernetes.io/projected/497308c2-3fca-4543-a22d-4ac840155887-kube-api-access-zmmzr\") pod \"497308c2-3fca-4543-a22d-4ac840155887\" (UID: \"497308c2-3fca-4543-a22d-4ac840155887\") " Feb 28 14:13:31 crc kubenswrapper[4897]: I0228 14:13:31.550474 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/497308c2-3fca-4543-a22d-4ac840155887-catalog-content\") pod \"497308c2-3fca-4543-a22d-4ac840155887\" (UID: \"497308c2-3fca-4543-a22d-4ac840155887\") " Feb 28 14:13:31 crc kubenswrapper[4897]: I0228 14:13:31.551610 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/497308c2-3fca-4543-a22d-4ac840155887-utilities" (OuterVolumeSpecName: "utilities") pod "497308c2-3fca-4543-a22d-4ac840155887" (UID: "497308c2-3fca-4543-a22d-4ac840155887"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:13:31 crc kubenswrapper[4897]: I0228 14:13:31.574606 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/497308c2-3fca-4543-a22d-4ac840155887-kube-api-access-zmmzr" (OuterVolumeSpecName: "kube-api-access-zmmzr") pod "497308c2-3fca-4543-a22d-4ac840155887" (UID: "497308c2-3fca-4543-a22d-4ac840155887"). InnerVolumeSpecName "kube-api-access-zmmzr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:13:31 crc kubenswrapper[4897]: I0228 14:13:31.653126 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmmzr\" (UniqueName: \"kubernetes.io/projected/497308c2-3fca-4543-a22d-4ac840155887-kube-api-access-zmmzr\") on node \"crc\" DevicePath \"\"" Feb 28 14:13:31 crc kubenswrapper[4897]: I0228 14:13:31.653386 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/497308c2-3fca-4543-a22d-4ac840155887-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 14:13:31 crc kubenswrapper[4897]: I0228 14:13:31.733646 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/497308c2-3fca-4543-a22d-4ac840155887-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "497308c2-3fca-4543-a22d-4ac840155887" (UID: "497308c2-3fca-4543-a22d-4ac840155887"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:13:31 crc kubenswrapper[4897]: I0228 14:13:31.755045 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/497308c2-3fca-4543-a22d-4ac840155887-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 14:13:31 crc kubenswrapper[4897]: I0228 14:13:31.911941 4897 generic.go:334] "Generic (PLEG): container finished" podID="497308c2-3fca-4543-a22d-4ac840155887" containerID="9ef358b24e4fe44d90c388983877c065fa71d19e82f00afcb8ea55e252e1f8a5" exitCode=0 Feb 28 14:13:31 crc kubenswrapper[4897]: I0228 14:13:31.912006 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f8qxx" Feb 28 14:13:31 crc kubenswrapper[4897]: I0228 14:13:31.912039 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f8qxx" event={"ID":"497308c2-3fca-4543-a22d-4ac840155887","Type":"ContainerDied","Data":"9ef358b24e4fe44d90c388983877c065fa71d19e82f00afcb8ea55e252e1f8a5"} Feb 28 14:13:31 crc kubenswrapper[4897]: I0228 14:13:31.912910 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f8qxx" event={"ID":"497308c2-3fca-4543-a22d-4ac840155887","Type":"ContainerDied","Data":"4a8e91f9675338004659108d0a0c932cec9231d18064c10849bdea698affa621"} Feb 28 14:13:31 crc kubenswrapper[4897]: I0228 14:13:31.912952 4897 scope.go:117] "RemoveContainer" containerID="9ef358b24e4fe44d90c388983877c065fa71d19e82f00afcb8ea55e252e1f8a5" Feb 28 14:13:31 crc kubenswrapper[4897]: I0228 14:13:31.955874 4897 scope.go:117] "RemoveContainer" containerID="e3c6002689c6ad6f54bac1242616b5a0d5aa886dd8ce03f62737e94e2b3acccd" Feb 28 14:13:31 crc kubenswrapper[4897]: I0228 14:13:31.961284 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f8qxx"] Feb 28 14:13:31 crc kubenswrapper[4897]: I0228 14:13:31.970180 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f8qxx"] Feb 28 14:13:31 crc kubenswrapper[4897]: I0228 14:13:31.981026 4897 scope.go:117] "RemoveContainer" containerID="b22acc6ac82910c0aae8403e3c3a359808da9d8a766861f1ab804b5d57adc282" Feb 28 14:13:32 crc kubenswrapper[4897]: I0228 14:13:32.032848 4897 scope.go:117] "RemoveContainer" containerID="9ef358b24e4fe44d90c388983877c065fa71d19e82f00afcb8ea55e252e1f8a5" Feb 28 14:13:32 crc kubenswrapper[4897]: E0228 14:13:32.033342 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9ef358b24e4fe44d90c388983877c065fa71d19e82f00afcb8ea55e252e1f8a5\": container with ID starting with 9ef358b24e4fe44d90c388983877c065fa71d19e82f00afcb8ea55e252e1f8a5 not found: ID does not exist" containerID="9ef358b24e4fe44d90c388983877c065fa71d19e82f00afcb8ea55e252e1f8a5" Feb 28 14:13:32 crc kubenswrapper[4897]: I0228 14:13:32.033377 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ef358b24e4fe44d90c388983877c065fa71d19e82f00afcb8ea55e252e1f8a5"} err="failed to get container status \"9ef358b24e4fe44d90c388983877c065fa71d19e82f00afcb8ea55e252e1f8a5\": rpc error: code = NotFound desc = could not find container \"9ef358b24e4fe44d90c388983877c065fa71d19e82f00afcb8ea55e252e1f8a5\": container with ID starting with 9ef358b24e4fe44d90c388983877c065fa71d19e82f00afcb8ea55e252e1f8a5 not found: ID does not exist" Feb 28 14:13:32 crc kubenswrapper[4897]: I0228 14:13:32.033397 4897 scope.go:117] "RemoveContainer" containerID="e3c6002689c6ad6f54bac1242616b5a0d5aa886dd8ce03f62737e94e2b3acccd" Feb 28 14:13:32 crc kubenswrapper[4897]: E0228 14:13:32.033830 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3c6002689c6ad6f54bac1242616b5a0d5aa886dd8ce03f62737e94e2b3acccd\": container with ID starting with e3c6002689c6ad6f54bac1242616b5a0d5aa886dd8ce03f62737e94e2b3acccd not found: ID does not exist" containerID="e3c6002689c6ad6f54bac1242616b5a0d5aa886dd8ce03f62737e94e2b3acccd" Feb 28 14:13:32 crc kubenswrapper[4897]: I0228 14:13:32.033866 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3c6002689c6ad6f54bac1242616b5a0d5aa886dd8ce03f62737e94e2b3acccd"} err="failed to get container status \"e3c6002689c6ad6f54bac1242616b5a0d5aa886dd8ce03f62737e94e2b3acccd\": rpc error: code = NotFound desc = could not find container \"e3c6002689c6ad6f54bac1242616b5a0d5aa886dd8ce03f62737e94e2b3acccd\": container with ID 
starting with e3c6002689c6ad6f54bac1242616b5a0d5aa886dd8ce03f62737e94e2b3acccd not found: ID does not exist" Feb 28 14:13:32 crc kubenswrapper[4897]: I0228 14:13:32.033891 4897 scope.go:117] "RemoveContainer" containerID="b22acc6ac82910c0aae8403e3c3a359808da9d8a766861f1ab804b5d57adc282" Feb 28 14:13:32 crc kubenswrapper[4897]: E0228 14:13:32.034323 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b22acc6ac82910c0aae8403e3c3a359808da9d8a766861f1ab804b5d57adc282\": container with ID starting with b22acc6ac82910c0aae8403e3c3a359808da9d8a766861f1ab804b5d57adc282 not found: ID does not exist" containerID="b22acc6ac82910c0aae8403e3c3a359808da9d8a766861f1ab804b5d57adc282" Feb 28 14:13:32 crc kubenswrapper[4897]: I0228 14:13:32.034361 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b22acc6ac82910c0aae8403e3c3a359808da9d8a766861f1ab804b5d57adc282"} err="failed to get container status \"b22acc6ac82910c0aae8403e3c3a359808da9d8a766861f1ab804b5d57adc282\": rpc error: code = NotFound desc = could not find container \"b22acc6ac82910c0aae8403e3c3a359808da9d8a766861f1ab804b5d57adc282\": container with ID starting with b22acc6ac82910c0aae8403e3c3a359808da9d8a766861f1ab804b5d57adc282 not found: ID does not exist" Feb 28 14:13:32 crc kubenswrapper[4897]: I0228 14:13:32.472884 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="497308c2-3fca-4543-a22d-4ac840155887" path="/var/lib/kubelet/pods/497308c2-3fca-4543-a22d-4ac840155887/volumes" Feb 28 14:13:33 crc kubenswrapper[4897]: I0228 14:13:33.370653 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 14:13:33 crc kubenswrapper[4897]: I0228 
14:13:33.370929 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 14:14:00 crc kubenswrapper[4897]: I0228 14:14:00.179400 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538134-vrqtw"] Feb 28 14:14:00 crc kubenswrapper[4897]: E0228 14:14:00.180664 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="497308c2-3fca-4543-a22d-4ac840155887" containerName="extract-utilities" Feb 28 14:14:00 crc kubenswrapper[4897]: I0228 14:14:00.180690 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="497308c2-3fca-4543-a22d-4ac840155887" containerName="extract-utilities" Feb 28 14:14:00 crc kubenswrapper[4897]: E0228 14:14:00.180719 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="497308c2-3fca-4543-a22d-4ac840155887" containerName="extract-content" Feb 28 14:14:00 crc kubenswrapper[4897]: I0228 14:14:00.180732 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="497308c2-3fca-4543-a22d-4ac840155887" containerName="extract-content" Feb 28 14:14:00 crc kubenswrapper[4897]: E0228 14:14:00.180770 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75d37307-e756-40f6-aa6f-c017f393e1ba" containerName="registry-server" Feb 28 14:14:00 crc kubenswrapper[4897]: I0228 14:14:00.180783 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="75d37307-e756-40f6-aa6f-c017f393e1ba" containerName="registry-server" Feb 28 14:14:00 crc kubenswrapper[4897]: E0228 14:14:00.180810 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75d37307-e756-40f6-aa6f-c017f393e1ba" containerName="extract-content" Feb 28 14:14:00 crc kubenswrapper[4897]: I0228 14:14:00.180822 4897 
state_mem.go:107] "Deleted CPUSet assignment" podUID="75d37307-e756-40f6-aa6f-c017f393e1ba" containerName="extract-content" Feb 28 14:14:00 crc kubenswrapper[4897]: E0228 14:14:00.180855 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="497308c2-3fca-4543-a22d-4ac840155887" containerName="registry-server" Feb 28 14:14:00 crc kubenswrapper[4897]: I0228 14:14:00.180867 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="497308c2-3fca-4543-a22d-4ac840155887" containerName="registry-server" Feb 28 14:14:00 crc kubenswrapper[4897]: E0228 14:14:00.180896 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75d37307-e756-40f6-aa6f-c017f393e1ba" containerName="extract-utilities" Feb 28 14:14:00 crc kubenswrapper[4897]: I0228 14:14:00.180908 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="75d37307-e756-40f6-aa6f-c017f393e1ba" containerName="extract-utilities" Feb 28 14:14:00 crc kubenswrapper[4897]: I0228 14:14:00.181261 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="75d37307-e756-40f6-aa6f-c017f393e1ba" containerName="registry-server" Feb 28 14:14:00 crc kubenswrapper[4897]: I0228 14:14:00.181388 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="497308c2-3fca-4543-a22d-4ac840155887" containerName="registry-server" Feb 28 14:14:00 crc kubenswrapper[4897]: I0228 14:14:00.182810 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538134-vrqtw" Feb 28 14:14:00 crc kubenswrapper[4897]: I0228 14:14:00.186056 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 14:14:00 crc kubenswrapper[4897]: I0228 14:14:00.186682 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 14:14:00 crc kubenswrapper[4897]: I0228 14:14:00.190655 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 14:14:00 crc kubenswrapper[4897]: I0228 14:14:00.206411 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538134-vrqtw"] Feb 28 14:14:00 crc kubenswrapper[4897]: I0228 14:14:00.267695 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95qqm\" (UniqueName: \"kubernetes.io/projected/e951afc8-3d1f-41d1-8efc-5cb2a7713b89-kube-api-access-95qqm\") pod \"auto-csr-approver-29538134-vrqtw\" (UID: \"e951afc8-3d1f-41d1-8efc-5cb2a7713b89\") " pod="openshift-infra/auto-csr-approver-29538134-vrqtw" Feb 28 14:14:00 crc kubenswrapper[4897]: I0228 14:14:00.368564 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95qqm\" (UniqueName: \"kubernetes.io/projected/e951afc8-3d1f-41d1-8efc-5cb2a7713b89-kube-api-access-95qqm\") pod \"auto-csr-approver-29538134-vrqtw\" (UID: \"e951afc8-3d1f-41d1-8efc-5cb2a7713b89\") " pod="openshift-infra/auto-csr-approver-29538134-vrqtw" Feb 28 14:14:00 crc kubenswrapper[4897]: I0228 14:14:00.393545 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95qqm\" (UniqueName: \"kubernetes.io/projected/e951afc8-3d1f-41d1-8efc-5cb2a7713b89-kube-api-access-95qqm\") pod \"auto-csr-approver-29538134-vrqtw\" (UID: \"e951afc8-3d1f-41d1-8efc-5cb2a7713b89\") " 
pod="openshift-infra/auto-csr-approver-29538134-vrqtw" Feb 28 14:14:00 crc kubenswrapper[4897]: I0228 14:14:00.522384 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538134-vrqtw" Feb 28 14:14:01 crc kubenswrapper[4897]: I0228 14:14:01.064253 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538134-vrqtw"] Feb 28 14:14:01 crc kubenswrapper[4897]: I0228 14:14:01.235145 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538134-vrqtw" event={"ID":"e951afc8-3d1f-41d1-8efc-5cb2a7713b89","Type":"ContainerStarted","Data":"d32eb21ed7b41f6739b4d9d179e1504d71e4cfb01566a28479ac8b8b032d13d3"} Feb 28 14:14:03 crc kubenswrapper[4897]: I0228 14:14:03.278147 4897 generic.go:334] "Generic (PLEG): container finished" podID="e951afc8-3d1f-41d1-8efc-5cb2a7713b89" containerID="d8a2b058801c35c0ff4b569521228af8aac3410ed773e221110367fca80ef980" exitCode=0 Feb 28 14:14:03 crc kubenswrapper[4897]: I0228 14:14:03.278239 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538134-vrqtw" event={"ID":"e951afc8-3d1f-41d1-8efc-5cb2a7713b89","Type":"ContainerDied","Data":"d8a2b058801c35c0ff4b569521228af8aac3410ed773e221110367fca80ef980"} Feb 28 14:14:03 crc kubenswrapper[4897]: I0228 14:14:03.371490 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 14:14:03 crc kubenswrapper[4897]: I0228 14:14:03.371591 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 14:14:04 crc kubenswrapper[4897]: I0228 14:14:04.835991 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538134-vrqtw" Feb 28 14:14:04 crc kubenswrapper[4897]: I0228 14:14:04.871747 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95qqm\" (UniqueName: \"kubernetes.io/projected/e951afc8-3d1f-41d1-8efc-5cb2a7713b89-kube-api-access-95qqm\") pod \"e951afc8-3d1f-41d1-8efc-5cb2a7713b89\" (UID: \"e951afc8-3d1f-41d1-8efc-5cb2a7713b89\") " Feb 28 14:14:04 crc kubenswrapper[4897]: I0228 14:14:04.881587 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e951afc8-3d1f-41d1-8efc-5cb2a7713b89-kube-api-access-95qqm" (OuterVolumeSpecName: "kube-api-access-95qqm") pod "e951afc8-3d1f-41d1-8efc-5cb2a7713b89" (UID: "e951afc8-3d1f-41d1-8efc-5cb2a7713b89"). InnerVolumeSpecName "kube-api-access-95qqm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:14:04 crc kubenswrapper[4897]: I0228 14:14:04.974599 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95qqm\" (UniqueName: \"kubernetes.io/projected/e951afc8-3d1f-41d1-8efc-5cb2a7713b89-kube-api-access-95qqm\") on node \"crc\" DevicePath \"\"" Feb 28 14:14:05 crc kubenswrapper[4897]: I0228 14:14:05.301206 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538134-vrqtw" event={"ID":"e951afc8-3d1f-41d1-8efc-5cb2a7713b89","Type":"ContainerDied","Data":"d32eb21ed7b41f6739b4d9d179e1504d71e4cfb01566a28479ac8b8b032d13d3"} Feb 28 14:14:05 crc kubenswrapper[4897]: I0228 14:14:05.301260 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d32eb21ed7b41f6739b4d9d179e1504d71e4cfb01566a28479ac8b8b032d13d3" Feb 28 14:14:05 crc kubenswrapper[4897]: I0228 14:14:05.301367 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538134-vrqtw" Feb 28 14:14:05 crc kubenswrapper[4897]: I0228 14:14:05.929885 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538128-6p2kf"] Feb 28 14:14:05 crc kubenswrapper[4897]: I0228 14:14:05.944401 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538128-6p2kf"] Feb 28 14:14:06 crc kubenswrapper[4897]: I0228 14:14:06.488933 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c071db2-5764-4d17-a5cb-1f0f7f54c4fb" path="/var/lib/kubelet/pods/9c071db2-5764-4d17-a5cb-1f0f7f54c4fb/volumes" Feb 28 14:14:33 crc kubenswrapper[4897]: I0228 14:14:33.371045 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 28 14:14:33 crc kubenswrapper[4897]: I0228 14:14:33.371652 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 14:14:33 crc kubenswrapper[4897]: I0228 14:14:33.371704 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brq22" Feb 28 14:14:33 crc kubenswrapper[4897]: I0228 14:14:33.372589 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"46622fdc81b8121032a4f67fbad65c2518e46caf160555ea308231319df04528"} pod="openshift-machine-config-operator/machine-config-daemon-brq22" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 14:14:33 crc kubenswrapper[4897]: I0228 14:14:33.372650 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" containerID="cri-o://46622fdc81b8121032a4f67fbad65c2518e46caf160555ea308231319df04528" gracePeriod=600 Feb 28 14:14:33 crc kubenswrapper[4897]: I0228 14:14:33.646119 4897 generic.go:334] "Generic (PLEG): container finished" podID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerID="46622fdc81b8121032a4f67fbad65c2518e46caf160555ea308231319df04528" exitCode=0 Feb 28 14:14:33 crc kubenswrapper[4897]: I0228 14:14:33.646164 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" 
event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerDied","Data":"46622fdc81b8121032a4f67fbad65c2518e46caf160555ea308231319df04528"} Feb 28 14:14:33 crc kubenswrapper[4897]: I0228 14:14:33.646204 4897 scope.go:117] "RemoveContainer" containerID="a565d5be8c3d16a7dd1744beb9c89934975eb5a241878dfe53fef3d98e1be2d2" Feb 28 14:14:34 crc kubenswrapper[4897]: I0228 14:14:34.662982 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerStarted","Data":"badfe283117bca71fcd1d917beb3f6d850e13521f8c84a43efd354fba13df106"} Feb 28 14:14:47 crc kubenswrapper[4897]: I0228 14:14:47.811508 4897 scope.go:117] "RemoveContainer" containerID="af106e6247fae6eaf4c646f0231c60ae49083f9e3c388a9176bf7fae4482142f" Feb 28 14:14:47 crc kubenswrapper[4897]: I0228 14:14:47.906977 4897 scope.go:117] "RemoveContainer" containerID="f72fec8556de747b4d21b389e5f069a8648f1d514159d0b3f73d99a62834e132" Feb 28 14:15:00 crc kubenswrapper[4897]: I0228 14:15:00.197894 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29538135-crxwl"] Feb 28 14:15:00 crc kubenswrapper[4897]: E0228 14:15:00.198862 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e951afc8-3d1f-41d1-8efc-5cb2a7713b89" containerName="oc" Feb 28 14:15:00 crc kubenswrapper[4897]: I0228 14:15:00.198877 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="e951afc8-3d1f-41d1-8efc-5cb2a7713b89" containerName="oc" Feb 28 14:15:00 crc kubenswrapper[4897]: I0228 14:15:00.199187 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="e951afc8-3d1f-41d1-8efc-5cb2a7713b89" containerName="oc" Feb 28 14:15:00 crc kubenswrapper[4897]: I0228 14:15:00.201765 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29538135-crxwl"] Feb 28 14:15:00 crc 
kubenswrapper[4897]: I0228 14:15:00.201857 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29538135-crxwl" Feb 28 14:15:00 crc kubenswrapper[4897]: I0228 14:15:00.211802 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 28 14:15:00 crc kubenswrapper[4897]: I0228 14:15:00.212066 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 28 14:15:00 crc kubenswrapper[4897]: I0228 14:15:00.366574 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0efa1812-7587-40b8-8577-769f9208e820-config-volume\") pod \"collect-profiles-29538135-crxwl\" (UID: \"0efa1812-7587-40b8-8577-769f9208e820\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538135-crxwl" Feb 28 14:15:00 crc kubenswrapper[4897]: I0228 14:15:00.366710 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0efa1812-7587-40b8-8577-769f9208e820-secret-volume\") pod \"collect-profiles-29538135-crxwl\" (UID: \"0efa1812-7587-40b8-8577-769f9208e820\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538135-crxwl" Feb 28 14:15:00 crc kubenswrapper[4897]: I0228 14:15:00.367201 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xj59\" (UniqueName: \"kubernetes.io/projected/0efa1812-7587-40b8-8577-769f9208e820-kube-api-access-8xj59\") pod \"collect-profiles-29538135-crxwl\" (UID: \"0efa1812-7587-40b8-8577-769f9208e820\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538135-crxwl" Feb 28 14:15:00 crc kubenswrapper[4897]: I0228 14:15:00.469060 4897 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xj59\" (UniqueName: \"kubernetes.io/projected/0efa1812-7587-40b8-8577-769f9208e820-kube-api-access-8xj59\") pod \"collect-profiles-29538135-crxwl\" (UID: \"0efa1812-7587-40b8-8577-769f9208e820\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538135-crxwl" Feb 28 14:15:00 crc kubenswrapper[4897]: I0228 14:15:00.469175 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0efa1812-7587-40b8-8577-769f9208e820-config-volume\") pod \"collect-profiles-29538135-crxwl\" (UID: \"0efa1812-7587-40b8-8577-769f9208e820\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538135-crxwl" Feb 28 14:15:00 crc kubenswrapper[4897]: I0228 14:15:00.469271 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0efa1812-7587-40b8-8577-769f9208e820-secret-volume\") pod \"collect-profiles-29538135-crxwl\" (UID: \"0efa1812-7587-40b8-8577-769f9208e820\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538135-crxwl" Feb 28 14:15:00 crc kubenswrapper[4897]: I0228 14:15:00.471080 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0efa1812-7587-40b8-8577-769f9208e820-config-volume\") pod \"collect-profiles-29538135-crxwl\" (UID: \"0efa1812-7587-40b8-8577-769f9208e820\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538135-crxwl" Feb 28 14:15:00 crc kubenswrapper[4897]: I0228 14:15:00.481772 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0efa1812-7587-40b8-8577-769f9208e820-secret-volume\") pod \"collect-profiles-29538135-crxwl\" (UID: \"0efa1812-7587-40b8-8577-769f9208e820\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29538135-crxwl" Feb 28 14:15:00 crc kubenswrapper[4897]: I0228 14:15:00.497575 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xj59\" (UniqueName: \"kubernetes.io/projected/0efa1812-7587-40b8-8577-769f9208e820-kube-api-access-8xj59\") pod \"collect-profiles-29538135-crxwl\" (UID: \"0efa1812-7587-40b8-8577-769f9208e820\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538135-crxwl" Feb 28 14:15:00 crc kubenswrapper[4897]: I0228 14:15:00.528574 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29538135-crxwl" Feb 28 14:15:00 crc kubenswrapper[4897]: I0228 14:15:00.835743 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29538135-crxwl"] Feb 28 14:15:00 crc kubenswrapper[4897]: I0228 14:15:00.997296 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29538135-crxwl" event={"ID":"0efa1812-7587-40b8-8577-769f9208e820","Type":"ContainerStarted","Data":"b182017fc3504cd2ac91bee217335b8787087b9df2cfd9bb9a33594e21262175"} Feb 28 14:15:02 crc kubenswrapper[4897]: I0228 14:15:02.009253 4897 generic.go:334] "Generic (PLEG): container finished" podID="0efa1812-7587-40b8-8577-769f9208e820" containerID="e96c9adfe5573b24f429a481178ee76850a7a701fe7e503c7dac101fbe0ece46" exitCode=0 Feb 28 14:15:02 crc kubenswrapper[4897]: I0228 14:15:02.009373 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29538135-crxwl" event={"ID":"0efa1812-7587-40b8-8577-769f9208e820","Type":"ContainerDied","Data":"e96c9adfe5573b24f429a481178ee76850a7a701fe7e503c7dac101fbe0ece46"} Feb 28 14:15:03 crc kubenswrapper[4897]: I0228 14:15:03.480581 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29538135-crxwl" Feb 28 14:15:03 crc kubenswrapper[4897]: I0228 14:15:03.645277 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0efa1812-7587-40b8-8577-769f9208e820-secret-volume\") pod \"0efa1812-7587-40b8-8577-769f9208e820\" (UID: \"0efa1812-7587-40b8-8577-769f9208e820\") " Feb 28 14:15:03 crc kubenswrapper[4897]: I0228 14:15:03.645411 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0efa1812-7587-40b8-8577-769f9208e820-config-volume\") pod \"0efa1812-7587-40b8-8577-769f9208e820\" (UID: \"0efa1812-7587-40b8-8577-769f9208e820\") " Feb 28 14:15:03 crc kubenswrapper[4897]: I0228 14:15:03.645577 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xj59\" (UniqueName: \"kubernetes.io/projected/0efa1812-7587-40b8-8577-769f9208e820-kube-api-access-8xj59\") pod \"0efa1812-7587-40b8-8577-769f9208e820\" (UID: \"0efa1812-7587-40b8-8577-769f9208e820\") " Feb 28 14:15:03 crc kubenswrapper[4897]: I0228 14:15:03.646538 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0efa1812-7587-40b8-8577-769f9208e820-config-volume" (OuterVolumeSpecName: "config-volume") pod "0efa1812-7587-40b8-8577-769f9208e820" (UID: "0efa1812-7587-40b8-8577-769f9208e820"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 14:15:03 crc kubenswrapper[4897]: I0228 14:15:03.647568 4897 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0efa1812-7587-40b8-8577-769f9208e820-config-volume\") on node \"crc\" DevicePath \"\"" Feb 28 14:15:03 crc kubenswrapper[4897]: I0228 14:15:03.652111 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0efa1812-7587-40b8-8577-769f9208e820-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0efa1812-7587-40b8-8577-769f9208e820" (UID: "0efa1812-7587-40b8-8577-769f9208e820"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 14:15:03 crc kubenswrapper[4897]: I0228 14:15:03.655093 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0efa1812-7587-40b8-8577-769f9208e820-kube-api-access-8xj59" (OuterVolumeSpecName: "kube-api-access-8xj59") pod "0efa1812-7587-40b8-8577-769f9208e820" (UID: "0efa1812-7587-40b8-8577-769f9208e820"). InnerVolumeSpecName "kube-api-access-8xj59". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:15:03 crc kubenswrapper[4897]: I0228 14:15:03.750073 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xj59\" (UniqueName: \"kubernetes.io/projected/0efa1812-7587-40b8-8577-769f9208e820-kube-api-access-8xj59\") on node \"crc\" DevicePath \"\"" Feb 28 14:15:03 crc kubenswrapper[4897]: I0228 14:15:03.750113 4897 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0efa1812-7587-40b8-8577-769f9208e820-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 28 14:15:04 crc kubenswrapper[4897]: I0228 14:15:04.034142 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29538135-crxwl" event={"ID":"0efa1812-7587-40b8-8577-769f9208e820","Type":"ContainerDied","Data":"b182017fc3504cd2ac91bee217335b8787087b9df2cfd9bb9a33594e21262175"} Feb 28 14:15:04 crc kubenswrapper[4897]: I0228 14:15:04.034184 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b182017fc3504cd2ac91bee217335b8787087b9df2cfd9bb9a33594e21262175" Feb 28 14:15:04 crc kubenswrapper[4897]: I0228 14:15:04.034217 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29538135-crxwl" Feb 28 14:15:04 crc kubenswrapper[4897]: I0228 14:15:04.581274 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29538090-d5qhv"] Feb 28 14:15:04 crc kubenswrapper[4897]: I0228 14:15:04.590905 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29538090-d5qhv"] Feb 28 14:15:06 crc kubenswrapper[4897]: I0228 14:15:06.478182 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f02d7df-23c0-449f-91a8-29e7e2ee7775" path="/var/lib/kubelet/pods/4f02d7df-23c0-449f-91a8-29e7e2ee7775/volumes" Feb 28 14:15:48 crc kubenswrapper[4897]: I0228 14:15:48.028183 4897 scope.go:117] "RemoveContainer" containerID="7a7c3911f2c74a15fcc8d8ab2a06e00ae0633ffeb5d89b6a7e29def3057aac4c" Feb 28 14:16:00 crc kubenswrapper[4897]: I0228 14:16:00.179659 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538136-lbgb2"] Feb 28 14:16:00 crc kubenswrapper[4897]: E0228 14:16:00.180961 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0efa1812-7587-40b8-8577-769f9208e820" containerName="collect-profiles" Feb 28 14:16:00 crc kubenswrapper[4897]: I0228 14:16:00.181019 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="0efa1812-7587-40b8-8577-769f9208e820" containerName="collect-profiles" Feb 28 14:16:00 crc kubenswrapper[4897]: I0228 14:16:00.181434 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="0efa1812-7587-40b8-8577-769f9208e820" containerName="collect-profiles" Feb 28 14:16:00 crc kubenswrapper[4897]: I0228 14:16:00.182668 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538136-lbgb2" Feb 28 14:16:00 crc kubenswrapper[4897]: I0228 14:16:00.191431 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 14:16:00 crc kubenswrapper[4897]: I0228 14:16:00.191616 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 14:16:00 crc kubenswrapper[4897]: I0228 14:16:00.191653 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 14:16:00 crc kubenswrapper[4897]: I0228 14:16:00.217443 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538136-lbgb2"] Feb 28 14:16:00 crc kubenswrapper[4897]: I0228 14:16:00.290953 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsr7k\" (UniqueName: \"kubernetes.io/projected/669b6241-8415-4133-a6a6-382fe77c0aa9-kube-api-access-xsr7k\") pod \"auto-csr-approver-29538136-lbgb2\" (UID: \"669b6241-8415-4133-a6a6-382fe77c0aa9\") " pod="openshift-infra/auto-csr-approver-29538136-lbgb2" Feb 28 14:16:00 crc kubenswrapper[4897]: I0228 14:16:00.393921 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsr7k\" (UniqueName: \"kubernetes.io/projected/669b6241-8415-4133-a6a6-382fe77c0aa9-kube-api-access-xsr7k\") pod \"auto-csr-approver-29538136-lbgb2\" (UID: \"669b6241-8415-4133-a6a6-382fe77c0aa9\") " pod="openshift-infra/auto-csr-approver-29538136-lbgb2" Feb 28 14:16:00 crc kubenswrapper[4897]: I0228 14:16:00.436450 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsr7k\" (UniqueName: \"kubernetes.io/projected/669b6241-8415-4133-a6a6-382fe77c0aa9-kube-api-access-xsr7k\") pod \"auto-csr-approver-29538136-lbgb2\" (UID: \"669b6241-8415-4133-a6a6-382fe77c0aa9\") " 
pod="openshift-infra/auto-csr-approver-29538136-lbgb2" Feb 28 14:16:00 crc kubenswrapper[4897]: I0228 14:16:00.512968 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538136-lbgb2" Feb 28 14:16:01 crc kubenswrapper[4897]: I0228 14:16:01.057162 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538136-lbgb2"] Feb 28 14:16:01 crc kubenswrapper[4897]: I0228 14:16:01.744051 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538136-lbgb2" event={"ID":"669b6241-8415-4133-a6a6-382fe77c0aa9","Type":"ContainerStarted","Data":"76bc40afb1765ef45cc0561f0cc586077948589816a5429b9f10d35e547dc2ac"} Feb 28 14:16:02 crc kubenswrapper[4897]: I0228 14:16:02.751929 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538136-lbgb2" event={"ID":"669b6241-8415-4133-a6a6-382fe77c0aa9","Type":"ContainerStarted","Data":"84d9aec33f4421010c9a51e9295a8dca933ee89ea8bc866b34a7ffaafa69cf44"} Feb 28 14:16:02 crc kubenswrapper[4897]: I0228 14:16:02.782466 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29538136-lbgb2" podStartSLOduration=1.621958131 podStartE2EDuration="2.782449238s" podCreationTimestamp="2026-02-28 14:16:00 +0000 UTC" firstStartedPulling="2026-02-28 14:16:01.062911807 +0000 UTC m=+3575.305232494" lastFinishedPulling="2026-02-28 14:16:02.223402904 +0000 UTC m=+3576.465723601" observedRunningTime="2026-02-28 14:16:02.774171998 +0000 UTC m=+3577.016492655" watchObservedRunningTime="2026-02-28 14:16:02.782449238 +0000 UTC m=+3577.024769895" Feb 28 14:16:03 crc kubenswrapper[4897]: I0228 14:16:03.767717 4897 generic.go:334] "Generic (PLEG): container finished" podID="669b6241-8415-4133-a6a6-382fe77c0aa9" containerID="84d9aec33f4421010c9a51e9295a8dca933ee89ea8bc866b34a7ffaafa69cf44" exitCode=0 Feb 28 14:16:03 crc 
kubenswrapper[4897]: I0228 14:16:03.767788 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538136-lbgb2" event={"ID":"669b6241-8415-4133-a6a6-382fe77c0aa9","Type":"ContainerDied","Data":"84d9aec33f4421010c9a51e9295a8dca933ee89ea8bc866b34a7ffaafa69cf44"} Feb 28 14:16:05 crc kubenswrapper[4897]: I0228 14:16:05.208005 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538136-lbgb2" Feb 28 14:16:05 crc kubenswrapper[4897]: I0228 14:16:05.228185 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsr7k\" (UniqueName: \"kubernetes.io/projected/669b6241-8415-4133-a6a6-382fe77c0aa9-kube-api-access-xsr7k\") pod \"669b6241-8415-4133-a6a6-382fe77c0aa9\" (UID: \"669b6241-8415-4133-a6a6-382fe77c0aa9\") " Feb 28 14:16:05 crc kubenswrapper[4897]: I0228 14:16:05.233737 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/669b6241-8415-4133-a6a6-382fe77c0aa9-kube-api-access-xsr7k" (OuterVolumeSpecName: "kube-api-access-xsr7k") pod "669b6241-8415-4133-a6a6-382fe77c0aa9" (UID: "669b6241-8415-4133-a6a6-382fe77c0aa9"). InnerVolumeSpecName "kube-api-access-xsr7k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:16:05 crc kubenswrapper[4897]: I0228 14:16:05.331902 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsr7k\" (UniqueName: \"kubernetes.io/projected/669b6241-8415-4133-a6a6-382fe77c0aa9-kube-api-access-xsr7k\") on node \"crc\" DevicePath \"\"" Feb 28 14:16:05 crc kubenswrapper[4897]: I0228 14:16:05.795363 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538136-lbgb2" event={"ID":"669b6241-8415-4133-a6a6-382fe77c0aa9","Type":"ContainerDied","Data":"76bc40afb1765ef45cc0561f0cc586077948589816a5429b9f10d35e547dc2ac"} Feb 28 14:16:05 crc kubenswrapper[4897]: I0228 14:16:05.795689 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76bc40afb1765ef45cc0561f0cc586077948589816a5429b9f10d35e547dc2ac" Feb 28 14:16:05 crc kubenswrapper[4897]: I0228 14:16:05.795455 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538136-lbgb2" Feb 28 14:16:06 crc kubenswrapper[4897]: I0228 14:16:06.295200 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538130-jm5gv"] Feb 28 14:16:06 crc kubenswrapper[4897]: I0228 14:16:06.309537 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538130-jm5gv"] Feb 28 14:16:06 crc kubenswrapper[4897]: I0228 14:16:06.494408 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="606897b8-41d5-4034-92c2-bf0c7423d0ac" path="/var/lib/kubelet/pods/606897b8-41d5-4034-92c2-bf0c7423d0ac/volumes" Feb 28 14:16:11 crc kubenswrapper[4897]: I0228 14:16:11.753851 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r7fhc"] Feb 28 14:16:11 crc kubenswrapper[4897]: E0228 14:16:11.754904 4897 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="669b6241-8415-4133-a6a6-382fe77c0aa9" containerName="oc" Feb 28 14:16:11 crc kubenswrapper[4897]: I0228 14:16:11.754919 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="669b6241-8415-4133-a6a6-382fe77c0aa9" containerName="oc" Feb 28 14:16:11 crc kubenswrapper[4897]: I0228 14:16:11.755191 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="669b6241-8415-4133-a6a6-382fe77c0aa9" containerName="oc" Feb 28 14:16:11 crc kubenswrapper[4897]: I0228 14:16:11.757078 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r7fhc" Feb 28 14:16:11 crc kubenswrapper[4897]: I0228 14:16:11.769082 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r7fhc"] Feb 28 14:16:11 crc kubenswrapper[4897]: I0228 14:16:11.874972 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8c8m\" (UniqueName: \"kubernetes.io/projected/eda763ad-fc59-4a0e-9cd0-58521b82eb34-kube-api-access-x8c8m\") pod \"certified-operators-r7fhc\" (UID: \"eda763ad-fc59-4a0e-9cd0-58521b82eb34\") " pod="openshift-marketplace/certified-operators-r7fhc" Feb 28 14:16:11 crc kubenswrapper[4897]: I0228 14:16:11.875057 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eda763ad-fc59-4a0e-9cd0-58521b82eb34-utilities\") pod \"certified-operators-r7fhc\" (UID: \"eda763ad-fc59-4a0e-9cd0-58521b82eb34\") " pod="openshift-marketplace/certified-operators-r7fhc" Feb 28 14:16:11 crc kubenswrapper[4897]: I0228 14:16:11.875253 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eda763ad-fc59-4a0e-9cd0-58521b82eb34-catalog-content\") pod \"certified-operators-r7fhc\" (UID: \"eda763ad-fc59-4a0e-9cd0-58521b82eb34\") " 
pod="openshift-marketplace/certified-operators-r7fhc" Feb 28 14:16:11 crc kubenswrapper[4897]: I0228 14:16:11.977125 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8c8m\" (UniqueName: \"kubernetes.io/projected/eda763ad-fc59-4a0e-9cd0-58521b82eb34-kube-api-access-x8c8m\") pod \"certified-operators-r7fhc\" (UID: \"eda763ad-fc59-4a0e-9cd0-58521b82eb34\") " pod="openshift-marketplace/certified-operators-r7fhc" Feb 28 14:16:11 crc kubenswrapper[4897]: I0228 14:16:11.977198 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eda763ad-fc59-4a0e-9cd0-58521b82eb34-utilities\") pod \"certified-operators-r7fhc\" (UID: \"eda763ad-fc59-4a0e-9cd0-58521b82eb34\") " pod="openshift-marketplace/certified-operators-r7fhc" Feb 28 14:16:11 crc kubenswrapper[4897]: I0228 14:16:11.977330 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eda763ad-fc59-4a0e-9cd0-58521b82eb34-catalog-content\") pod \"certified-operators-r7fhc\" (UID: \"eda763ad-fc59-4a0e-9cd0-58521b82eb34\") " pod="openshift-marketplace/certified-operators-r7fhc" Feb 28 14:16:11 crc kubenswrapper[4897]: I0228 14:16:11.977741 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eda763ad-fc59-4a0e-9cd0-58521b82eb34-utilities\") pod \"certified-operators-r7fhc\" (UID: \"eda763ad-fc59-4a0e-9cd0-58521b82eb34\") " pod="openshift-marketplace/certified-operators-r7fhc" Feb 28 14:16:11 crc kubenswrapper[4897]: I0228 14:16:11.977781 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eda763ad-fc59-4a0e-9cd0-58521b82eb34-catalog-content\") pod \"certified-operators-r7fhc\" (UID: \"eda763ad-fc59-4a0e-9cd0-58521b82eb34\") " 
pod="openshift-marketplace/certified-operators-r7fhc" Feb 28 14:16:12 crc kubenswrapper[4897]: I0228 14:16:12.004683 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8c8m\" (UniqueName: \"kubernetes.io/projected/eda763ad-fc59-4a0e-9cd0-58521b82eb34-kube-api-access-x8c8m\") pod \"certified-operators-r7fhc\" (UID: \"eda763ad-fc59-4a0e-9cd0-58521b82eb34\") " pod="openshift-marketplace/certified-operators-r7fhc" Feb 28 14:16:12 crc kubenswrapper[4897]: I0228 14:16:12.081603 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r7fhc" Feb 28 14:16:12 crc kubenswrapper[4897]: I0228 14:16:12.583114 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r7fhc"] Feb 28 14:16:12 crc kubenswrapper[4897]: I0228 14:16:12.873933 4897 generic.go:334] "Generic (PLEG): container finished" podID="eda763ad-fc59-4a0e-9cd0-58521b82eb34" containerID="928373f5640ebe3908f196b08f209d88f1f9e86c3f2c6f0fe3ef384a9d1061d7" exitCode=0 Feb 28 14:16:12 crc kubenswrapper[4897]: I0228 14:16:12.873975 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7fhc" event={"ID":"eda763ad-fc59-4a0e-9cd0-58521b82eb34","Type":"ContainerDied","Data":"928373f5640ebe3908f196b08f209d88f1f9e86c3f2c6f0fe3ef384a9d1061d7"} Feb 28 14:16:12 crc kubenswrapper[4897]: I0228 14:16:12.874016 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7fhc" event={"ID":"eda763ad-fc59-4a0e-9cd0-58521b82eb34","Type":"ContainerStarted","Data":"5f04902e62b7030e7752dadf9e8489034f57786fdd1deef20d4205473cfa1dbd"} Feb 28 14:16:13 crc kubenswrapper[4897]: E0228 14:16:13.387792 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=c649bff3a95aba662bcce2caae28c3708550f8f13ed49f0891408168e71420e7/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 28 14:16:13 crc kubenswrapper[4897]: E0228 14:16:13.388270 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8c8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-r7fhc_openshift-marketplace(eda763ad-fc59-4a0e-9cd0-58521b82eb34): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=c649bff3a95aba662bcce2caae28c3708550f8f13ed49f0891408168e71420e7/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 14:16:13 crc kubenswrapper[4897]: E0228 14:16:13.389470 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=c649bff3a95aba662bcce2caae28c3708550f8f13ed49f0891408168e71420e7/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/certified-operators-r7fhc" podUID="eda763ad-fc59-4a0e-9cd0-58521b82eb34" Feb 28 14:16:13 crc kubenswrapper[4897]: E0228 14:16:13.902544 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-r7fhc" podUID="eda763ad-fc59-4a0e-9cd0-58521b82eb34" Feb 28 14:16:26 crc kubenswrapper[4897]: E0228 14:16:26.012060 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=c649bff3a95aba662bcce2caae28c3708550f8f13ed49f0891408168e71420e7/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 28 14:16:26 crc kubenswrapper[4897]: E0228 14:16:26.012918 4897 kuberuntime_manager.go:1274] "Unhandled 
Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8c8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-r7fhc_openshift-marketplace(eda763ad-fc59-4a0e-9cd0-58521b82eb34): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=c649bff3a95aba662bcce2caae28c3708550f8f13ed49f0891408168e71420e7/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 
14:16:26 crc kubenswrapper[4897]: E0228 14:16:26.014443 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=c649bff3a95aba662bcce2caae28c3708550f8f13ed49f0891408168e71420e7/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/certified-operators-r7fhc" podUID="eda763ad-fc59-4a0e-9cd0-58521b82eb34" Feb 28 14:16:33 crc kubenswrapper[4897]: I0228 14:16:33.371644 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 14:16:33 crc kubenswrapper[4897]: I0228 14:16:33.372258 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 14:16:40 crc kubenswrapper[4897]: E0228 14:16:40.460551 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-r7fhc" podUID="eda763ad-fc59-4a0e-9cd0-58521b82eb34" Feb 28 14:16:48 crc kubenswrapper[4897]: I0228 14:16:48.144955 4897 scope.go:117] "RemoveContainer" containerID="cbb827dd29c9c575beeba819dea9626b1c21340186a62028d32cf6f2476dbee4" Feb 28 14:16:52 crc kubenswrapper[4897]: E0228 14:16:52.113193 4897 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=c649bff3a95aba662bcce2caae28c3708550f8f13ed49f0891408168e71420e7/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 28 14:16:52 crc kubenswrapper[4897]: E0228 14:16:52.113808 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8c8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},Start
upProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-r7fhc_openshift-marketplace(eda763ad-fc59-4a0e-9cd0-58521b82eb34): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=c649bff3a95aba662bcce2caae28c3708550f8f13ed49f0891408168e71420e7/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 14:16:52 crc kubenswrapper[4897]: E0228 14:16:52.114884 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=c649bff3a95aba662bcce2caae28c3708550f8f13ed49f0891408168e71420e7/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/certified-operators-r7fhc" podUID="eda763ad-fc59-4a0e-9cd0-58521b82eb34" Feb 28 14:17:03 crc kubenswrapper[4897]: I0228 14:17:03.370482 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 14:17:03 crc kubenswrapper[4897]: I0228 14:17:03.371067 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 14:17:05 crc kubenswrapper[4897]: E0228 14:17:05.459076 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-r7fhc" podUID="eda763ad-fc59-4a0e-9cd0-58521b82eb34" Feb 28 14:17:18 crc kubenswrapper[4897]: E0228 14:17:18.460401 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-r7fhc" podUID="eda763ad-fc59-4a0e-9cd0-58521b82eb34" Feb 28 14:17:31 crc kubenswrapper[4897]: E0228 14:17:31.459512 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-r7fhc" podUID="eda763ad-fc59-4a0e-9cd0-58521b82eb34" Feb 28 14:17:33 crc kubenswrapper[4897]: I0228 14:17:33.371099 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 14:17:33 crc kubenswrapper[4897]: I0228 14:17:33.372339 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 14:17:33 crc kubenswrapper[4897]: I0228 14:17:33.372411 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brq22" Feb 28 14:17:33 crc 
kubenswrapper[4897]: I0228 14:17:33.373417 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"badfe283117bca71fcd1d917beb3f6d850e13521f8c84a43efd354fba13df106"} pod="openshift-machine-config-operator/machine-config-daemon-brq22" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 14:17:33 crc kubenswrapper[4897]: I0228 14:17:33.373486 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" containerID="cri-o://badfe283117bca71fcd1d917beb3f6d850e13521f8c84a43efd354fba13df106" gracePeriod=600 Feb 28 14:17:33 crc kubenswrapper[4897]: E0228 14:17:33.502630 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:17:33 crc kubenswrapper[4897]: I0228 14:17:33.820613 4897 generic.go:334] "Generic (PLEG): container finished" podID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerID="badfe283117bca71fcd1d917beb3f6d850e13521f8c84a43efd354fba13df106" exitCode=0 Feb 28 14:17:33 crc kubenswrapper[4897]: I0228 14:17:33.820694 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerDied","Data":"badfe283117bca71fcd1d917beb3f6d850e13521f8c84a43efd354fba13df106"} Feb 28 14:17:33 crc kubenswrapper[4897]: I0228 14:17:33.820790 4897 scope.go:117] "RemoveContainer" 
containerID="46622fdc81b8121032a4f67fbad65c2518e46caf160555ea308231319df04528" Feb 28 14:17:33 crc kubenswrapper[4897]: I0228 14:17:33.821540 4897 scope.go:117] "RemoveContainer" containerID="badfe283117bca71fcd1d917beb3f6d850e13521f8c84a43efd354fba13df106" Feb 28 14:17:33 crc kubenswrapper[4897]: E0228 14:17:33.822169 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:17:46 crc kubenswrapper[4897]: I0228 14:17:46.465947 4897 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 14:17:47 crc kubenswrapper[4897]: I0228 14:17:47.456778 4897 scope.go:117] "RemoveContainer" containerID="badfe283117bca71fcd1d917beb3f6d850e13521f8c84a43efd354fba13df106" Feb 28 14:17:47 crc kubenswrapper[4897]: E0228 14:17:47.457465 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:17:47 crc kubenswrapper[4897]: I0228 14:17:47.998937 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7fhc" event={"ID":"eda763ad-fc59-4a0e-9cd0-58521b82eb34","Type":"ContainerStarted","Data":"127d5cac6934a1711dfc348cf317a221282c50fa081670f58e66e7ea5a28cebe"} Feb 28 14:17:50 crc kubenswrapper[4897]: I0228 14:17:50.028810 4897 generic.go:334] 
"Generic (PLEG): container finished" podID="eda763ad-fc59-4a0e-9cd0-58521b82eb34" containerID="127d5cac6934a1711dfc348cf317a221282c50fa081670f58e66e7ea5a28cebe" exitCode=0 Feb 28 14:17:50 crc kubenswrapper[4897]: I0228 14:17:50.029072 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7fhc" event={"ID":"eda763ad-fc59-4a0e-9cd0-58521b82eb34","Type":"ContainerDied","Data":"127d5cac6934a1711dfc348cf317a221282c50fa081670f58e66e7ea5a28cebe"} Feb 28 14:17:51 crc kubenswrapper[4897]: I0228 14:17:51.053045 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7fhc" event={"ID":"eda763ad-fc59-4a0e-9cd0-58521b82eb34","Type":"ContainerStarted","Data":"e848731a12614c207a5613cc4c548551366390aa2c20d85b34dc3e73c29e829c"} Feb 28 14:17:51 crc kubenswrapper[4897]: I0228 14:17:51.099876 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r7fhc" podStartSLOduration=2.526502137 podStartE2EDuration="1m40.099848339s" podCreationTimestamp="2026-02-28 14:16:11 +0000 UTC" firstStartedPulling="2026-02-28 14:16:12.875831212 +0000 UTC m=+3587.118151869" lastFinishedPulling="2026-02-28 14:17:50.449177374 +0000 UTC m=+3684.691498071" observedRunningTime="2026-02-28 14:17:51.084991915 +0000 UTC m=+3685.327312602" watchObservedRunningTime="2026-02-28 14:17:51.099848339 +0000 UTC m=+3685.342169036" Feb 28 14:17:52 crc kubenswrapper[4897]: I0228 14:17:52.081682 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r7fhc" Feb 28 14:17:52 crc kubenswrapper[4897]: I0228 14:17:52.082750 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r7fhc" Feb 28 14:17:53 crc kubenswrapper[4897]: I0228 14:17:53.136430 4897 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/certified-operators-r7fhc" podUID="eda763ad-fc59-4a0e-9cd0-58521b82eb34" containerName="registry-server" probeResult="failure" output=< Feb 28 14:17:53 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 28 14:17:53 crc kubenswrapper[4897]: > Feb 28 14:17:59 crc kubenswrapper[4897]: I0228 14:17:59.456718 4897 scope.go:117] "RemoveContainer" containerID="badfe283117bca71fcd1d917beb3f6d850e13521f8c84a43efd354fba13df106" Feb 28 14:17:59 crc kubenswrapper[4897]: E0228 14:17:59.458025 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:18:00 crc kubenswrapper[4897]: I0228 14:18:00.184795 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538138-69vxt"] Feb 28 14:18:00 crc kubenswrapper[4897]: I0228 14:18:00.186547 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538138-69vxt" Feb 28 14:18:00 crc kubenswrapper[4897]: I0228 14:18:00.190475 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 14:18:00 crc kubenswrapper[4897]: I0228 14:18:00.192611 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 14:18:00 crc kubenswrapper[4897]: I0228 14:18:00.193185 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 14:18:00 crc kubenswrapper[4897]: I0228 14:18:00.196842 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538138-69vxt"] Feb 28 14:18:00 crc kubenswrapper[4897]: I0228 14:18:00.343631 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95s69\" (UniqueName: \"kubernetes.io/projected/c63ce165-1697-4ed8-9276-6fe97714e195-kube-api-access-95s69\") pod \"auto-csr-approver-29538138-69vxt\" (UID: \"c63ce165-1697-4ed8-9276-6fe97714e195\") " pod="openshift-infra/auto-csr-approver-29538138-69vxt" Feb 28 14:18:00 crc kubenswrapper[4897]: I0228 14:18:00.447277 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95s69\" (UniqueName: \"kubernetes.io/projected/c63ce165-1697-4ed8-9276-6fe97714e195-kube-api-access-95s69\") pod \"auto-csr-approver-29538138-69vxt\" (UID: \"c63ce165-1697-4ed8-9276-6fe97714e195\") " pod="openshift-infra/auto-csr-approver-29538138-69vxt" Feb 28 14:18:00 crc kubenswrapper[4897]: I0228 14:18:00.483566 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95s69\" (UniqueName: \"kubernetes.io/projected/c63ce165-1697-4ed8-9276-6fe97714e195-kube-api-access-95s69\") pod \"auto-csr-approver-29538138-69vxt\" (UID: \"c63ce165-1697-4ed8-9276-6fe97714e195\") " 
pod="openshift-infra/auto-csr-approver-29538138-69vxt" Feb 28 14:18:00 crc kubenswrapper[4897]: I0228 14:18:00.519196 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538138-69vxt" Feb 28 14:18:01 crc kubenswrapper[4897]: I0228 14:18:01.005551 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538138-69vxt"] Feb 28 14:18:01 crc kubenswrapper[4897]: W0228 14:18:01.009859 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc63ce165_1697_4ed8_9276_6fe97714e195.slice/crio-5cc72605ec6e95bd8179af10c04874175dff306d6c476648b7c6d06c68146c05 WatchSource:0}: Error finding container 5cc72605ec6e95bd8179af10c04874175dff306d6c476648b7c6d06c68146c05: Status 404 returned error can't find the container with id 5cc72605ec6e95bd8179af10c04874175dff306d6c476648b7c6d06c68146c05 Feb 28 14:18:01 crc kubenswrapper[4897]: I0228 14:18:01.188999 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538138-69vxt" event={"ID":"c63ce165-1697-4ed8-9276-6fe97714e195","Type":"ContainerStarted","Data":"5cc72605ec6e95bd8179af10c04874175dff306d6c476648b7c6d06c68146c05"} Feb 28 14:18:02 crc kubenswrapper[4897]: I0228 14:18:02.170051 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r7fhc" Feb 28 14:18:02 crc kubenswrapper[4897]: I0228 14:18:02.204225 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538138-69vxt" event={"ID":"c63ce165-1697-4ed8-9276-6fe97714e195","Type":"ContainerStarted","Data":"8ee9b95f18cb5befe8acec206724e2c7cf7f98be6fa1db3522a250e86dbe3f0d"} Feb 28 14:18:02 crc kubenswrapper[4897]: I0228 14:18:02.233704 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29538138-69vxt" 
podStartSLOduration=1.448591481 podStartE2EDuration="2.233684538s" podCreationTimestamp="2026-02-28 14:18:00 +0000 UTC" firstStartedPulling="2026-02-28 14:18:01.0128081 +0000 UTC m=+3695.255128787" lastFinishedPulling="2026-02-28 14:18:01.797901177 +0000 UTC m=+3696.040221844" observedRunningTime="2026-02-28 14:18:02.218911237 +0000 UTC m=+3696.461231934" watchObservedRunningTime="2026-02-28 14:18:02.233684538 +0000 UTC m=+3696.476005195" Feb 28 14:18:02 crc kubenswrapper[4897]: I0228 14:18:02.235594 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r7fhc" Feb 28 14:18:02 crc kubenswrapper[4897]: I0228 14:18:02.411263 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r7fhc"] Feb 28 14:18:03 crc kubenswrapper[4897]: I0228 14:18:03.220508 4897 generic.go:334] "Generic (PLEG): container finished" podID="c63ce165-1697-4ed8-9276-6fe97714e195" containerID="8ee9b95f18cb5befe8acec206724e2c7cf7f98be6fa1db3522a250e86dbe3f0d" exitCode=0 Feb 28 14:18:03 crc kubenswrapper[4897]: I0228 14:18:03.220580 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538138-69vxt" event={"ID":"c63ce165-1697-4ed8-9276-6fe97714e195","Type":"ContainerDied","Data":"8ee9b95f18cb5befe8acec206724e2c7cf7f98be6fa1db3522a250e86dbe3f0d"} Feb 28 14:18:03 crc kubenswrapper[4897]: I0228 14:18:03.221243 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r7fhc" podUID="eda763ad-fc59-4a0e-9cd0-58521b82eb34" containerName="registry-server" containerID="cri-o://e848731a12614c207a5613cc4c548551366390aa2c20d85b34dc3e73c29e829c" gracePeriod=2 Feb 28 14:18:03 crc kubenswrapper[4897]: I0228 14:18:03.706320 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r7fhc" Feb 28 14:18:03 crc kubenswrapper[4897]: I0228 14:18:03.831609 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eda763ad-fc59-4a0e-9cd0-58521b82eb34-catalog-content\") pod \"eda763ad-fc59-4a0e-9cd0-58521b82eb34\" (UID: \"eda763ad-fc59-4a0e-9cd0-58521b82eb34\") " Feb 28 14:18:03 crc kubenswrapper[4897]: I0228 14:18:03.831856 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eda763ad-fc59-4a0e-9cd0-58521b82eb34-utilities\") pod \"eda763ad-fc59-4a0e-9cd0-58521b82eb34\" (UID: \"eda763ad-fc59-4a0e-9cd0-58521b82eb34\") " Feb 28 14:18:03 crc kubenswrapper[4897]: I0228 14:18:03.831904 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8c8m\" (UniqueName: \"kubernetes.io/projected/eda763ad-fc59-4a0e-9cd0-58521b82eb34-kube-api-access-x8c8m\") pod \"eda763ad-fc59-4a0e-9cd0-58521b82eb34\" (UID: \"eda763ad-fc59-4a0e-9cd0-58521b82eb34\") " Feb 28 14:18:03 crc kubenswrapper[4897]: I0228 14:18:03.834873 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eda763ad-fc59-4a0e-9cd0-58521b82eb34-utilities" (OuterVolumeSpecName: "utilities") pod "eda763ad-fc59-4a0e-9cd0-58521b82eb34" (UID: "eda763ad-fc59-4a0e-9cd0-58521b82eb34"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:18:03 crc kubenswrapper[4897]: I0228 14:18:03.840097 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eda763ad-fc59-4a0e-9cd0-58521b82eb34-kube-api-access-x8c8m" (OuterVolumeSpecName: "kube-api-access-x8c8m") pod "eda763ad-fc59-4a0e-9cd0-58521b82eb34" (UID: "eda763ad-fc59-4a0e-9cd0-58521b82eb34"). InnerVolumeSpecName "kube-api-access-x8c8m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:18:03 crc kubenswrapper[4897]: I0228 14:18:03.928456 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eda763ad-fc59-4a0e-9cd0-58521b82eb34-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eda763ad-fc59-4a0e-9cd0-58521b82eb34" (UID: "eda763ad-fc59-4a0e-9cd0-58521b82eb34"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:18:03 crc kubenswrapper[4897]: I0228 14:18:03.935498 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eda763ad-fc59-4a0e-9cd0-58521b82eb34-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 14:18:03 crc kubenswrapper[4897]: I0228 14:18:03.935559 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8c8m\" (UniqueName: \"kubernetes.io/projected/eda763ad-fc59-4a0e-9cd0-58521b82eb34-kube-api-access-x8c8m\") on node \"crc\" DevicePath \"\"" Feb 28 14:18:03 crc kubenswrapper[4897]: I0228 14:18:03.935574 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eda763ad-fc59-4a0e-9cd0-58521b82eb34-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 14:18:04 crc kubenswrapper[4897]: I0228 14:18:04.237909 4897 generic.go:334] "Generic (PLEG): container finished" podID="eda763ad-fc59-4a0e-9cd0-58521b82eb34" containerID="e848731a12614c207a5613cc4c548551366390aa2c20d85b34dc3e73c29e829c" exitCode=0 Feb 28 14:18:04 crc kubenswrapper[4897]: I0228 14:18:04.238054 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7fhc" event={"ID":"eda763ad-fc59-4a0e-9cd0-58521b82eb34","Type":"ContainerDied","Data":"e848731a12614c207a5613cc4c548551366390aa2c20d85b34dc3e73c29e829c"} Feb 28 14:18:04 crc kubenswrapper[4897]: I0228 14:18:04.238087 4897 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r7fhc" Feb 28 14:18:04 crc kubenswrapper[4897]: I0228 14:18:04.238141 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7fhc" event={"ID":"eda763ad-fc59-4a0e-9cd0-58521b82eb34","Type":"ContainerDied","Data":"5f04902e62b7030e7752dadf9e8489034f57786fdd1deef20d4205473cfa1dbd"} Feb 28 14:18:04 crc kubenswrapper[4897]: I0228 14:18:04.238173 4897 scope.go:117] "RemoveContainer" containerID="e848731a12614c207a5613cc4c548551366390aa2c20d85b34dc3e73c29e829c" Feb 28 14:18:04 crc kubenswrapper[4897]: I0228 14:18:04.290137 4897 scope.go:117] "RemoveContainer" containerID="127d5cac6934a1711dfc348cf317a221282c50fa081670f58e66e7ea5a28cebe" Feb 28 14:18:04 crc kubenswrapper[4897]: I0228 14:18:04.313708 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r7fhc"] Feb 28 14:18:04 crc kubenswrapper[4897]: I0228 14:18:04.327229 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r7fhc"] Feb 28 14:18:04 crc kubenswrapper[4897]: I0228 14:18:04.335852 4897 scope.go:117] "RemoveContainer" containerID="928373f5640ebe3908f196b08f209d88f1f9e86c3f2c6f0fe3ef384a9d1061d7" Feb 28 14:18:04 crc kubenswrapper[4897]: I0228 14:18:04.433051 4897 scope.go:117] "RemoveContainer" containerID="e848731a12614c207a5613cc4c548551366390aa2c20d85b34dc3e73c29e829c" Feb 28 14:18:04 crc kubenswrapper[4897]: E0228 14:18:04.433970 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e848731a12614c207a5613cc4c548551366390aa2c20d85b34dc3e73c29e829c\": container with ID starting with e848731a12614c207a5613cc4c548551366390aa2c20d85b34dc3e73c29e829c not found: ID does not exist" containerID="e848731a12614c207a5613cc4c548551366390aa2c20d85b34dc3e73c29e829c" Feb 28 14:18:04 crc kubenswrapper[4897]: I0228 14:18:04.434022 
4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e848731a12614c207a5613cc4c548551366390aa2c20d85b34dc3e73c29e829c"} err="failed to get container status \"e848731a12614c207a5613cc4c548551366390aa2c20d85b34dc3e73c29e829c\": rpc error: code = NotFound desc = could not find container \"e848731a12614c207a5613cc4c548551366390aa2c20d85b34dc3e73c29e829c\": container with ID starting with e848731a12614c207a5613cc4c548551366390aa2c20d85b34dc3e73c29e829c not found: ID does not exist" Feb 28 14:18:04 crc kubenswrapper[4897]: I0228 14:18:04.434056 4897 scope.go:117] "RemoveContainer" containerID="127d5cac6934a1711dfc348cf317a221282c50fa081670f58e66e7ea5a28cebe" Feb 28 14:18:04 crc kubenswrapper[4897]: E0228 14:18:04.434501 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"127d5cac6934a1711dfc348cf317a221282c50fa081670f58e66e7ea5a28cebe\": container with ID starting with 127d5cac6934a1711dfc348cf317a221282c50fa081670f58e66e7ea5a28cebe not found: ID does not exist" containerID="127d5cac6934a1711dfc348cf317a221282c50fa081670f58e66e7ea5a28cebe" Feb 28 14:18:04 crc kubenswrapper[4897]: I0228 14:18:04.434556 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"127d5cac6934a1711dfc348cf317a221282c50fa081670f58e66e7ea5a28cebe"} err="failed to get container status \"127d5cac6934a1711dfc348cf317a221282c50fa081670f58e66e7ea5a28cebe\": rpc error: code = NotFound desc = could not find container \"127d5cac6934a1711dfc348cf317a221282c50fa081670f58e66e7ea5a28cebe\": container with ID starting with 127d5cac6934a1711dfc348cf317a221282c50fa081670f58e66e7ea5a28cebe not found: ID does not exist" Feb 28 14:18:04 crc kubenswrapper[4897]: I0228 14:18:04.434588 4897 scope.go:117] "RemoveContainer" containerID="928373f5640ebe3908f196b08f209d88f1f9e86c3f2c6f0fe3ef384a9d1061d7" Feb 28 14:18:04 crc kubenswrapper[4897]: E0228 
14:18:04.434884 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"928373f5640ebe3908f196b08f209d88f1f9e86c3f2c6f0fe3ef384a9d1061d7\": container with ID starting with 928373f5640ebe3908f196b08f209d88f1f9e86c3f2c6f0fe3ef384a9d1061d7 not found: ID does not exist" containerID="928373f5640ebe3908f196b08f209d88f1f9e86c3f2c6f0fe3ef384a9d1061d7" Feb 28 14:18:04 crc kubenswrapper[4897]: I0228 14:18:04.434933 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"928373f5640ebe3908f196b08f209d88f1f9e86c3f2c6f0fe3ef384a9d1061d7"} err="failed to get container status \"928373f5640ebe3908f196b08f209d88f1f9e86c3f2c6f0fe3ef384a9d1061d7\": rpc error: code = NotFound desc = could not find container \"928373f5640ebe3908f196b08f209d88f1f9e86c3f2c6f0fe3ef384a9d1061d7\": container with ID starting with 928373f5640ebe3908f196b08f209d88f1f9e86c3f2c6f0fe3ef384a9d1061d7 not found: ID does not exist" Feb 28 14:18:04 crc kubenswrapper[4897]: I0228 14:18:04.474217 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eda763ad-fc59-4a0e-9cd0-58521b82eb34" path="/var/lib/kubelet/pods/eda763ad-fc59-4a0e-9cd0-58521b82eb34/volumes" Feb 28 14:18:04 crc kubenswrapper[4897]: I0228 14:18:04.684053 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538138-69vxt" Feb 28 14:18:04 crc kubenswrapper[4897]: I0228 14:18:04.857438 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95s69\" (UniqueName: \"kubernetes.io/projected/c63ce165-1697-4ed8-9276-6fe97714e195-kube-api-access-95s69\") pod \"c63ce165-1697-4ed8-9276-6fe97714e195\" (UID: \"c63ce165-1697-4ed8-9276-6fe97714e195\") " Feb 28 14:18:04 crc kubenswrapper[4897]: I0228 14:18:04.865176 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c63ce165-1697-4ed8-9276-6fe97714e195-kube-api-access-95s69" (OuterVolumeSpecName: "kube-api-access-95s69") pod "c63ce165-1697-4ed8-9276-6fe97714e195" (UID: "c63ce165-1697-4ed8-9276-6fe97714e195"). InnerVolumeSpecName "kube-api-access-95s69". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:18:04 crc kubenswrapper[4897]: I0228 14:18:04.960667 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95s69\" (UniqueName: \"kubernetes.io/projected/c63ce165-1697-4ed8-9276-6fe97714e195-kube-api-access-95s69\") on node \"crc\" DevicePath \"\"" Feb 28 14:18:05 crc kubenswrapper[4897]: I0228 14:18:05.251474 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538138-69vxt" Feb 28 14:18:05 crc kubenswrapper[4897]: I0228 14:18:05.251462 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538138-69vxt" event={"ID":"c63ce165-1697-4ed8-9276-6fe97714e195","Type":"ContainerDied","Data":"5cc72605ec6e95bd8179af10c04874175dff306d6c476648b7c6d06c68146c05"} Feb 28 14:18:05 crc kubenswrapper[4897]: I0228 14:18:05.251636 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cc72605ec6e95bd8179af10c04874175dff306d6c476648b7c6d06c68146c05" Feb 28 14:18:05 crc kubenswrapper[4897]: I0228 14:18:05.324890 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538132-kwg2w"] Feb 28 14:18:05 crc kubenswrapper[4897]: I0228 14:18:05.336452 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538132-kwg2w"] Feb 28 14:18:06 crc kubenswrapper[4897]: I0228 14:18:06.481559 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31c297b7-8688-439d-b935-90faa8af4f55" path="/var/lib/kubelet/pods/31c297b7-8688-439d-b935-90faa8af4f55/volumes" Feb 28 14:18:13 crc kubenswrapper[4897]: I0228 14:18:13.457477 4897 scope.go:117] "RemoveContainer" containerID="badfe283117bca71fcd1d917beb3f6d850e13521f8c84a43efd354fba13df106" Feb 28 14:18:13 crc kubenswrapper[4897]: E0228 14:18:13.458729 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:18:25 crc kubenswrapper[4897]: I0228 14:18:25.456342 4897 scope.go:117] "RemoveContainer" 
containerID="badfe283117bca71fcd1d917beb3f6d850e13521f8c84a43efd354fba13df106" Feb 28 14:18:25 crc kubenswrapper[4897]: E0228 14:18:25.457107 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:18:39 crc kubenswrapper[4897]: I0228 14:18:39.456154 4897 scope.go:117] "RemoveContainer" containerID="badfe283117bca71fcd1d917beb3f6d850e13521f8c84a43efd354fba13df106" Feb 28 14:18:39 crc kubenswrapper[4897]: E0228 14:18:39.457048 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:18:48 crc kubenswrapper[4897]: I0228 14:18:48.264225 4897 scope.go:117] "RemoveContainer" containerID="d8946e594a54d3af87bd4c7d1abca42856763cf3240b9f89764bbd4ff09a0f70" Feb 28 14:18:54 crc kubenswrapper[4897]: I0228 14:18:54.457514 4897 scope.go:117] "RemoveContainer" containerID="badfe283117bca71fcd1d917beb3f6d850e13521f8c84a43efd354fba13df106" Feb 28 14:18:54 crc kubenswrapper[4897]: E0228 14:18:54.458760 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:19:05 crc kubenswrapper[4897]: I0228 14:19:05.456470 4897 scope.go:117] "RemoveContainer" containerID="badfe283117bca71fcd1d917beb3f6d850e13521f8c84a43efd354fba13df106" Feb 28 14:19:05 crc kubenswrapper[4897]: E0228 14:19:05.457502 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:19:20 crc kubenswrapper[4897]: I0228 14:19:20.456659 4897 scope.go:117] "RemoveContainer" containerID="badfe283117bca71fcd1d917beb3f6d850e13521f8c84a43efd354fba13df106" Feb 28 14:19:20 crc kubenswrapper[4897]: E0228 14:19:20.457856 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:19:23 crc kubenswrapper[4897]: I0228 14:19:23.561849 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8q929"] Feb 28 14:19:23 crc kubenswrapper[4897]: E0228 14:19:23.562861 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda763ad-fc59-4a0e-9cd0-58521b82eb34" containerName="extract-utilities" Feb 28 14:19:23 crc kubenswrapper[4897]: I0228 14:19:23.562884 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda763ad-fc59-4a0e-9cd0-58521b82eb34" 
containerName="extract-utilities" Feb 28 14:19:23 crc kubenswrapper[4897]: E0228 14:19:23.562907 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda763ad-fc59-4a0e-9cd0-58521b82eb34" containerName="registry-server" Feb 28 14:19:23 crc kubenswrapper[4897]: I0228 14:19:23.562918 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda763ad-fc59-4a0e-9cd0-58521b82eb34" containerName="registry-server" Feb 28 14:19:23 crc kubenswrapper[4897]: E0228 14:19:23.562996 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c63ce165-1697-4ed8-9276-6fe97714e195" containerName="oc" Feb 28 14:19:23 crc kubenswrapper[4897]: I0228 14:19:23.563010 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="c63ce165-1697-4ed8-9276-6fe97714e195" containerName="oc" Feb 28 14:19:23 crc kubenswrapper[4897]: E0228 14:19:23.563030 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda763ad-fc59-4a0e-9cd0-58521b82eb34" containerName="extract-content" Feb 28 14:19:23 crc kubenswrapper[4897]: I0228 14:19:23.563040 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda763ad-fc59-4a0e-9cd0-58521b82eb34" containerName="extract-content" Feb 28 14:19:23 crc kubenswrapper[4897]: I0228 14:19:23.563375 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="eda763ad-fc59-4a0e-9cd0-58521b82eb34" containerName="registry-server" Feb 28 14:19:23 crc kubenswrapper[4897]: I0228 14:19:23.563414 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="c63ce165-1697-4ed8-9276-6fe97714e195" containerName="oc" Feb 28 14:19:23 crc kubenswrapper[4897]: I0228 14:19:23.565929 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8q929" Feb 28 14:19:23 crc kubenswrapper[4897]: I0228 14:19:23.607490 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8q929"] Feb 28 14:19:23 crc kubenswrapper[4897]: I0228 14:19:23.609385 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkp88\" (UniqueName: \"kubernetes.io/projected/816cca37-a6bd-411c-8645-bf18e6a86f6f-kube-api-access-pkp88\") pod \"redhat-marketplace-8q929\" (UID: \"816cca37-a6bd-411c-8645-bf18e6a86f6f\") " pod="openshift-marketplace/redhat-marketplace-8q929" Feb 28 14:19:23 crc kubenswrapper[4897]: I0228 14:19:23.609471 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/816cca37-a6bd-411c-8645-bf18e6a86f6f-utilities\") pod \"redhat-marketplace-8q929\" (UID: \"816cca37-a6bd-411c-8645-bf18e6a86f6f\") " pod="openshift-marketplace/redhat-marketplace-8q929" Feb 28 14:19:23 crc kubenswrapper[4897]: I0228 14:19:23.609585 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/816cca37-a6bd-411c-8645-bf18e6a86f6f-catalog-content\") pod \"redhat-marketplace-8q929\" (UID: \"816cca37-a6bd-411c-8645-bf18e6a86f6f\") " pod="openshift-marketplace/redhat-marketplace-8q929" Feb 28 14:19:23 crc kubenswrapper[4897]: I0228 14:19:23.710415 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/816cca37-a6bd-411c-8645-bf18e6a86f6f-utilities\") pod \"redhat-marketplace-8q929\" (UID: \"816cca37-a6bd-411c-8645-bf18e6a86f6f\") " pod="openshift-marketplace/redhat-marketplace-8q929" Feb 28 14:19:23 crc kubenswrapper[4897]: I0228 14:19:23.710561 4897 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/816cca37-a6bd-411c-8645-bf18e6a86f6f-catalog-content\") pod \"redhat-marketplace-8q929\" (UID: \"816cca37-a6bd-411c-8645-bf18e6a86f6f\") " pod="openshift-marketplace/redhat-marketplace-8q929" Feb 28 14:19:23 crc kubenswrapper[4897]: I0228 14:19:23.710712 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkp88\" (UniqueName: \"kubernetes.io/projected/816cca37-a6bd-411c-8645-bf18e6a86f6f-kube-api-access-pkp88\") pod \"redhat-marketplace-8q929\" (UID: \"816cca37-a6bd-411c-8645-bf18e6a86f6f\") " pod="openshift-marketplace/redhat-marketplace-8q929" Feb 28 14:19:23 crc kubenswrapper[4897]: I0228 14:19:23.710870 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/816cca37-a6bd-411c-8645-bf18e6a86f6f-utilities\") pod \"redhat-marketplace-8q929\" (UID: \"816cca37-a6bd-411c-8645-bf18e6a86f6f\") " pod="openshift-marketplace/redhat-marketplace-8q929" Feb 28 14:19:23 crc kubenswrapper[4897]: I0228 14:19:23.711368 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/816cca37-a6bd-411c-8645-bf18e6a86f6f-catalog-content\") pod \"redhat-marketplace-8q929\" (UID: \"816cca37-a6bd-411c-8645-bf18e6a86f6f\") " pod="openshift-marketplace/redhat-marketplace-8q929" Feb 28 14:19:23 crc kubenswrapper[4897]: I0228 14:19:23.736872 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkp88\" (UniqueName: \"kubernetes.io/projected/816cca37-a6bd-411c-8645-bf18e6a86f6f-kube-api-access-pkp88\") pod \"redhat-marketplace-8q929\" (UID: \"816cca37-a6bd-411c-8645-bf18e6a86f6f\") " pod="openshift-marketplace/redhat-marketplace-8q929" Feb 28 14:19:23 crc kubenswrapper[4897]: I0228 14:19:23.896774 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8q929" Feb 28 14:19:24 crc kubenswrapper[4897]: I0228 14:19:24.455363 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8q929"] Feb 28 14:19:25 crc kubenswrapper[4897]: I0228 14:19:25.304956 4897 generic.go:334] "Generic (PLEG): container finished" podID="816cca37-a6bd-411c-8645-bf18e6a86f6f" containerID="e9fa824979cc4aacdd30f7140ceaac49e738faae1d2b471be1766d5ccdcba9d9" exitCode=0 Feb 28 14:19:25 crc kubenswrapper[4897]: I0228 14:19:25.305030 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8q929" event={"ID":"816cca37-a6bd-411c-8645-bf18e6a86f6f","Type":"ContainerDied","Data":"e9fa824979cc4aacdd30f7140ceaac49e738faae1d2b471be1766d5ccdcba9d9"} Feb 28 14:19:25 crc kubenswrapper[4897]: I0228 14:19:25.305791 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8q929" event={"ID":"816cca37-a6bd-411c-8645-bf18e6a86f6f","Type":"ContainerStarted","Data":"09513a936013ddfcfcda96e7ee589af3e5052a44409bcc68d1c40f5b4cb89e44"} Feb 28 14:19:25 crc kubenswrapper[4897]: E0228 14:19:25.866059 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=4c1235c462902350224ff1dfc90bd9f47b4fe1b4d05c3d5fc542c5c8e38ee80a/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 28 14:19:25 crc kubenswrapper[4897]: E0228 14:19:25.866224 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog 
--cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pkp88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-8q929_openshift-marketplace(816cca37-a6bd-411c-8645-bf18e6a86f6f): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=4c1235c462902350224ff1dfc90bd9f47b4fe1b4d05c3d5fc542c5c8e38ee80a/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 14:19:25 crc kubenswrapper[4897]: E0228 14:19:25.867779 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest 
list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=4c1235c462902350224ff1dfc90bd9f47b4fe1b4d05c3d5fc542c5c8e38ee80a/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-8q929" podUID="816cca37-a6bd-411c-8645-bf18e6a86f6f" Feb 28 14:19:26 crc kubenswrapper[4897]: E0228 14:19:26.317856 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-8q929" podUID="816cca37-a6bd-411c-8645-bf18e6a86f6f" Feb 28 14:19:31 crc kubenswrapper[4897]: I0228 14:19:31.456692 4897 scope.go:117] "RemoveContainer" containerID="badfe283117bca71fcd1d917beb3f6d850e13521f8c84a43efd354fba13df106" Feb 28 14:19:31 crc kubenswrapper[4897]: E0228 14:19:31.458233 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:19:39 crc kubenswrapper[4897]: E0228 14:19:39.015551 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=4c1235c462902350224ff1dfc90bd9f47b4fe1b4d05c3d5fc542c5c8e38ee80a/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 28 14:19:39 crc kubenswrapper[4897]: E0228 14:19:39.016367 4897 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pkp88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-8q929_openshift-marketplace(816cca37-a6bd-411c-8645-bf18e6a86f6f): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=4c1235c462902350224ff1dfc90bd9f47b4fe1b4d05c3d5fc542c5c8e38ee80a/signature-2: status 500 (Internal Server 
Error)" logger="UnhandledError" Feb 28 14:19:39 crc kubenswrapper[4897]: E0228 14:19:39.017999 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=4c1235c462902350224ff1dfc90bd9f47b4fe1b4d05c3d5fc542c5c8e38ee80a/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-8q929" podUID="816cca37-a6bd-411c-8645-bf18e6a86f6f" Feb 28 14:19:45 crc kubenswrapper[4897]: I0228 14:19:45.456930 4897 scope.go:117] "RemoveContainer" containerID="badfe283117bca71fcd1d917beb3f6d850e13521f8c84a43efd354fba13df106" Feb 28 14:19:45 crc kubenswrapper[4897]: E0228 14:19:45.457673 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:19:49 crc kubenswrapper[4897]: E0228 14:19:49.460532 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-8q929" podUID="816cca37-a6bd-411c-8645-bf18e6a86f6f" Feb 28 14:19:57 crc kubenswrapper[4897]: I0228 14:19:57.457237 4897 scope.go:117] "RemoveContainer" containerID="badfe283117bca71fcd1d917beb3f6d850e13521f8c84a43efd354fba13df106" Feb 28 14:19:57 crc kubenswrapper[4897]: E0228 14:19:57.458388 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:20:00 crc kubenswrapper[4897]: I0228 14:20:00.180953 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538140-vhh5d"] Feb 28 14:20:00 crc kubenswrapper[4897]: I0228 14:20:00.183235 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538140-vhh5d" Feb 28 14:20:00 crc kubenswrapper[4897]: I0228 14:20:00.187490 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 14:20:00 crc kubenswrapper[4897]: I0228 14:20:00.187670 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 14:20:00 crc kubenswrapper[4897]: I0228 14:20:00.190382 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 14:20:00 crc kubenswrapper[4897]: I0228 14:20:00.204486 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538140-vhh5d"] Feb 28 14:20:00 crc kubenswrapper[4897]: I0228 14:20:00.241100 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgkvc\" (UniqueName: \"kubernetes.io/projected/d245b55d-0e84-4cd2-b66b-8fad9627d79e-kube-api-access-fgkvc\") pod \"auto-csr-approver-29538140-vhh5d\" (UID: \"d245b55d-0e84-4cd2-b66b-8fad9627d79e\") " pod="openshift-infra/auto-csr-approver-29538140-vhh5d" Feb 28 14:20:00 crc kubenswrapper[4897]: I0228 14:20:00.342417 4897 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-fgkvc\" (UniqueName: \"kubernetes.io/projected/d245b55d-0e84-4cd2-b66b-8fad9627d79e-kube-api-access-fgkvc\") pod \"auto-csr-approver-29538140-vhh5d\" (UID: \"d245b55d-0e84-4cd2-b66b-8fad9627d79e\") " pod="openshift-infra/auto-csr-approver-29538140-vhh5d" Feb 28 14:20:00 crc kubenswrapper[4897]: I0228 14:20:00.361628 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgkvc\" (UniqueName: \"kubernetes.io/projected/d245b55d-0e84-4cd2-b66b-8fad9627d79e-kube-api-access-fgkvc\") pod \"auto-csr-approver-29538140-vhh5d\" (UID: \"d245b55d-0e84-4cd2-b66b-8fad9627d79e\") " pod="openshift-infra/auto-csr-approver-29538140-vhh5d" Feb 28 14:20:00 crc kubenswrapper[4897]: I0228 14:20:00.536227 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538140-vhh5d" Feb 28 14:20:01 crc kubenswrapper[4897]: I0228 14:20:01.034032 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538140-vhh5d"] Feb 28 14:20:01 crc kubenswrapper[4897]: I0228 14:20:01.796535 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538140-vhh5d" event={"ID":"d245b55d-0e84-4cd2-b66b-8fad9627d79e","Type":"ContainerStarted","Data":"ce8ea71757ec6162de7dbe566241d019d5259703daeb4895d18cec2226340945"} Feb 28 14:20:02 crc kubenswrapper[4897]: I0228 14:20:02.806248 4897 generic.go:334] "Generic (PLEG): container finished" podID="d245b55d-0e84-4cd2-b66b-8fad9627d79e" containerID="5c2a0943f32fde54c07661f033a00a93988dd8f04e61cc0c26a703364c66d2b9" exitCode=0 Feb 28 14:20:02 crc kubenswrapper[4897]: I0228 14:20:02.806288 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538140-vhh5d" event={"ID":"d245b55d-0e84-4cd2-b66b-8fad9627d79e","Type":"ContainerDied","Data":"5c2a0943f32fde54c07661f033a00a93988dd8f04e61cc0c26a703364c66d2b9"} Feb 28 14:20:04 crc 
kubenswrapper[4897]: E0228 14:20:04.100475 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=4c1235c462902350224ff1dfc90bd9f47b4fe1b4d05c3d5fc542c5c8e38ee80a/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 28 14:20:04 crc kubenswrapper[4897]: E0228 14:20:04.100872 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pkp88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},Te
rminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-8q929_openshift-marketplace(816cca37-a6bd-411c-8645-bf18e6a86f6f): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=4c1235c462902350224ff1dfc90bd9f47b4fe1b4d05c3d5fc542c5c8e38ee80a/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 14:20:04 crc kubenswrapper[4897]: E0228 14:20:04.102082 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=4c1235c462902350224ff1dfc90bd9f47b4fe1b4d05c3d5fc542c5c8e38ee80a/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-8q929" podUID="816cca37-a6bd-411c-8645-bf18e6a86f6f" Feb 28 14:20:04 crc kubenswrapper[4897]: I0228 14:20:04.265369 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538140-vhh5d" Feb 28 14:20:04 crc kubenswrapper[4897]: I0228 14:20:04.338706 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgkvc\" (UniqueName: \"kubernetes.io/projected/d245b55d-0e84-4cd2-b66b-8fad9627d79e-kube-api-access-fgkvc\") pod \"d245b55d-0e84-4cd2-b66b-8fad9627d79e\" (UID: \"d245b55d-0e84-4cd2-b66b-8fad9627d79e\") " Feb 28 14:20:04 crc kubenswrapper[4897]: I0228 14:20:04.350662 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d245b55d-0e84-4cd2-b66b-8fad9627d79e-kube-api-access-fgkvc" (OuterVolumeSpecName: "kube-api-access-fgkvc") pod "d245b55d-0e84-4cd2-b66b-8fad9627d79e" (UID: "d245b55d-0e84-4cd2-b66b-8fad9627d79e"). InnerVolumeSpecName "kube-api-access-fgkvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:20:04 crc kubenswrapper[4897]: I0228 14:20:04.441814 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgkvc\" (UniqueName: \"kubernetes.io/projected/d245b55d-0e84-4cd2-b66b-8fad9627d79e-kube-api-access-fgkvc\") on node \"crc\" DevicePath \"\"" Feb 28 14:20:04 crc kubenswrapper[4897]: I0228 14:20:04.901454 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538140-vhh5d" event={"ID":"d245b55d-0e84-4cd2-b66b-8fad9627d79e","Type":"ContainerDied","Data":"ce8ea71757ec6162de7dbe566241d019d5259703daeb4895d18cec2226340945"} Feb 28 14:20:04 crc kubenswrapper[4897]: I0228 14:20:04.901497 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce8ea71757ec6162de7dbe566241d019d5259703daeb4895d18cec2226340945" Feb 28 14:20:04 crc kubenswrapper[4897]: I0228 14:20:04.901546 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538140-vhh5d" Feb 28 14:20:05 crc kubenswrapper[4897]: I0228 14:20:05.345335 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538134-vrqtw"] Feb 28 14:20:05 crc kubenswrapper[4897]: I0228 14:20:05.354656 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538134-vrqtw"] Feb 28 14:20:06 crc kubenswrapper[4897]: I0228 14:20:06.473484 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e951afc8-3d1f-41d1-8efc-5cb2a7713b89" path="/var/lib/kubelet/pods/e951afc8-3d1f-41d1-8efc-5cb2a7713b89/volumes" Feb 28 14:20:09 crc kubenswrapper[4897]: I0228 14:20:09.456447 4897 scope.go:117] "RemoveContainer" containerID="badfe283117bca71fcd1d917beb3f6d850e13521f8c84a43efd354fba13df106" Feb 28 14:20:09 crc kubenswrapper[4897]: E0228 14:20:09.457518 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:20:14 crc kubenswrapper[4897]: E0228 14:20:14.462429 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-8q929" podUID="816cca37-a6bd-411c-8645-bf18e6a86f6f" Feb 28 14:20:23 crc kubenswrapper[4897]: I0228 14:20:23.457289 4897 scope.go:117] "RemoveContainer" containerID="badfe283117bca71fcd1d917beb3f6d850e13521f8c84a43efd354fba13df106" Feb 28 14:20:23 crc kubenswrapper[4897]: E0228 14:20:23.458515 4897 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:20:26 crc kubenswrapper[4897]: E0228 14:20:26.467651 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-8q929" podUID="816cca37-a6bd-411c-8645-bf18e6a86f6f" Feb 28 14:20:37 crc kubenswrapper[4897]: I0228 14:20:37.457369 4897 scope.go:117] "RemoveContainer" containerID="badfe283117bca71fcd1d917beb3f6d850e13521f8c84a43efd354fba13df106" Feb 28 14:20:37 crc kubenswrapper[4897]: E0228 14:20:37.458605 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:20:39 crc kubenswrapper[4897]: E0228 14:20:39.458322 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-8q929" podUID="816cca37-a6bd-411c-8645-bf18e6a86f6f" Feb 28 14:20:48 crc kubenswrapper[4897]: I0228 14:20:48.411452 4897 scope.go:117] "RemoveContainer" 
containerID="d8a2b058801c35c0ff4b569521228af8aac3410ed773e221110367fca80ef980" Feb 28 14:20:52 crc kubenswrapper[4897]: I0228 14:20:52.456500 4897 scope.go:117] "RemoveContainer" containerID="badfe283117bca71fcd1d917beb3f6d850e13521f8c84a43efd354fba13df106" Feb 28 14:20:52 crc kubenswrapper[4897]: E0228 14:20:52.457170 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:20:54 crc kubenswrapper[4897]: I0228 14:20:54.496844 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8q929" event={"ID":"816cca37-a6bd-411c-8645-bf18e6a86f6f","Type":"ContainerStarted","Data":"b6b8166ade3992293d44b10f555d57c60e784323d427810b113c1781d9594268"} Feb 28 14:20:55 crc kubenswrapper[4897]: I0228 14:20:55.510975 4897 generic.go:334] "Generic (PLEG): container finished" podID="816cca37-a6bd-411c-8645-bf18e6a86f6f" containerID="b6b8166ade3992293d44b10f555d57c60e784323d427810b113c1781d9594268" exitCode=0 Feb 28 14:20:55 crc kubenswrapper[4897]: I0228 14:20:55.511035 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8q929" event={"ID":"816cca37-a6bd-411c-8645-bf18e6a86f6f","Type":"ContainerDied","Data":"b6b8166ade3992293d44b10f555d57c60e784323d427810b113c1781d9594268"} Feb 28 14:20:57 crc kubenswrapper[4897]: I0228 14:20:57.571843 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8q929" event={"ID":"816cca37-a6bd-411c-8645-bf18e6a86f6f","Type":"ContainerStarted","Data":"073d2ce339c5b108202e2a908f3c1deaed0ec54d223772d76f62a33d9c85a6ff"} Feb 28 14:20:57 crc 
kubenswrapper[4897]: I0228 14:20:57.633622 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8q929" podStartSLOduration=3.951792078 podStartE2EDuration="1m34.633602204s" podCreationTimestamp="2026-02-28 14:19:23 +0000 UTC" firstStartedPulling="2026-02-28 14:19:25.310476725 +0000 UTC m=+3779.552797402" lastFinishedPulling="2026-02-28 14:20:55.992286841 +0000 UTC m=+3870.234607528" observedRunningTime="2026-02-28 14:20:57.606650254 +0000 UTC m=+3871.848970921" watchObservedRunningTime="2026-02-28 14:20:57.633602204 +0000 UTC m=+3871.875922871" Feb 28 14:21:03 crc kubenswrapper[4897]: I0228 14:21:03.456828 4897 scope.go:117] "RemoveContainer" containerID="badfe283117bca71fcd1d917beb3f6d850e13521f8c84a43efd354fba13df106" Feb 28 14:21:03 crc kubenswrapper[4897]: E0228 14:21:03.457894 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:21:03 crc kubenswrapper[4897]: I0228 14:21:03.896913 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8q929" Feb 28 14:21:03 crc kubenswrapper[4897]: I0228 14:21:03.897224 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8q929" Feb 28 14:21:04 crc kubenswrapper[4897]: I0228 14:21:04.024532 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8q929" Feb 28 14:21:04 crc kubenswrapper[4897]: I0228 14:21:04.724439 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-8q929" Feb 28 14:21:04 crc kubenswrapper[4897]: I0228 14:21:04.790751 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8q929"] Feb 28 14:21:06 crc kubenswrapper[4897]: I0228 14:21:06.694441 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8q929" podUID="816cca37-a6bd-411c-8645-bf18e6a86f6f" containerName="registry-server" containerID="cri-o://073d2ce339c5b108202e2a908f3c1deaed0ec54d223772d76f62a33d9c85a6ff" gracePeriod=2 Feb 28 14:21:07 crc kubenswrapper[4897]: I0228 14:21:07.238052 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8q929" Feb 28 14:21:07 crc kubenswrapper[4897]: I0228 14:21:07.416018 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkp88\" (UniqueName: \"kubernetes.io/projected/816cca37-a6bd-411c-8645-bf18e6a86f6f-kube-api-access-pkp88\") pod \"816cca37-a6bd-411c-8645-bf18e6a86f6f\" (UID: \"816cca37-a6bd-411c-8645-bf18e6a86f6f\") " Feb 28 14:21:07 crc kubenswrapper[4897]: I0228 14:21:07.416136 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/816cca37-a6bd-411c-8645-bf18e6a86f6f-catalog-content\") pod \"816cca37-a6bd-411c-8645-bf18e6a86f6f\" (UID: \"816cca37-a6bd-411c-8645-bf18e6a86f6f\") " Feb 28 14:21:07 crc kubenswrapper[4897]: I0228 14:21:07.416359 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/816cca37-a6bd-411c-8645-bf18e6a86f6f-utilities\") pod \"816cca37-a6bd-411c-8645-bf18e6a86f6f\" (UID: \"816cca37-a6bd-411c-8645-bf18e6a86f6f\") " Feb 28 14:21:07 crc kubenswrapper[4897]: I0228 14:21:07.417210 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/816cca37-a6bd-411c-8645-bf18e6a86f6f-utilities" (OuterVolumeSpecName: "utilities") pod "816cca37-a6bd-411c-8645-bf18e6a86f6f" (UID: "816cca37-a6bd-411c-8645-bf18e6a86f6f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:21:07 crc kubenswrapper[4897]: I0228 14:21:07.424689 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/816cca37-a6bd-411c-8645-bf18e6a86f6f-kube-api-access-pkp88" (OuterVolumeSpecName: "kube-api-access-pkp88") pod "816cca37-a6bd-411c-8645-bf18e6a86f6f" (UID: "816cca37-a6bd-411c-8645-bf18e6a86f6f"). InnerVolumeSpecName "kube-api-access-pkp88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:21:07 crc kubenswrapper[4897]: I0228 14:21:07.450143 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/816cca37-a6bd-411c-8645-bf18e6a86f6f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "816cca37-a6bd-411c-8645-bf18e6a86f6f" (UID: "816cca37-a6bd-411c-8645-bf18e6a86f6f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:21:07 crc kubenswrapper[4897]: I0228 14:21:07.519695 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/816cca37-a6bd-411c-8645-bf18e6a86f6f-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 14:21:07 crc kubenswrapper[4897]: I0228 14:21:07.519783 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkp88\" (UniqueName: \"kubernetes.io/projected/816cca37-a6bd-411c-8645-bf18e6a86f6f-kube-api-access-pkp88\") on node \"crc\" DevicePath \"\"" Feb 28 14:21:07 crc kubenswrapper[4897]: I0228 14:21:07.519805 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/816cca37-a6bd-411c-8645-bf18e6a86f6f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 14:21:07 crc kubenswrapper[4897]: I0228 14:21:07.707284 4897 generic.go:334] "Generic (PLEG): container finished" podID="816cca37-a6bd-411c-8645-bf18e6a86f6f" containerID="073d2ce339c5b108202e2a908f3c1deaed0ec54d223772d76f62a33d9c85a6ff" exitCode=0 Feb 28 14:21:07 crc kubenswrapper[4897]: I0228 14:21:07.707360 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8q929" event={"ID":"816cca37-a6bd-411c-8645-bf18e6a86f6f","Type":"ContainerDied","Data":"073d2ce339c5b108202e2a908f3c1deaed0ec54d223772d76f62a33d9c85a6ff"} Feb 28 14:21:07 crc kubenswrapper[4897]: I0228 14:21:07.707814 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8q929" event={"ID":"816cca37-a6bd-411c-8645-bf18e6a86f6f","Type":"ContainerDied","Data":"09513a936013ddfcfcda96e7ee589af3e5052a44409bcc68d1c40f5b4cb89e44"} Feb 28 14:21:07 crc kubenswrapper[4897]: I0228 14:21:07.707852 4897 scope.go:117] "RemoveContainer" containerID="073d2ce339c5b108202e2a908f3c1deaed0ec54d223772d76f62a33d9c85a6ff" Feb 28 14:21:07 crc kubenswrapper[4897]: I0228 
14:21:07.707458 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8q929" Feb 28 14:21:07 crc kubenswrapper[4897]: I0228 14:21:07.741741 4897 scope.go:117] "RemoveContainer" containerID="b6b8166ade3992293d44b10f555d57c60e784323d427810b113c1781d9594268" Feb 28 14:21:07 crc kubenswrapper[4897]: I0228 14:21:07.752523 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8q929"] Feb 28 14:21:07 crc kubenswrapper[4897]: I0228 14:21:07.774021 4897 scope.go:117] "RemoveContainer" containerID="e9fa824979cc4aacdd30f7140ceaac49e738faae1d2b471be1766d5ccdcba9d9" Feb 28 14:21:07 crc kubenswrapper[4897]: I0228 14:21:07.777415 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8q929"] Feb 28 14:21:07 crc kubenswrapper[4897]: I0228 14:21:07.823604 4897 scope.go:117] "RemoveContainer" containerID="073d2ce339c5b108202e2a908f3c1deaed0ec54d223772d76f62a33d9c85a6ff" Feb 28 14:21:07 crc kubenswrapper[4897]: E0228 14:21:07.824237 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"073d2ce339c5b108202e2a908f3c1deaed0ec54d223772d76f62a33d9c85a6ff\": container with ID starting with 073d2ce339c5b108202e2a908f3c1deaed0ec54d223772d76f62a33d9c85a6ff not found: ID does not exist" containerID="073d2ce339c5b108202e2a908f3c1deaed0ec54d223772d76f62a33d9c85a6ff" Feb 28 14:21:07 crc kubenswrapper[4897]: I0228 14:21:07.824310 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"073d2ce339c5b108202e2a908f3c1deaed0ec54d223772d76f62a33d9c85a6ff"} err="failed to get container status \"073d2ce339c5b108202e2a908f3c1deaed0ec54d223772d76f62a33d9c85a6ff\": rpc error: code = NotFound desc = could not find container \"073d2ce339c5b108202e2a908f3c1deaed0ec54d223772d76f62a33d9c85a6ff\": container with ID starting with 
073d2ce339c5b108202e2a908f3c1deaed0ec54d223772d76f62a33d9c85a6ff not found: ID does not exist" Feb 28 14:21:07 crc kubenswrapper[4897]: I0228 14:21:07.824375 4897 scope.go:117] "RemoveContainer" containerID="b6b8166ade3992293d44b10f555d57c60e784323d427810b113c1781d9594268" Feb 28 14:21:07 crc kubenswrapper[4897]: E0228 14:21:07.825019 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6b8166ade3992293d44b10f555d57c60e784323d427810b113c1781d9594268\": container with ID starting with b6b8166ade3992293d44b10f555d57c60e784323d427810b113c1781d9594268 not found: ID does not exist" containerID="b6b8166ade3992293d44b10f555d57c60e784323d427810b113c1781d9594268" Feb 28 14:21:07 crc kubenswrapper[4897]: I0228 14:21:07.825085 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6b8166ade3992293d44b10f555d57c60e784323d427810b113c1781d9594268"} err="failed to get container status \"b6b8166ade3992293d44b10f555d57c60e784323d427810b113c1781d9594268\": rpc error: code = NotFound desc = could not find container \"b6b8166ade3992293d44b10f555d57c60e784323d427810b113c1781d9594268\": container with ID starting with b6b8166ade3992293d44b10f555d57c60e784323d427810b113c1781d9594268 not found: ID does not exist" Feb 28 14:21:07 crc kubenswrapper[4897]: I0228 14:21:07.825121 4897 scope.go:117] "RemoveContainer" containerID="e9fa824979cc4aacdd30f7140ceaac49e738faae1d2b471be1766d5ccdcba9d9" Feb 28 14:21:07 crc kubenswrapper[4897]: E0228 14:21:07.825537 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9fa824979cc4aacdd30f7140ceaac49e738faae1d2b471be1766d5ccdcba9d9\": container with ID starting with e9fa824979cc4aacdd30f7140ceaac49e738faae1d2b471be1766d5ccdcba9d9 not found: ID does not exist" containerID="e9fa824979cc4aacdd30f7140ceaac49e738faae1d2b471be1766d5ccdcba9d9" Feb 28 14:21:07 crc 
kubenswrapper[4897]: I0228 14:21:07.825565 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9fa824979cc4aacdd30f7140ceaac49e738faae1d2b471be1766d5ccdcba9d9"} err="failed to get container status \"e9fa824979cc4aacdd30f7140ceaac49e738faae1d2b471be1766d5ccdcba9d9\": rpc error: code = NotFound desc = could not find container \"e9fa824979cc4aacdd30f7140ceaac49e738faae1d2b471be1766d5ccdcba9d9\": container with ID starting with e9fa824979cc4aacdd30f7140ceaac49e738faae1d2b471be1766d5ccdcba9d9 not found: ID does not exist" Feb 28 14:21:08 crc kubenswrapper[4897]: I0228 14:21:08.469951 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="816cca37-a6bd-411c-8645-bf18e6a86f6f" path="/var/lib/kubelet/pods/816cca37-a6bd-411c-8645-bf18e6a86f6f/volumes" Feb 28 14:21:14 crc kubenswrapper[4897]: I0228 14:21:14.456436 4897 scope.go:117] "RemoveContainer" containerID="badfe283117bca71fcd1d917beb3f6d850e13521f8c84a43efd354fba13df106" Feb 28 14:21:14 crc kubenswrapper[4897]: E0228 14:21:14.457125 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:21:28 crc kubenswrapper[4897]: I0228 14:21:28.457026 4897 scope.go:117] "RemoveContainer" containerID="badfe283117bca71fcd1d917beb3f6d850e13521f8c84a43efd354fba13df106" Feb 28 14:21:28 crc kubenswrapper[4897]: E0228 14:21:28.458147 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:21:40 crc kubenswrapper[4897]: I0228 14:21:40.457355 4897 scope.go:117] "RemoveContainer" containerID="badfe283117bca71fcd1d917beb3f6d850e13521f8c84a43efd354fba13df106" Feb 28 14:21:40 crc kubenswrapper[4897]: E0228 14:21:40.458417 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:21:53 crc kubenswrapper[4897]: I0228 14:21:53.457620 4897 scope.go:117] "RemoveContainer" containerID="badfe283117bca71fcd1d917beb3f6d850e13521f8c84a43efd354fba13df106" Feb 28 14:21:53 crc kubenswrapper[4897]: E0228 14:21:53.458432 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:22:00 crc kubenswrapper[4897]: I0228 14:22:00.183527 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538142-gg955"] Feb 28 14:22:00 crc kubenswrapper[4897]: E0228 14:22:00.185138 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="816cca37-a6bd-411c-8645-bf18e6a86f6f" containerName="extract-content" Feb 28 14:22:00 crc kubenswrapper[4897]: I0228 
14:22:00.185172 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="816cca37-a6bd-411c-8645-bf18e6a86f6f" containerName="extract-content" Feb 28 14:22:00 crc kubenswrapper[4897]: E0228 14:22:00.185209 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="816cca37-a6bd-411c-8645-bf18e6a86f6f" containerName="registry-server" Feb 28 14:22:00 crc kubenswrapper[4897]: I0228 14:22:00.185227 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="816cca37-a6bd-411c-8645-bf18e6a86f6f" containerName="registry-server" Feb 28 14:22:00 crc kubenswrapper[4897]: E0228 14:22:00.185284 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="816cca37-a6bd-411c-8645-bf18e6a86f6f" containerName="extract-utilities" Feb 28 14:22:00 crc kubenswrapper[4897]: I0228 14:22:00.185304 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="816cca37-a6bd-411c-8645-bf18e6a86f6f" containerName="extract-utilities" Feb 28 14:22:00 crc kubenswrapper[4897]: E0228 14:22:00.185357 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d245b55d-0e84-4cd2-b66b-8fad9627d79e" containerName="oc" Feb 28 14:22:00 crc kubenswrapper[4897]: I0228 14:22:00.185373 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="d245b55d-0e84-4cd2-b66b-8fad9627d79e" containerName="oc" Feb 28 14:22:00 crc kubenswrapper[4897]: I0228 14:22:00.185865 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="d245b55d-0e84-4cd2-b66b-8fad9627d79e" containerName="oc" Feb 28 14:22:00 crc kubenswrapper[4897]: I0228 14:22:00.185952 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="816cca37-a6bd-411c-8645-bf18e6a86f6f" containerName="registry-server" Feb 28 14:22:00 crc kubenswrapper[4897]: I0228 14:22:00.187479 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538142-gg955" Feb 28 14:22:00 crc kubenswrapper[4897]: I0228 14:22:00.193760 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 14:22:00 crc kubenswrapper[4897]: I0228 14:22:00.193819 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 14:22:00 crc kubenswrapper[4897]: I0228 14:22:00.194026 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 14:22:00 crc kubenswrapper[4897]: I0228 14:22:00.208748 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538142-gg955"] Feb 28 14:22:00 crc kubenswrapper[4897]: I0228 14:22:00.278817 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h7gd\" (UniqueName: \"kubernetes.io/projected/f7689c70-19a2-422f-8fd8-9f736a27052c-kube-api-access-9h7gd\") pod \"auto-csr-approver-29538142-gg955\" (UID: \"f7689c70-19a2-422f-8fd8-9f736a27052c\") " pod="openshift-infra/auto-csr-approver-29538142-gg955" Feb 28 14:22:00 crc kubenswrapper[4897]: I0228 14:22:00.379826 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9h7gd\" (UniqueName: \"kubernetes.io/projected/f7689c70-19a2-422f-8fd8-9f736a27052c-kube-api-access-9h7gd\") pod \"auto-csr-approver-29538142-gg955\" (UID: \"f7689c70-19a2-422f-8fd8-9f736a27052c\") " pod="openshift-infra/auto-csr-approver-29538142-gg955" Feb 28 14:22:00 crc kubenswrapper[4897]: I0228 14:22:00.404257 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h7gd\" (UniqueName: \"kubernetes.io/projected/f7689c70-19a2-422f-8fd8-9f736a27052c-kube-api-access-9h7gd\") pod \"auto-csr-approver-29538142-gg955\" (UID: \"f7689c70-19a2-422f-8fd8-9f736a27052c\") " 
pod="openshift-infra/auto-csr-approver-29538142-gg955" Feb 28 14:22:00 crc kubenswrapper[4897]: I0228 14:22:00.526603 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538142-gg955" Feb 28 14:22:01 crc kubenswrapper[4897]: I0228 14:22:01.023748 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538142-gg955"] Feb 28 14:22:01 crc kubenswrapper[4897]: I0228 14:22:01.379227 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538142-gg955" event={"ID":"f7689c70-19a2-422f-8fd8-9f736a27052c","Type":"ContainerStarted","Data":"55a7819bc4e9b164db136f871285695e866b1488f91618fe78739aa6b3f2081a"} Feb 28 14:22:03 crc kubenswrapper[4897]: I0228 14:22:03.401546 4897 generic.go:334] "Generic (PLEG): container finished" podID="f7689c70-19a2-422f-8fd8-9f736a27052c" containerID="6d73678c8b345991074429112aa1c425013f7c2dcf7af4e25c2a6a7ac0156e23" exitCode=0 Feb 28 14:22:03 crc kubenswrapper[4897]: I0228 14:22:03.401757 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538142-gg955" event={"ID":"f7689c70-19a2-422f-8fd8-9f736a27052c","Type":"ContainerDied","Data":"6d73678c8b345991074429112aa1c425013f7c2dcf7af4e25c2a6a7ac0156e23"} Feb 28 14:22:04 crc kubenswrapper[4897]: I0228 14:22:04.886839 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538142-gg955" Feb 28 14:22:04 crc kubenswrapper[4897]: I0228 14:22:04.991601 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9h7gd\" (UniqueName: \"kubernetes.io/projected/f7689c70-19a2-422f-8fd8-9f736a27052c-kube-api-access-9h7gd\") pod \"f7689c70-19a2-422f-8fd8-9f736a27052c\" (UID: \"f7689c70-19a2-422f-8fd8-9f736a27052c\") " Feb 28 14:22:05 crc kubenswrapper[4897]: I0228 14:22:05.433637 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538142-gg955" event={"ID":"f7689c70-19a2-422f-8fd8-9f736a27052c","Type":"ContainerDied","Data":"55a7819bc4e9b164db136f871285695e866b1488f91618fe78739aa6b3f2081a"} Feb 28 14:22:05 crc kubenswrapper[4897]: I0228 14:22:05.433674 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55a7819bc4e9b164db136f871285695e866b1488f91618fe78739aa6b3f2081a" Feb 28 14:22:05 crc kubenswrapper[4897]: I0228 14:22:05.433755 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538142-gg955" Feb 28 14:22:05 crc kubenswrapper[4897]: I0228 14:22:05.457029 4897 scope.go:117] "RemoveContainer" containerID="badfe283117bca71fcd1d917beb3f6d850e13521f8c84a43efd354fba13df106" Feb 28 14:22:05 crc kubenswrapper[4897]: E0228 14:22:05.457357 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:22:05 crc kubenswrapper[4897]: I0228 14:22:05.693020 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7689c70-19a2-422f-8fd8-9f736a27052c-kube-api-access-9h7gd" (OuterVolumeSpecName: "kube-api-access-9h7gd") pod "f7689c70-19a2-422f-8fd8-9f736a27052c" (UID: "f7689c70-19a2-422f-8fd8-9f736a27052c"). InnerVolumeSpecName "kube-api-access-9h7gd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:22:05 crc kubenswrapper[4897]: I0228 14:22:05.705886 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9h7gd\" (UniqueName: \"kubernetes.io/projected/f7689c70-19a2-422f-8fd8-9f736a27052c-kube-api-access-9h7gd\") on node \"crc\" DevicePath \"\"" Feb 28 14:22:05 crc kubenswrapper[4897]: I0228 14:22:05.958727 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538136-lbgb2"] Feb 28 14:22:05 crc kubenswrapper[4897]: I0228 14:22:05.967705 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538136-lbgb2"] Feb 28 14:22:06 crc kubenswrapper[4897]: I0228 14:22:06.477637 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="669b6241-8415-4133-a6a6-382fe77c0aa9" path="/var/lib/kubelet/pods/669b6241-8415-4133-a6a6-382fe77c0aa9/volumes" Feb 28 14:22:20 crc kubenswrapper[4897]: I0228 14:22:20.457152 4897 scope.go:117] "RemoveContainer" containerID="badfe283117bca71fcd1d917beb3f6d850e13521f8c84a43efd354fba13df106" Feb 28 14:22:20 crc kubenswrapper[4897]: E0228 14:22:20.458480 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:22:29 crc kubenswrapper[4897]: I0228 14:22:29.922016 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dlfnh"] Feb 28 14:22:29 crc kubenswrapper[4897]: E0228 14:22:29.923261 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7689c70-19a2-422f-8fd8-9f736a27052c" containerName="oc" Feb 28 14:22:29 crc 
kubenswrapper[4897]: I0228 14:22:29.923278 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7689c70-19a2-422f-8fd8-9f736a27052c" containerName="oc" Feb 28 14:22:29 crc kubenswrapper[4897]: I0228 14:22:29.923571 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7689c70-19a2-422f-8fd8-9f736a27052c" containerName="oc" Feb 28 14:22:29 crc kubenswrapper[4897]: I0228 14:22:29.925611 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dlfnh" Feb 28 14:22:29 crc kubenswrapper[4897]: I0228 14:22:29.955853 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dlfnh"] Feb 28 14:22:29 crc kubenswrapper[4897]: I0228 14:22:29.995460 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6rsj\" (UniqueName: \"kubernetes.io/projected/d4bae349-c9e5-4b71-a1df-52a49881626e-kube-api-access-h6rsj\") pod \"redhat-operators-dlfnh\" (UID: \"d4bae349-c9e5-4b71-a1df-52a49881626e\") " pod="openshift-marketplace/redhat-operators-dlfnh" Feb 28 14:22:29 crc kubenswrapper[4897]: I0228 14:22:29.995516 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4bae349-c9e5-4b71-a1df-52a49881626e-utilities\") pod \"redhat-operators-dlfnh\" (UID: \"d4bae349-c9e5-4b71-a1df-52a49881626e\") " pod="openshift-marketplace/redhat-operators-dlfnh" Feb 28 14:22:29 crc kubenswrapper[4897]: I0228 14:22:29.995535 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4bae349-c9e5-4b71-a1df-52a49881626e-catalog-content\") pod \"redhat-operators-dlfnh\" (UID: \"d4bae349-c9e5-4b71-a1df-52a49881626e\") " pod="openshift-marketplace/redhat-operators-dlfnh" Feb 28 14:22:30 crc kubenswrapper[4897]: I0228 
14:22:30.097514 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6rsj\" (UniqueName: \"kubernetes.io/projected/d4bae349-c9e5-4b71-a1df-52a49881626e-kube-api-access-h6rsj\") pod \"redhat-operators-dlfnh\" (UID: \"d4bae349-c9e5-4b71-a1df-52a49881626e\") " pod="openshift-marketplace/redhat-operators-dlfnh" Feb 28 14:22:30 crc kubenswrapper[4897]: I0228 14:22:30.097579 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4bae349-c9e5-4b71-a1df-52a49881626e-utilities\") pod \"redhat-operators-dlfnh\" (UID: \"d4bae349-c9e5-4b71-a1df-52a49881626e\") " pod="openshift-marketplace/redhat-operators-dlfnh" Feb 28 14:22:30 crc kubenswrapper[4897]: I0228 14:22:30.097599 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4bae349-c9e5-4b71-a1df-52a49881626e-catalog-content\") pod \"redhat-operators-dlfnh\" (UID: \"d4bae349-c9e5-4b71-a1df-52a49881626e\") " pod="openshift-marketplace/redhat-operators-dlfnh" Feb 28 14:22:30 crc kubenswrapper[4897]: I0228 14:22:30.098325 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4bae349-c9e5-4b71-a1df-52a49881626e-utilities\") pod \"redhat-operators-dlfnh\" (UID: \"d4bae349-c9e5-4b71-a1df-52a49881626e\") " pod="openshift-marketplace/redhat-operators-dlfnh" Feb 28 14:22:30 crc kubenswrapper[4897]: I0228 14:22:30.098600 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4bae349-c9e5-4b71-a1df-52a49881626e-catalog-content\") pod \"redhat-operators-dlfnh\" (UID: \"d4bae349-c9e5-4b71-a1df-52a49881626e\") " pod="openshift-marketplace/redhat-operators-dlfnh" Feb 28 14:22:30 crc kubenswrapper[4897]: I0228 14:22:30.601205 4897 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-h6rsj\" (UniqueName: \"kubernetes.io/projected/d4bae349-c9e5-4b71-a1df-52a49881626e-kube-api-access-h6rsj\") pod \"redhat-operators-dlfnh\" (UID: \"d4bae349-c9e5-4b71-a1df-52a49881626e\") " pod="openshift-marketplace/redhat-operators-dlfnh" Feb 28 14:22:30 crc kubenswrapper[4897]: I0228 14:22:30.851577 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dlfnh" Feb 28 14:22:31 crc kubenswrapper[4897]: I0228 14:22:31.326543 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dlfnh"] Feb 28 14:22:31 crc kubenswrapper[4897]: I0228 14:22:31.753775 4897 generic.go:334] "Generic (PLEG): container finished" podID="d4bae349-c9e5-4b71-a1df-52a49881626e" containerID="c49d57442b8e980c11303f3da16297541576a930ad922868e701eaa62bde7675" exitCode=0 Feb 28 14:22:31 crc kubenswrapper[4897]: I0228 14:22:31.753820 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dlfnh" event={"ID":"d4bae349-c9e5-4b71-a1df-52a49881626e","Type":"ContainerDied","Data":"c49d57442b8e980c11303f3da16297541576a930ad922868e701eaa62bde7675"} Feb 28 14:22:31 crc kubenswrapper[4897]: I0228 14:22:31.753846 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dlfnh" event={"ID":"d4bae349-c9e5-4b71-a1df-52a49881626e","Type":"ContainerStarted","Data":"bcca3f8cec672e1249c33b4214cadb76e344582c0e783762e97f698e1a27eb79"} Feb 28 14:22:32 crc kubenswrapper[4897]: E0228 14:22:32.434805 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" 
image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 28 14:22:32 crc kubenswrapper[4897]: E0228 14:22:32.435865 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h6rsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-dlfnh_openshift-marketplace(d4bae349-c9e5-4b71-a1df-52a49881626e): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 14:22:32 crc kubenswrapper[4897]: E0228 14:22:32.437279 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-operators-dlfnh" podUID="d4bae349-c9e5-4b71-a1df-52a49881626e" Feb 28 14:22:32 crc kubenswrapper[4897]: E0228 14:22:32.768754 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-dlfnh" podUID="d4bae349-c9e5-4b71-a1df-52a49881626e" Feb 28 14:22:35 crc kubenswrapper[4897]: I0228 14:22:35.456388 4897 scope.go:117] "RemoveContainer" containerID="badfe283117bca71fcd1d917beb3f6d850e13521f8c84a43efd354fba13df106" Feb 28 14:22:35 crc kubenswrapper[4897]: I0228 14:22:35.798874 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerStarted","Data":"26e1c5476cdd030eed7e2e4ba0b09eb958879e72a61c74d8632709a40cf9b234"} Feb 28 14:22:46 crc kubenswrapper[4897]: I0228 14:22:46.954119 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dlfnh" 
event={"ID":"d4bae349-c9e5-4b71-a1df-52a49881626e","Type":"ContainerStarted","Data":"5f683cc6a3f8b9550afcbecbf48b7fd84e2a65533eb1e3f832f712d82a7897ae"} Feb 28 14:22:48 crc kubenswrapper[4897]: I0228 14:22:48.560076 4897 scope.go:117] "RemoveContainer" containerID="84d9aec33f4421010c9a51e9295a8dca933ee89ea8bc866b34a7ffaafa69cf44" Feb 28 14:22:48 crc kubenswrapper[4897]: I0228 14:22:48.982963 4897 generic.go:334] "Generic (PLEG): container finished" podID="d4bae349-c9e5-4b71-a1df-52a49881626e" containerID="5f683cc6a3f8b9550afcbecbf48b7fd84e2a65533eb1e3f832f712d82a7897ae" exitCode=0 Feb 28 14:22:48 crc kubenswrapper[4897]: I0228 14:22:48.983021 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dlfnh" event={"ID":"d4bae349-c9e5-4b71-a1df-52a49881626e","Type":"ContainerDied","Data":"5f683cc6a3f8b9550afcbecbf48b7fd84e2a65533eb1e3f832f712d82a7897ae"} Feb 28 14:22:48 crc kubenswrapper[4897]: I0228 14:22:48.986933 4897 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 14:22:49 crc kubenswrapper[4897]: I0228 14:22:49.998689 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dlfnh" event={"ID":"d4bae349-c9e5-4b71-a1df-52a49881626e","Type":"ContainerStarted","Data":"84db2938b278d49d69b53f67553e9d22de7db649bce9f4f36af6e30a3e645bb3"} Feb 28 14:22:50 crc kubenswrapper[4897]: I0228 14:22:50.045018 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dlfnh" podStartSLOduration=3.37938232 podStartE2EDuration="21.044989002s" podCreationTimestamp="2026-02-28 14:22:29 +0000 UTC" firstStartedPulling="2026-02-28 14:22:31.757479796 +0000 UTC m=+3965.999800483" lastFinishedPulling="2026-02-28 14:22:49.423086498 +0000 UTC m=+3983.665407165" observedRunningTime="2026-02-28 14:22:50.028550244 +0000 UTC m=+3984.270870901" watchObservedRunningTime="2026-02-28 14:22:50.044989002 +0000 
UTC m=+3984.287309699" Feb 28 14:22:50 crc kubenswrapper[4897]: I0228 14:22:50.889883 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dlfnh" Feb 28 14:22:50 crc kubenswrapper[4897]: I0228 14:22:50.891432 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dlfnh" Feb 28 14:22:51 crc kubenswrapper[4897]: I0228 14:22:51.963496 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dlfnh" podUID="d4bae349-c9e5-4b71-a1df-52a49881626e" containerName="registry-server" probeResult="failure" output=< Feb 28 14:22:51 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 28 14:22:51 crc kubenswrapper[4897]: > Feb 28 14:23:01 crc kubenswrapper[4897]: I0228 14:23:01.936183 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dlfnh" podUID="d4bae349-c9e5-4b71-a1df-52a49881626e" containerName="registry-server" probeResult="failure" output=< Feb 28 14:23:01 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 28 14:23:01 crc kubenswrapper[4897]: > Feb 28 14:23:10 crc kubenswrapper[4897]: I0228 14:23:10.936523 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dlfnh" Feb 28 14:23:10 crc kubenswrapper[4897]: I0228 14:23:10.999790 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dlfnh" Feb 28 14:23:11 crc kubenswrapper[4897]: I0228 14:23:11.178929 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dlfnh"] Feb 28 14:23:12 crc kubenswrapper[4897]: I0228 14:23:12.240538 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dlfnh" 
podUID="d4bae349-c9e5-4b71-a1df-52a49881626e" containerName="registry-server" containerID="cri-o://84db2938b278d49d69b53f67553e9d22de7db649bce9f4f36af6e30a3e645bb3" gracePeriod=2 Feb 28 14:23:12 crc kubenswrapper[4897]: I0228 14:23:12.823853 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dlfnh" Feb 28 14:23:12 crc kubenswrapper[4897]: I0228 14:23:12.930652 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4bae349-c9e5-4b71-a1df-52a49881626e-catalog-content\") pod \"d4bae349-c9e5-4b71-a1df-52a49881626e\" (UID: \"d4bae349-c9e5-4b71-a1df-52a49881626e\") " Feb 28 14:23:12 crc kubenswrapper[4897]: I0228 14:23:12.930729 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6rsj\" (UniqueName: \"kubernetes.io/projected/d4bae349-c9e5-4b71-a1df-52a49881626e-kube-api-access-h6rsj\") pod \"d4bae349-c9e5-4b71-a1df-52a49881626e\" (UID: \"d4bae349-c9e5-4b71-a1df-52a49881626e\") " Feb 28 14:23:12 crc kubenswrapper[4897]: I0228 14:23:12.930783 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4bae349-c9e5-4b71-a1df-52a49881626e-utilities\") pod \"d4bae349-c9e5-4b71-a1df-52a49881626e\" (UID: \"d4bae349-c9e5-4b71-a1df-52a49881626e\") " Feb 28 14:23:12 crc kubenswrapper[4897]: I0228 14:23:12.932114 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4bae349-c9e5-4b71-a1df-52a49881626e-utilities" (OuterVolumeSpecName: "utilities") pod "d4bae349-c9e5-4b71-a1df-52a49881626e" (UID: "d4bae349-c9e5-4b71-a1df-52a49881626e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:23:12 crc kubenswrapper[4897]: I0228 14:23:12.990477 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4bae349-c9e5-4b71-a1df-52a49881626e-kube-api-access-h6rsj" (OuterVolumeSpecName: "kube-api-access-h6rsj") pod "d4bae349-c9e5-4b71-a1df-52a49881626e" (UID: "d4bae349-c9e5-4b71-a1df-52a49881626e"). InnerVolumeSpecName "kube-api-access-h6rsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:23:13 crc kubenswrapper[4897]: I0228 14:23:13.033663 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6rsj\" (UniqueName: \"kubernetes.io/projected/d4bae349-c9e5-4b71-a1df-52a49881626e-kube-api-access-h6rsj\") on node \"crc\" DevicePath \"\"" Feb 28 14:23:13 crc kubenswrapper[4897]: I0228 14:23:13.033699 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4bae349-c9e5-4b71-a1df-52a49881626e-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 14:23:13 crc kubenswrapper[4897]: I0228 14:23:13.064278 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4bae349-c9e5-4b71-a1df-52a49881626e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4bae349-c9e5-4b71-a1df-52a49881626e" (UID: "d4bae349-c9e5-4b71-a1df-52a49881626e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:23:13 crc kubenswrapper[4897]: I0228 14:23:13.136554 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4bae349-c9e5-4b71-a1df-52a49881626e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 14:23:13 crc kubenswrapper[4897]: I0228 14:23:13.253193 4897 generic.go:334] "Generic (PLEG): container finished" podID="d4bae349-c9e5-4b71-a1df-52a49881626e" containerID="84db2938b278d49d69b53f67553e9d22de7db649bce9f4f36af6e30a3e645bb3" exitCode=0 Feb 28 14:23:13 crc kubenswrapper[4897]: I0228 14:23:13.253249 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dlfnh" Feb 28 14:23:13 crc kubenswrapper[4897]: I0228 14:23:13.253254 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dlfnh" event={"ID":"d4bae349-c9e5-4b71-a1df-52a49881626e","Type":"ContainerDied","Data":"84db2938b278d49d69b53f67553e9d22de7db649bce9f4f36af6e30a3e645bb3"} Feb 28 14:23:13 crc kubenswrapper[4897]: I0228 14:23:13.253355 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dlfnh" event={"ID":"d4bae349-c9e5-4b71-a1df-52a49881626e","Type":"ContainerDied","Data":"bcca3f8cec672e1249c33b4214cadb76e344582c0e783762e97f698e1a27eb79"} Feb 28 14:23:13 crc kubenswrapper[4897]: I0228 14:23:13.253377 4897 scope.go:117] "RemoveContainer" containerID="84db2938b278d49d69b53f67553e9d22de7db649bce9f4f36af6e30a3e645bb3" Feb 28 14:23:13 crc kubenswrapper[4897]: I0228 14:23:13.297555 4897 scope.go:117] "RemoveContainer" containerID="5f683cc6a3f8b9550afcbecbf48b7fd84e2a65533eb1e3f832f712d82a7897ae" Feb 28 14:23:13 crc kubenswrapper[4897]: I0228 14:23:13.297852 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dlfnh"] Feb 28 14:23:13 crc kubenswrapper[4897]: I0228 
14:23:13.315221 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dlfnh"] Feb 28 14:23:13 crc kubenswrapper[4897]: I0228 14:23:13.331542 4897 scope.go:117] "RemoveContainer" containerID="c49d57442b8e980c11303f3da16297541576a930ad922868e701eaa62bde7675" Feb 28 14:23:13 crc kubenswrapper[4897]: I0228 14:23:13.376023 4897 scope.go:117] "RemoveContainer" containerID="84db2938b278d49d69b53f67553e9d22de7db649bce9f4f36af6e30a3e645bb3" Feb 28 14:23:13 crc kubenswrapper[4897]: E0228 14:23:13.376543 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84db2938b278d49d69b53f67553e9d22de7db649bce9f4f36af6e30a3e645bb3\": container with ID starting with 84db2938b278d49d69b53f67553e9d22de7db649bce9f4f36af6e30a3e645bb3 not found: ID does not exist" containerID="84db2938b278d49d69b53f67553e9d22de7db649bce9f4f36af6e30a3e645bb3" Feb 28 14:23:13 crc kubenswrapper[4897]: I0228 14:23:13.376609 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84db2938b278d49d69b53f67553e9d22de7db649bce9f4f36af6e30a3e645bb3"} err="failed to get container status \"84db2938b278d49d69b53f67553e9d22de7db649bce9f4f36af6e30a3e645bb3\": rpc error: code = NotFound desc = could not find container \"84db2938b278d49d69b53f67553e9d22de7db649bce9f4f36af6e30a3e645bb3\": container with ID starting with 84db2938b278d49d69b53f67553e9d22de7db649bce9f4f36af6e30a3e645bb3 not found: ID does not exist" Feb 28 14:23:13 crc kubenswrapper[4897]: I0228 14:23:13.376639 4897 scope.go:117] "RemoveContainer" containerID="5f683cc6a3f8b9550afcbecbf48b7fd84e2a65533eb1e3f832f712d82a7897ae" Feb 28 14:23:13 crc kubenswrapper[4897]: E0228 14:23:13.377158 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f683cc6a3f8b9550afcbecbf48b7fd84e2a65533eb1e3f832f712d82a7897ae\": container with ID 
starting with 5f683cc6a3f8b9550afcbecbf48b7fd84e2a65533eb1e3f832f712d82a7897ae not found: ID does not exist" containerID="5f683cc6a3f8b9550afcbecbf48b7fd84e2a65533eb1e3f832f712d82a7897ae" Feb 28 14:23:13 crc kubenswrapper[4897]: I0228 14:23:13.377225 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f683cc6a3f8b9550afcbecbf48b7fd84e2a65533eb1e3f832f712d82a7897ae"} err="failed to get container status \"5f683cc6a3f8b9550afcbecbf48b7fd84e2a65533eb1e3f832f712d82a7897ae\": rpc error: code = NotFound desc = could not find container \"5f683cc6a3f8b9550afcbecbf48b7fd84e2a65533eb1e3f832f712d82a7897ae\": container with ID starting with 5f683cc6a3f8b9550afcbecbf48b7fd84e2a65533eb1e3f832f712d82a7897ae not found: ID does not exist" Feb 28 14:23:13 crc kubenswrapper[4897]: I0228 14:23:13.377274 4897 scope.go:117] "RemoveContainer" containerID="c49d57442b8e980c11303f3da16297541576a930ad922868e701eaa62bde7675" Feb 28 14:23:13 crc kubenswrapper[4897]: E0228 14:23:13.377706 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c49d57442b8e980c11303f3da16297541576a930ad922868e701eaa62bde7675\": container with ID starting with c49d57442b8e980c11303f3da16297541576a930ad922868e701eaa62bde7675 not found: ID does not exist" containerID="c49d57442b8e980c11303f3da16297541576a930ad922868e701eaa62bde7675" Feb 28 14:23:13 crc kubenswrapper[4897]: I0228 14:23:13.377766 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c49d57442b8e980c11303f3da16297541576a930ad922868e701eaa62bde7675"} err="failed to get container status \"c49d57442b8e980c11303f3da16297541576a930ad922868e701eaa62bde7675\": rpc error: code = NotFound desc = could not find container \"c49d57442b8e980c11303f3da16297541576a930ad922868e701eaa62bde7675\": container with ID starting with c49d57442b8e980c11303f3da16297541576a930ad922868e701eaa62bde7675 not found: 
ID does not exist" Feb 28 14:23:14 crc kubenswrapper[4897]: I0228 14:23:14.475224 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4bae349-c9e5-4b71-a1df-52a49881626e" path="/var/lib/kubelet/pods/d4bae349-c9e5-4b71-a1df-52a49881626e/volumes" Feb 28 14:24:00 crc kubenswrapper[4897]: I0228 14:24:00.176275 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538144-5qbb6"] Feb 28 14:24:00 crc kubenswrapper[4897]: E0228 14:24:00.177537 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4bae349-c9e5-4b71-a1df-52a49881626e" containerName="extract-content" Feb 28 14:24:00 crc kubenswrapper[4897]: I0228 14:24:00.177559 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4bae349-c9e5-4b71-a1df-52a49881626e" containerName="extract-content" Feb 28 14:24:00 crc kubenswrapper[4897]: E0228 14:24:00.177586 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4bae349-c9e5-4b71-a1df-52a49881626e" containerName="registry-server" Feb 28 14:24:00 crc kubenswrapper[4897]: I0228 14:24:00.177601 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4bae349-c9e5-4b71-a1df-52a49881626e" containerName="registry-server" Feb 28 14:24:00 crc kubenswrapper[4897]: E0228 14:24:00.177656 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4bae349-c9e5-4b71-a1df-52a49881626e" containerName="extract-utilities" Feb 28 14:24:00 crc kubenswrapper[4897]: I0228 14:24:00.177669 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4bae349-c9e5-4b71-a1df-52a49881626e" containerName="extract-utilities" Feb 28 14:24:00 crc kubenswrapper[4897]: I0228 14:24:00.178063 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4bae349-c9e5-4b71-a1df-52a49881626e" containerName="registry-server" Feb 28 14:24:00 crc kubenswrapper[4897]: I0228 14:24:00.179296 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538144-5qbb6" Feb 28 14:24:00 crc kubenswrapper[4897]: I0228 14:24:00.184366 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 14:24:00 crc kubenswrapper[4897]: I0228 14:24:00.184681 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 14:24:00 crc kubenswrapper[4897]: I0228 14:24:00.184739 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 14:24:00 crc kubenswrapper[4897]: I0228 14:24:00.187972 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538144-5qbb6"] Feb 28 14:24:00 crc kubenswrapper[4897]: I0228 14:24:00.302351 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx87g\" (UniqueName: \"kubernetes.io/projected/00ce0843-eb5a-4122-9dbc-2d12a37c310d-kube-api-access-tx87g\") pod \"auto-csr-approver-29538144-5qbb6\" (UID: \"00ce0843-eb5a-4122-9dbc-2d12a37c310d\") " pod="openshift-infra/auto-csr-approver-29538144-5qbb6" Feb 28 14:24:00 crc kubenswrapper[4897]: I0228 14:24:00.405146 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx87g\" (UniqueName: \"kubernetes.io/projected/00ce0843-eb5a-4122-9dbc-2d12a37c310d-kube-api-access-tx87g\") pod \"auto-csr-approver-29538144-5qbb6\" (UID: \"00ce0843-eb5a-4122-9dbc-2d12a37c310d\") " pod="openshift-infra/auto-csr-approver-29538144-5qbb6" Feb 28 14:24:00 crc kubenswrapper[4897]: I0228 14:24:00.437964 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx87g\" (UniqueName: \"kubernetes.io/projected/00ce0843-eb5a-4122-9dbc-2d12a37c310d-kube-api-access-tx87g\") pod \"auto-csr-approver-29538144-5qbb6\" (UID: \"00ce0843-eb5a-4122-9dbc-2d12a37c310d\") " 
pod="openshift-infra/auto-csr-approver-29538144-5qbb6" Feb 28 14:24:00 crc kubenswrapper[4897]: I0228 14:24:00.512174 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538144-5qbb6" Feb 28 14:24:01 crc kubenswrapper[4897]: W0228 14:24:01.032407 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00ce0843_eb5a_4122_9dbc_2d12a37c310d.slice/crio-428f4d498a0395399f7cc8a486b938aca2cb968c1607d97698862c43ef010447 WatchSource:0}: Error finding container 428f4d498a0395399f7cc8a486b938aca2cb968c1607d97698862c43ef010447: Status 404 returned error can't find the container with id 428f4d498a0395399f7cc8a486b938aca2cb968c1607d97698862c43ef010447 Feb 28 14:24:01 crc kubenswrapper[4897]: I0228 14:24:01.034096 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538144-5qbb6"] Feb 28 14:24:01 crc kubenswrapper[4897]: I0228 14:24:01.779718 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538144-5qbb6" event={"ID":"00ce0843-eb5a-4122-9dbc-2d12a37c310d","Type":"ContainerStarted","Data":"428f4d498a0395399f7cc8a486b938aca2cb968c1607d97698862c43ef010447"} Feb 28 14:24:02 crc kubenswrapper[4897]: I0228 14:24:02.449462 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nsg7b"] Feb 28 14:24:02 crc kubenswrapper[4897]: I0228 14:24:02.451975 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nsg7b" Feb 28 14:24:02 crc kubenswrapper[4897]: I0228 14:24:02.483933 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nsg7b"] Feb 28 14:24:02 crc kubenswrapper[4897]: I0228 14:24:02.567015 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23879dda-a3fe-4dfb-a372-0bd3e2e08c92-utilities\") pod \"community-operators-nsg7b\" (UID: \"23879dda-a3fe-4dfb-a372-0bd3e2e08c92\") " pod="openshift-marketplace/community-operators-nsg7b" Feb 28 14:24:02 crc kubenswrapper[4897]: I0228 14:24:02.568267 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23879dda-a3fe-4dfb-a372-0bd3e2e08c92-catalog-content\") pod \"community-operators-nsg7b\" (UID: \"23879dda-a3fe-4dfb-a372-0bd3e2e08c92\") " pod="openshift-marketplace/community-operators-nsg7b" Feb 28 14:24:02 crc kubenswrapper[4897]: I0228 14:24:02.568418 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjtdh\" (UniqueName: \"kubernetes.io/projected/23879dda-a3fe-4dfb-a372-0bd3e2e08c92-kube-api-access-vjtdh\") pod \"community-operators-nsg7b\" (UID: \"23879dda-a3fe-4dfb-a372-0bd3e2e08c92\") " pod="openshift-marketplace/community-operators-nsg7b" Feb 28 14:24:02 crc kubenswrapper[4897]: I0228 14:24:02.670967 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23879dda-a3fe-4dfb-a372-0bd3e2e08c92-utilities\") pod \"community-operators-nsg7b\" (UID: \"23879dda-a3fe-4dfb-a372-0bd3e2e08c92\") " pod="openshift-marketplace/community-operators-nsg7b" Feb 28 14:24:02 crc kubenswrapper[4897]: I0228 14:24:02.671053 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23879dda-a3fe-4dfb-a372-0bd3e2e08c92-catalog-content\") pod \"community-operators-nsg7b\" (UID: \"23879dda-a3fe-4dfb-a372-0bd3e2e08c92\") " pod="openshift-marketplace/community-operators-nsg7b" Feb 28 14:24:02 crc kubenswrapper[4897]: I0228 14:24:02.671116 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjtdh\" (UniqueName: \"kubernetes.io/projected/23879dda-a3fe-4dfb-a372-0bd3e2e08c92-kube-api-access-vjtdh\") pod \"community-operators-nsg7b\" (UID: \"23879dda-a3fe-4dfb-a372-0bd3e2e08c92\") " pod="openshift-marketplace/community-operators-nsg7b" Feb 28 14:24:02 crc kubenswrapper[4897]: I0228 14:24:02.671574 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23879dda-a3fe-4dfb-a372-0bd3e2e08c92-utilities\") pod \"community-operators-nsg7b\" (UID: \"23879dda-a3fe-4dfb-a372-0bd3e2e08c92\") " pod="openshift-marketplace/community-operators-nsg7b" Feb 28 14:24:02 crc kubenswrapper[4897]: I0228 14:24:02.671820 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23879dda-a3fe-4dfb-a372-0bd3e2e08c92-catalog-content\") pod \"community-operators-nsg7b\" (UID: \"23879dda-a3fe-4dfb-a372-0bd3e2e08c92\") " pod="openshift-marketplace/community-operators-nsg7b" Feb 28 14:24:02 crc kubenswrapper[4897]: I0228 14:24:02.696276 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjtdh\" (UniqueName: \"kubernetes.io/projected/23879dda-a3fe-4dfb-a372-0bd3e2e08c92-kube-api-access-vjtdh\") pod \"community-operators-nsg7b\" (UID: \"23879dda-a3fe-4dfb-a372-0bd3e2e08c92\") " pod="openshift-marketplace/community-operators-nsg7b" Feb 28 14:24:02 crc kubenswrapper[4897]: I0228 14:24:02.784490 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nsg7b" Feb 28 14:24:02 crc kubenswrapper[4897]: I0228 14:24:02.789783 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538144-5qbb6" event={"ID":"00ce0843-eb5a-4122-9dbc-2d12a37c310d","Type":"ContainerStarted","Data":"ec91b3d22db1ca828b9e8de1ce2c0148b92f423ad8a83279d5c23af57ee009ce"} Feb 28 14:24:02 crc kubenswrapper[4897]: I0228 14:24:02.832454 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29538144-5qbb6" podStartSLOduration=1.755983973 podStartE2EDuration="2.832429644s" podCreationTimestamp="2026-02-28 14:24:00 +0000 UTC" firstStartedPulling="2026-02-28 14:24:01.035212014 +0000 UTC m=+4055.277532661" lastFinishedPulling="2026-02-28 14:24:02.111657645 +0000 UTC m=+4056.353978332" observedRunningTime="2026-02-28 14:24:02.816671569 +0000 UTC m=+4057.058992216" watchObservedRunningTime="2026-02-28 14:24:02.832429644 +0000 UTC m=+4057.074750301" Feb 28 14:24:03 crc kubenswrapper[4897]: I0228 14:24:03.336090 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nsg7b"] Feb 28 14:24:03 crc kubenswrapper[4897]: W0228 14:24:03.337410 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23879dda_a3fe_4dfb_a372_0bd3e2e08c92.slice/crio-4aad5c82c00dd926ebfce6a2a8317bbbc8a534ab91a2164847149c08401f9a4c WatchSource:0}: Error finding container 4aad5c82c00dd926ebfce6a2a8317bbbc8a534ab91a2164847149c08401f9a4c: Status 404 returned error can't find the container with id 4aad5c82c00dd926ebfce6a2a8317bbbc8a534ab91a2164847149c08401f9a4c Feb 28 14:24:03 crc kubenswrapper[4897]: I0228 14:24:03.801013 4897 generic.go:334] "Generic (PLEG): container finished" podID="23879dda-a3fe-4dfb-a372-0bd3e2e08c92" containerID="b26d56101984cd72e688c6d952c6186f8dcaab7f86a77ebf0550ee051a2f1ac8" 
exitCode=0 Feb 28 14:24:03 crc kubenswrapper[4897]: I0228 14:24:03.801102 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsg7b" event={"ID":"23879dda-a3fe-4dfb-a372-0bd3e2e08c92","Type":"ContainerDied","Data":"b26d56101984cd72e688c6d952c6186f8dcaab7f86a77ebf0550ee051a2f1ac8"} Feb 28 14:24:03 crc kubenswrapper[4897]: I0228 14:24:03.801872 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsg7b" event={"ID":"23879dda-a3fe-4dfb-a372-0bd3e2e08c92","Type":"ContainerStarted","Data":"4aad5c82c00dd926ebfce6a2a8317bbbc8a534ab91a2164847149c08401f9a4c"} Feb 28 14:24:03 crc kubenswrapper[4897]: I0228 14:24:03.804187 4897 generic.go:334] "Generic (PLEG): container finished" podID="00ce0843-eb5a-4122-9dbc-2d12a37c310d" containerID="ec91b3d22db1ca828b9e8de1ce2c0148b92f423ad8a83279d5c23af57ee009ce" exitCode=0 Feb 28 14:24:03 crc kubenswrapper[4897]: I0228 14:24:03.804260 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538144-5qbb6" event={"ID":"00ce0843-eb5a-4122-9dbc-2d12a37c310d","Type":"ContainerDied","Data":"ec91b3d22db1ca828b9e8de1ce2c0148b92f423ad8a83279d5c23af57ee009ce"} Feb 28 14:24:04 crc kubenswrapper[4897]: E0228 14:24:04.580644 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 28 14:24:04 crc kubenswrapper[4897]: E0228 14:24:04.580989 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjtdh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-nsg7b_openshift-marketplace(23879dda-a3fe-4dfb-a372-0bd3e2e08c92): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 14:24:04 crc 
kubenswrapper[4897]: E0228 14:24:04.582330 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-nsg7b" podUID="23879dda-a3fe-4dfb-a372-0bd3e2e08c92" Feb 28 14:24:04 crc kubenswrapper[4897]: E0228 14:24:04.819776 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-nsg7b" podUID="23879dda-a3fe-4dfb-a372-0bd3e2e08c92" Feb 28 14:24:05 crc kubenswrapper[4897]: I0228 14:24:05.438544 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538144-5qbb6" Feb 28 14:24:05 crc kubenswrapper[4897]: I0228 14:24:05.635768 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tx87g\" (UniqueName: \"kubernetes.io/projected/00ce0843-eb5a-4122-9dbc-2d12a37c310d-kube-api-access-tx87g\") pod \"00ce0843-eb5a-4122-9dbc-2d12a37c310d\" (UID: \"00ce0843-eb5a-4122-9dbc-2d12a37c310d\") " Feb 28 14:24:05 crc kubenswrapper[4897]: I0228 14:24:05.645014 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00ce0843-eb5a-4122-9dbc-2d12a37c310d-kube-api-access-tx87g" (OuterVolumeSpecName: "kube-api-access-tx87g") pod "00ce0843-eb5a-4122-9dbc-2d12a37c310d" (UID: "00ce0843-eb5a-4122-9dbc-2d12a37c310d"). InnerVolumeSpecName "kube-api-access-tx87g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:24:05 crc kubenswrapper[4897]: I0228 14:24:05.738808 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tx87g\" (UniqueName: \"kubernetes.io/projected/00ce0843-eb5a-4122-9dbc-2d12a37c310d-kube-api-access-tx87g\") on node \"crc\" DevicePath \"\"" Feb 28 14:24:05 crc kubenswrapper[4897]: I0228 14:24:05.825739 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538144-5qbb6" event={"ID":"00ce0843-eb5a-4122-9dbc-2d12a37c310d","Type":"ContainerDied","Data":"428f4d498a0395399f7cc8a486b938aca2cb968c1607d97698862c43ef010447"} Feb 28 14:24:05 crc kubenswrapper[4897]: I0228 14:24:05.825786 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="428f4d498a0395399f7cc8a486b938aca2cb968c1607d97698862c43ef010447" Feb 28 14:24:05 crc kubenswrapper[4897]: I0228 14:24:05.825871 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538144-5qbb6" Feb 28 14:24:05 crc kubenswrapper[4897]: I0228 14:24:05.931700 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538138-69vxt"] Feb 28 14:24:05 crc kubenswrapper[4897]: I0228 14:24:05.942904 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538138-69vxt"] Feb 28 14:24:06 crc kubenswrapper[4897]: I0228 14:24:06.473667 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c63ce165-1697-4ed8-9276-6fe97714e195" path="/var/lib/kubelet/pods/c63ce165-1697-4ed8-9276-6fe97714e195/volumes" Feb 28 14:24:20 crc kubenswrapper[4897]: E0228 14:24:20.024668 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 28 14:24:20 crc kubenswrapper[4897]: E0228 14:24:20.025439 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjtdh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-nsg7b_openshift-marketplace(23879dda-a3fe-4dfb-a372-0bd3e2e08c92): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 14:24:20 crc kubenswrapper[4897]: E0228 14:24:20.026650 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-nsg7b" podUID="23879dda-a3fe-4dfb-a372-0bd3e2e08c92" Feb 28 14:24:31 crc kubenswrapper[4897]: E0228 14:24:31.460289 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-nsg7b" podUID="23879dda-a3fe-4dfb-a372-0bd3e2e08c92" Feb 28 14:24:43 crc kubenswrapper[4897]: E0228 14:24:43.988341 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 28 14:24:43 crc kubenswrapper[4897]: E0228 14:24:43.989094 4897 kuberuntime_manager.go:1274] "Unhandled 
Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjtdh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-nsg7b_openshift-marketplace(23879dda-a3fe-4dfb-a372-0bd3e2e08c92): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 
14:24:43 crc kubenswrapper[4897]: E0228 14:24:43.990358 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-nsg7b" podUID="23879dda-a3fe-4dfb-a372-0bd3e2e08c92" Feb 28 14:24:48 crc kubenswrapper[4897]: I0228 14:24:48.794086 4897 scope.go:117] "RemoveContainer" containerID="8ee9b95f18cb5befe8acec206724e2c7cf7f98be6fa1db3522a250e86dbe3f0d" Feb 28 14:24:57 crc kubenswrapper[4897]: E0228 14:24:57.460999 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-nsg7b" podUID="23879dda-a3fe-4dfb-a372-0bd3e2e08c92" Feb 28 14:25:03 crc kubenswrapper[4897]: I0228 14:25:03.370965 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 14:25:03 crc kubenswrapper[4897]: I0228 14:25:03.371749 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 14:25:08 crc kubenswrapper[4897]: E0228 14:25:08.460708 4897 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-nsg7b" podUID="23879dda-a3fe-4dfb-a372-0bd3e2e08c92" Feb 28 14:25:21 crc kubenswrapper[4897]: E0228 14:25:21.459049 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-nsg7b" podUID="23879dda-a3fe-4dfb-a372-0bd3e2e08c92" Feb 28 14:25:33 crc kubenswrapper[4897]: I0228 14:25:33.371398 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 14:25:33 crc kubenswrapper[4897]: I0228 14:25:33.371941 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 14:25:35 crc kubenswrapper[4897]: I0228 14:25:35.926981 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsg7b" event={"ID":"23879dda-a3fe-4dfb-a372-0bd3e2e08c92","Type":"ContainerStarted","Data":"751b3a2ae9a0984f515e89b1e21c053d154496f97a86b10d539f315965eb013d"} Feb 28 14:25:36 crc kubenswrapper[4897]: I0228 14:25:36.941946 4897 generic.go:334] "Generic (PLEG): container finished" podID="23879dda-a3fe-4dfb-a372-0bd3e2e08c92" containerID="751b3a2ae9a0984f515e89b1e21c053d154496f97a86b10d539f315965eb013d" 
exitCode=0 Feb 28 14:25:36 crc kubenswrapper[4897]: I0228 14:25:36.942074 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsg7b" event={"ID":"23879dda-a3fe-4dfb-a372-0bd3e2e08c92","Type":"ContainerDied","Data":"751b3a2ae9a0984f515e89b1e21c053d154496f97a86b10d539f315965eb013d"} Feb 28 14:25:37 crc kubenswrapper[4897]: I0228 14:25:37.956547 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsg7b" event={"ID":"23879dda-a3fe-4dfb-a372-0bd3e2e08c92","Type":"ContainerStarted","Data":"6af8655fbb38e072fefdbcc19255455c8d685b82f31684b18d5f08ab96ddedb8"} Feb 28 14:25:38 crc kubenswrapper[4897]: I0228 14:25:38.008029 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nsg7b" podStartSLOduration=2.437173834 podStartE2EDuration="1m36.007992981s" podCreationTimestamp="2026-02-28 14:24:02 +0000 UTC" firstStartedPulling="2026-02-28 14:24:03.802627757 +0000 UTC m=+4058.044948424" lastFinishedPulling="2026-02-28 14:25:37.373446904 +0000 UTC m=+4151.615767571" observedRunningTime="2026-02-28 14:25:37.987613836 +0000 UTC m=+4152.229934573" watchObservedRunningTime="2026-02-28 14:25:38.007992981 +0000 UTC m=+4152.250313678" Feb 28 14:25:42 crc kubenswrapper[4897]: I0228 14:25:42.784839 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nsg7b" Feb 28 14:25:42 crc kubenswrapper[4897]: I0228 14:25:42.786580 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nsg7b" Feb 28 14:25:42 crc kubenswrapper[4897]: I0228 14:25:42.876090 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nsg7b" Feb 28 14:25:43 crc kubenswrapper[4897]: I0228 14:25:43.111429 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/community-operators-nsg7b" Feb 28 14:25:43 crc kubenswrapper[4897]: I0228 14:25:43.197818 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nsg7b"] Feb 28 14:25:45 crc kubenswrapper[4897]: I0228 14:25:45.044802 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nsg7b" podUID="23879dda-a3fe-4dfb-a372-0bd3e2e08c92" containerName="registry-server" containerID="cri-o://6af8655fbb38e072fefdbcc19255455c8d685b82f31684b18d5f08ab96ddedb8" gracePeriod=2 Feb 28 14:25:45 crc kubenswrapper[4897]: I0228 14:25:45.586423 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nsg7b" Feb 28 14:25:45 crc kubenswrapper[4897]: I0228 14:25:45.757550 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjtdh\" (UniqueName: \"kubernetes.io/projected/23879dda-a3fe-4dfb-a372-0bd3e2e08c92-kube-api-access-vjtdh\") pod \"23879dda-a3fe-4dfb-a372-0bd3e2e08c92\" (UID: \"23879dda-a3fe-4dfb-a372-0bd3e2e08c92\") " Feb 28 14:25:45 crc kubenswrapper[4897]: I0228 14:25:45.758462 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23879dda-a3fe-4dfb-a372-0bd3e2e08c92-utilities\") pod \"23879dda-a3fe-4dfb-a372-0bd3e2e08c92\" (UID: \"23879dda-a3fe-4dfb-a372-0bd3e2e08c92\") " Feb 28 14:25:45 crc kubenswrapper[4897]: I0228 14:25:45.758936 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23879dda-a3fe-4dfb-a372-0bd3e2e08c92-catalog-content\") pod \"23879dda-a3fe-4dfb-a372-0bd3e2e08c92\" (UID: \"23879dda-a3fe-4dfb-a372-0bd3e2e08c92\") " Feb 28 14:25:45 crc kubenswrapper[4897]: I0228 14:25:45.759783 4897 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/23879dda-a3fe-4dfb-a372-0bd3e2e08c92-utilities" (OuterVolumeSpecName: "utilities") pod "23879dda-a3fe-4dfb-a372-0bd3e2e08c92" (UID: "23879dda-a3fe-4dfb-a372-0bd3e2e08c92"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:25:45 crc kubenswrapper[4897]: I0228 14:25:45.760824 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23879dda-a3fe-4dfb-a372-0bd3e2e08c92-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 14:25:45 crc kubenswrapper[4897]: I0228 14:25:45.772008 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23879dda-a3fe-4dfb-a372-0bd3e2e08c92-kube-api-access-vjtdh" (OuterVolumeSpecName: "kube-api-access-vjtdh") pod "23879dda-a3fe-4dfb-a372-0bd3e2e08c92" (UID: "23879dda-a3fe-4dfb-a372-0bd3e2e08c92"). InnerVolumeSpecName "kube-api-access-vjtdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:25:45 crc kubenswrapper[4897]: I0228 14:25:45.836985 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23879dda-a3fe-4dfb-a372-0bd3e2e08c92-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "23879dda-a3fe-4dfb-a372-0bd3e2e08c92" (UID: "23879dda-a3fe-4dfb-a372-0bd3e2e08c92"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:25:45 crc kubenswrapper[4897]: I0228 14:25:45.862903 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23879dda-a3fe-4dfb-a372-0bd3e2e08c92-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 14:25:45 crc kubenswrapper[4897]: I0228 14:25:45.862933 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjtdh\" (UniqueName: \"kubernetes.io/projected/23879dda-a3fe-4dfb-a372-0bd3e2e08c92-kube-api-access-vjtdh\") on node \"crc\" DevicePath \"\"" Feb 28 14:25:46 crc kubenswrapper[4897]: I0228 14:25:46.108737 4897 generic.go:334] "Generic (PLEG): container finished" podID="23879dda-a3fe-4dfb-a372-0bd3e2e08c92" containerID="6af8655fbb38e072fefdbcc19255455c8d685b82f31684b18d5f08ab96ddedb8" exitCode=0 Feb 28 14:25:46 crc kubenswrapper[4897]: I0228 14:25:46.109096 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsg7b" event={"ID":"23879dda-a3fe-4dfb-a372-0bd3e2e08c92","Type":"ContainerDied","Data":"6af8655fbb38e072fefdbcc19255455c8d685b82f31684b18d5f08ab96ddedb8"} Feb 28 14:25:46 crc kubenswrapper[4897]: I0228 14:25:46.109132 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsg7b" event={"ID":"23879dda-a3fe-4dfb-a372-0bd3e2e08c92","Type":"ContainerDied","Data":"4aad5c82c00dd926ebfce6a2a8317bbbc8a534ab91a2164847149c08401f9a4c"} Feb 28 14:25:46 crc kubenswrapper[4897]: I0228 14:25:46.109185 4897 scope.go:117] "RemoveContainer" containerID="6af8655fbb38e072fefdbcc19255455c8d685b82f31684b18d5f08ab96ddedb8" Feb 28 14:25:46 crc kubenswrapper[4897]: I0228 14:25:46.109389 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nsg7b" Feb 28 14:25:46 crc kubenswrapper[4897]: I0228 14:25:46.148044 4897 scope.go:117] "RemoveContainer" containerID="751b3a2ae9a0984f515e89b1e21c053d154496f97a86b10d539f315965eb013d" Feb 28 14:25:46 crc kubenswrapper[4897]: I0228 14:25:46.176791 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nsg7b"] Feb 28 14:25:46 crc kubenswrapper[4897]: I0228 14:25:46.188756 4897 scope.go:117] "RemoveContainer" containerID="b26d56101984cd72e688c6d952c6186f8dcaab7f86a77ebf0550ee051a2f1ac8" Feb 28 14:25:46 crc kubenswrapper[4897]: I0228 14:25:46.191759 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nsg7b"] Feb 28 14:25:46 crc kubenswrapper[4897]: I0228 14:25:46.262817 4897 scope.go:117] "RemoveContainer" containerID="6af8655fbb38e072fefdbcc19255455c8d685b82f31684b18d5f08ab96ddedb8" Feb 28 14:25:46 crc kubenswrapper[4897]: E0228 14:25:46.263480 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6af8655fbb38e072fefdbcc19255455c8d685b82f31684b18d5f08ab96ddedb8\": container with ID starting with 6af8655fbb38e072fefdbcc19255455c8d685b82f31684b18d5f08ab96ddedb8 not found: ID does not exist" containerID="6af8655fbb38e072fefdbcc19255455c8d685b82f31684b18d5f08ab96ddedb8" Feb 28 14:25:46 crc kubenswrapper[4897]: I0228 14:25:46.263538 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6af8655fbb38e072fefdbcc19255455c8d685b82f31684b18d5f08ab96ddedb8"} err="failed to get container status \"6af8655fbb38e072fefdbcc19255455c8d685b82f31684b18d5f08ab96ddedb8\": rpc error: code = NotFound desc = could not find container \"6af8655fbb38e072fefdbcc19255455c8d685b82f31684b18d5f08ab96ddedb8\": container with ID starting with 6af8655fbb38e072fefdbcc19255455c8d685b82f31684b18d5f08ab96ddedb8 not 
found: ID does not exist" Feb 28 14:25:46 crc kubenswrapper[4897]: I0228 14:25:46.263575 4897 scope.go:117] "RemoveContainer" containerID="751b3a2ae9a0984f515e89b1e21c053d154496f97a86b10d539f315965eb013d" Feb 28 14:25:46 crc kubenswrapper[4897]: E0228 14:25:46.264197 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"751b3a2ae9a0984f515e89b1e21c053d154496f97a86b10d539f315965eb013d\": container with ID starting with 751b3a2ae9a0984f515e89b1e21c053d154496f97a86b10d539f315965eb013d not found: ID does not exist" containerID="751b3a2ae9a0984f515e89b1e21c053d154496f97a86b10d539f315965eb013d" Feb 28 14:25:46 crc kubenswrapper[4897]: I0228 14:25:46.264234 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"751b3a2ae9a0984f515e89b1e21c053d154496f97a86b10d539f315965eb013d"} err="failed to get container status \"751b3a2ae9a0984f515e89b1e21c053d154496f97a86b10d539f315965eb013d\": rpc error: code = NotFound desc = could not find container \"751b3a2ae9a0984f515e89b1e21c053d154496f97a86b10d539f315965eb013d\": container with ID starting with 751b3a2ae9a0984f515e89b1e21c053d154496f97a86b10d539f315965eb013d not found: ID does not exist" Feb 28 14:25:46 crc kubenswrapper[4897]: I0228 14:25:46.264261 4897 scope.go:117] "RemoveContainer" containerID="b26d56101984cd72e688c6d952c6186f8dcaab7f86a77ebf0550ee051a2f1ac8" Feb 28 14:25:46 crc kubenswrapper[4897]: E0228 14:25:46.264669 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b26d56101984cd72e688c6d952c6186f8dcaab7f86a77ebf0550ee051a2f1ac8\": container with ID starting with b26d56101984cd72e688c6d952c6186f8dcaab7f86a77ebf0550ee051a2f1ac8 not found: ID does not exist" containerID="b26d56101984cd72e688c6d952c6186f8dcaab7f86a77ebf0550ee051a2f1ac8" Feb 28 14:25:46 crc kubenswrapper[4897]: I0228 14:25:46.264713 4897 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b26d56101984cd72e688c6d952c6186f8dcaab7f86a77ebf0550ee051a2f1ac8"} err="failed to get container status \"b26d56101984cd72e688c6d952c6186f8dcaab7f86a77ebf0550ee051a2f1ac8\": rpc error: code = NotFound desc = could not find container \"b26d56101984cd72e688c6d952c6186f8dcaab7f86a77ebf0550ee051a2f1ac8\": container with ID starting with b26d56101984cd72e688c6d952c6186f8dcaab7f86a77ebf0550ee051a2f1ac8 not found: ID does not exist" Feb 28 14:25:46 crc kubenswrapper[4897]: I0228 14:25:46.469505 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23879dda-a3fe-4dfb-a372-0bd3e2e08c92" path="/var/lib/kubelet/pods/23879dda-a3fe-4dfb-a372-0bd3e2e08c92/volumes" Feb 28 14:26:00 crc kubenswrapper[4897]: I0228 14:26:00.167183 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538146-8lz52"] Feb 28 14:26:00 crc kubenswrapper[4897]: E0228 14:26:00.168982 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23879dda-a3fe-4dfb-a372-0bd3e2e08c92" containerName="registry-server" Feb 28 14:26:00 crc kubenswrapper[4897]: I0228 14:26:00.169009 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="23879dda-a3fe-4dfb-a372-0bd3e2e08c92" containerName="registry-server" Feb 28 14:26:00 crc kubenswrapper[4897]: E0228 14:26:00.169087 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00ce0843-eb5a-4122-9dbc-2d12a37c310d" containerName="oc" Feb 28 14:26:00 crc kubenswrapper[4897]: I0228 14:26:00.169102 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="00ce0843-eb5a-4122-9dbc-2d12a37c310d" containerName="oc" Feb 28 14:26:00 crc kubenswrapper[4897]: E0228 14:26:00.169140 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23879dda-a3fe-4dfb-a372-0bd3e2e08c92" containerName="extract-content" Feb 28 14:26:00 crc kubenswrapper[4897]: I0228 14:26:00.169153 4897 
state_mem.go:107] "Deleted CPUSet assignment" podUID="23879dda-a3fe-4dfb-a372-0bd3e2e08c92" containerName="extract-content" Feb 28 14:26:00 crc kubenswrapper[4897]: E0228 14:26:00.169481 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23879dda-a3fe-4dfb-a372-0bd3e2e08c92" containerName="extract-utilities" Feb 28 14:26:00 crc kubenswrapper[4897]: I0228 14:26:00.169571 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="23879dda-a3fe-4dfb-a372-0bd3e2e08c92" containerName="extract-utilities" Feb 28 14:26:00 crc kubenswrapper[4897]: I0228 14:26:00.170840 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="23879dda-a3fe-4dfb-a372-0bd3e2e08c92" containerName="registry-server" Feb 28 14:26:00 crc kubenswrapper[4897]: I0228 14:26:00.170926 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="00ce0843-eb5a-4122-9dbc-2d12a37c310d" containerName="oc" Feb 28 14:26:00 crc kubenswrapper[4897]: I0228 14:26:00.173442 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538146-8lz52" Feb 28 14:26:00 crc kubenswrapper[4897]: I0228 14:26:00.177504 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 14:26:00 crc kubenswrapper[4897]: I0228 14:26:00.177584 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 14:26:00 crc kubenswrapper[4897]: I0228 14:26:00.179715 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 14:26:00 crc kubenswrapper[4897]: I0228 14:26:00.180778 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538146-8lz52"] Feb 28 14:26:00 crc kubenswrapper[4897]: I0228 14:26:00.254243 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlxw8\" (UniqueName: \"kubernetes.io/projected/6761f206-6d6b-441f-8753-215486d008d9-kube-api-access-mlxw8\") pod \"auto-csr-approver-29538146-8lz52\" (UID: \"6761f206-6d6b-441f-8753-215486d008d9\") " pod="openshift-infra/auto-csr-approver-29538146-8lz52" Feb 28 14:26:00 crc kubenswrapper[4897]: I0228 14:26:00.356195 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlxw8\" (UniqueName: \"kubernetes.io/projected/6761f206-6d6b-441f-8753-215486d008d9-kube-api-access-mlxw8\") pod \"auto-csr-approver-29538146-8lz52\" (UID: \"6761f206-6d6b-441f-8753-215486d008d9\") " pod="openshift-infra/auto-csr-approver-29538146-8lz52" Feb 28 14:26:00 crc kubenswrapper[4897]: I0228 14:26:00.393200 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlxw8\" (UniqueName: \"kubernetes.io/projected/6761f206-6d6b-441f-8753-215486d008d9-kube-api-access-mlxw8\") pod \"auto-csr-approver-29538146-8lz52\" (UID: \"6761f206-6d6b-441f-8753-215486d008d9\") " 
pod="openshift-infra/auto-csr-approver-29538146-8lz52" Feb 28 14:26:00 crc kubenswrapper[4897]: I0228 14:26:00.502338 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538146-8lz52" Feb 28 14:26:01 crc kubenswrapper[4897]: I0228 14:26:01.035807 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538146-8lz52"] Feb 28 14:26:01 crc kubenswrapper[4897]: I0228 14:26:01.303435 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538146-8lz52" event={"ID":"6761f206-6d6b-441f-8753-215486d008d9","Type":"ContainerStarted","Data":"b0ef8d19ec4b745d46a0ed7c80ee99cc4d997ce65f943f8ac699e86a38ebbe5f"} Feb 28 14:26:03 crc kubenswrapper[4897]: I0228 14:26:03.327741 4897 generic.go:334] "Generic (PLEG): container finished" podID="6761f206-6d6b-441f-8753-215486d008d9" containerID="e60fb8771f94e322adc6a8616bf52a58eed0315e6da2ddd99d4d9713abd4eb0e" exitCode=0 Feb 28 14:26:03 crc kubenswrapper[4897]: I0228 14:26:03.327950 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538146-8lz52" event={"ID":"6761f206-6d6b-441f-8753-215486d008d9","Type":"ContainerDied","Data":"e60fb8771f94e322adc6a8616bf52a58eed0315e6da2ddd99d4d9713abd4eb0e"} Feb 28 14:26:03 crc kubenswrapper[4897]: I0228 14:26:03.370784 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 14:26:03 crc kubenswrapper[4897]: I0228 14:26:03.370865 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 14:26:03 crc kubenswrapper[4897]: I0228 14:26:03.370929 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brq22" Feb 28 14:26:03 crc kubenswrapper[4897]: I0228 14:26:03.372038 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"26e1c5476cdd030eed7e2e4ba0b09eb958879e72a61c74d8632709a40cf9b234"} pod="openshift-machine-config-operator/machine-config-daemon-brq22" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 14:26:03 crc kubenswrapper[4897]: I0228 14:26:03.372145 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" containerID="cri-o://26e1c5476cdd030eed7e2e4ba0b09eb958879e72a61c74d8632709a40cf9b234" gracePeriod=600 Feb 28 14:26:04 crc kubenswrapper[4897]: I0228 14:26:04.358702 4897 generic.go:334] "Generic (PLEG): container finished" podID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerID="26e1c5476cdd030eed7e2e4ba0b09eb958879e72a61c74d8632709a40cf9b234" exitCode=0 Feb 28 14:26:04 crc kubenswrapper[4897]: I0228 14:26:04.358795 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerDied","Data":"26e1c5476cdd030eed7e2e4ba0b09eb958879e72a61c74d8632709a40cf9b234"} Feb 28 14:26:04 crc kubenswrapper[4897]: I0228 14:26:04.359199 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" 
event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerStarted","Data":"107c1d637bec2c46fac9478528cdf86a8ecfe0eb570742ec86168e20c7b24d89"} Feb 28 14:26:04 crc kubenswrapper[4897]: I0228 14:26:04.359239 4897 scope.go:117] "RemoveContainer" containerID="badfe283117bca71fcd1d917beb3f6d850e13521f8c84a43efd354fba13df106" Feb 28 14:26:04 crc kubenswrapper[4897]: I0228 14:26:04.753257 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538146-8lz52" Feb 28 14:26:04 crc kubenswrapper[4897]: I0228 14:26:04.856774 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlxw8\" (UniqueName: \"kubernetes.io/projected/6761f206-6d6b-441f-8753-215486d008d9-kube-api-access-mlxw8\") pod \"6761f206-6d6b-441f-8753-215486d008d9\" (UID: \"6761f206-6d6b-441f-8753-215486d008d9\") " Feb 28 14:26:04 crc kubenswrapper[4897]: I0228 14:26:04.875988 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6761f206-6d6b-441f-8753-215486d008d9-kube-api-access-mlxw8" (OuterVolumeSpecName: "kube-api-access-mlxw8") pod "6761f206-6d6b-441f-8753-215486d008d9" (UID: "6761f206-6d6b-441f-8753-215486d008d9"). InnerVolumeSpecName "kube-api-access-mlxw8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:26:04 crc kubenswrapper[4897]: I0228 14:26:04.960858 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlxw8\" (UniqueName: \"kubernetes.io/projected/6761f206-6d6b-441f-8753-215486d008d9-kube-api-access-mlxw8\") on node \"crc\" DevicePath \"\"" Feb 28 14:26:05 crc kubenswrapper[4897]: I0228 14:26:05.377179 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538146-8lz52" event={"ID":"6761f206-6d6b-441f-8753-215486d008d9","Type":"ContainerDied","Data":"b0ef8d19ec4b745d46a0ed7c80ee99cc4d997ce65f943f8ac699e86a38ebbe5f"} Feb 28 14:26:05 crc kubenswrapper[4897]: I0228 14:26:05.377250 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0ef8d19ec4b745d46a0ed7c80ee99cc4d997ce65f943f8ac699e86a38ebbe5f" Feb 28 14:26:05 crc kubenswrapper[4897]: I0228 14:26:05.377331 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538146-8lz52" Feb 28 14:26:05 crc kubenswrapper[4897]: E0228 14:26:05.628003 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6761f206_6d6b_441f_8753_215486d008d9.slice/crio-b0ef8d19ec4b745d46a0ed7c80ee99cc4d997ce65f943f8ac699e86a38ebbe5f\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6761f206_6d6b_441f_8753_215486d008d9.slice\": RecentStats: unable to find data in memory cache]" Feb 28 14:26:05 crc kubenswrapper[4897]: I0228 14:26:05.834906 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538140-vhh5d"] Feb 28 14:26:05 crc kubenswrapper[4897]: I0228 14:26:05.843342 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538140-vhh5d"] 
Feb 28 14:26:06 crc kubenswrapper[4897]: I0228 14:26:06.480770 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d245b55d-0e84-4cd2-b66b-8fad9627d79e" path="/var/lib/kubelet/pods/d245b55d-0e84-4cd2-b66b-8fad9627d79e/volumes" Feb 28 14:26:48 crc kubenswrapper[4897]: I0228 14:26:48.932030 4897 scope.go:117] "RemoveContainer" containerID="5c2a0943f32fde54c07661f033a00a93988dd8f04e61cc0c26a703364c66d2b9" Feb 28 14:27:55 crc kubenswrapper[4897]: I0228 14:27:55.711816 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bhkjm"] Feb 28 14:27:55 crc kubenswrapper[4897]: E0228 14:27:55.712729 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6761f206-6d6b-441f-8753-215486d008d9" containerName="oc" Feb 28 14:27:55 crc kubenswrapper[4897]: I0228 14:27:55.712742 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="6761f206-6d6b-441f-8753-215486d008d9" containerName="oc" Feb 28 14:27:55 crc kubenswrapper[4897]: I0228 14:27:55.712988 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="6761f206-6d6b-441f-8753-215486d008d9" containerName="oc" Feb 28 14:27:55 crc kubenswrapper[4897]: I0228 14:27:55.714365 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bhkjm" Feb 28 14:27:55 crc kubenswrapper[4897]: I0228 14:27:55.743302 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bhkjm"] Feb 28 14:27:55 crc kubenswrapper[4897]: I0228 14:27:55.855001 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2nhg\" (UniqueName: \"kubernetes.io/projected/a2ba2090-8584-4cfb-954b-2744ea990b7b-kube-api-access-h2nhg\") pod \"certified-operators-bhkjm\" (UID: \"a2ba2090-8584-4cfb-954b-2744ea990b7b\") " pod="openshift-marketplace/certified-operators-bhkjm" Feb 28 14:27:55 crc kubenswrapper[4897]: I0228 14:27:55.855118 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2ba2090-8584-4cfb-954b-2744ea990b7b-catalog-content\") pod \"certified-operators-bhkjm\" (UID: \"a2ba2090-8584-4cfb-954b-2744ea990b7b\") " pod="openshift-marketplace/certified-operators-bhkjm" Feb 28 14:27:55 crc kubenswrapper[4897]: I0228 14:27:55.855204 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2ba2090-8584-4cfb-954b-2744ea990b7b-utilities\") pod \"certified-operators-bhkjm\" (UID: \"a2ba2090-8584-4cfb-954b-2744ea990b7b\") " pod="openshift-marketplace/certified-operators-bhkjm" Feb 28 14:27:55 crc kubenswrapper[4897]: I0228 14:27:55.957004 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2ba2090-8584-4cfb-954b-2744ea990b7b-utilities\") pod \"certified-operators-bhkjm\" (UID: \"a2ba2090-8584-4cfb-954b-2744ea990b7b\") " pod="openshift-marketplace/certified-operators-bhkjm" Feb 28 14:27:55 crc kubenswrapper[4897]: I0228 14:27:55.957107 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-h2nhg\" (UniqueName: \"kubernetes.io/projected/a2ba2090-8584-4cfb-954b-2744ea990b7b-kube-api-access-h2nhg\") pod \"certified-operators-bhkjm\" (UID: \"a2ba2090-8584-4cfb-954b-2744ea990b7b\") " pod="openshift-marketplace/certified-operators-bhkjm" Feb 28 14:27:55 crc kubenswrapper[4897]: I0228 14:27:55.957194 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2ba2090-8584-4cfb-954b-2744ea990b7b-catalog-content\") pod \"certified-operators-bhkjm\" (UID: \"a2ba2090-8584-4cfb-954b-2744ea990b7b\") " pod="openshift-marketplace/certified-operators-bhkjm" Feb 28 14:27:55 crc kubenswrapper[4897]: I0228 14:27:55.957548 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2ba2090-8584-4cfb-954b-2744ea990b7b-utilities\") pod \"certified-operators-bhkjm\" (UID: \"a2ba2090-8584-4cfb-954b-2744ea990b7b\") " pod="openshift-marketplace/certified-operators-bhkjm" Feb 28 14:27:55 crc kubenswrapper[4897]: I0228 14:27:55.957601 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2ba2090-8584-4cfb-954b-2744ea990b7b-catalog-content\") pod \"certified-operators-bhkjm\" (UID: \"a2ba2090-8584-4cfb-954b-2744ea990b7b\") " pod="openshift-marketplace/certified-operators-bhkjm" Feb 28 14:27:55 crc kubenswrapper[4897]: I0228 14:27:55.975625 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2nhg\" (UniqueName: \"kubernetes.io/projected/a2ba2090-8584-4cfb-954b-2744ea990b7b-kube-api-access-h2nhg\") pod \"certified-operators-bhkjm\" (UID: \"a2ba2090-8584-4cfb-954b-2744ea990b7b\") " pod="openshift-marketplace/certified-operators-bhkjm" Feb 28 14:27:56 crc kubenswrapper[4897]: I0228 14:27:56.048523 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bhkjm" Feb 28 14:27:56 crc kubenswrapper[4897]: I0228 14:27:56.600482 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bhkjm"] Feb 28 14:27:56 crc kubenswrapper[4897]: I0228 14:27:56.772467 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhkjm" event={"ID":"a2ba2090-8584-4cfb-954b-2744ea990b7b","Type":"ContainerStarted","Data":"95584a18054b279ee2acba3917c8954648da693e67d6228328b88c5658974410"} Feb 28 14:27:57 crc kubenswrapper[4897]: I0228 14:27:57.786596 4897 generic.go:334] "Generic (PLEG): container finished" podID="a2ba2090-8584-4cfb-954b-2744ea990b7b" containerID="722dd38202ff26bc9c3fac865c5f317c6e509d192d9e89bda9242755f1c37cfc" exitCode=0 Feb 28 14:27:57 crc kubenswrapper[4897]: I0228 14:27:57.786659 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhkjm" event={"ID":"a2ba2090-8584-4cfb-954b-2744ea990b7b","Type":"ContainerDied","Data":"722dd38202ff26bc9c3fac865c5f317c6e509d192d9e89bda9242755f1c37cfc"} Feb 28 14:27:57 crc kubenswrapper[4897]: I0228 14:27:57.791925 4897 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 14:27:59 crc kubenswrapper[4897]: I0228 14:27:59.806977 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhkjm" event={"ID":"a2ba2090-8584-4cfb-954b-2744ea990b7b","Type":"ContainerStarted","Data":"9d8d20090bdad17a5e12fae2f8e81cd8eaa2efbd9856fa3202fa287816ef55b1"} Feb 28 14:28:00 crc kubenswrapper[4897]: I0228 14:28:00.173945 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538148-h2mhj"] Feb 28 14:28:00 crc kubenswrapper[4897]: I0228 14:28:00.176140 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538148-h2mhj" Feb 28 14:28:00 crc kubenswrapper[4897]: I0228 14:28:00.185600 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538148-h2mhj"] Feb 28 14:28:00 crc kubenswrapper[4897]: I0228 14:28:00.186177 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 14:28:00 crc kubenswrapper[4897]: I0228 14:28:00.186406 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 14:28:00 crc kubenswrapper[4897]: I0228 14:28:00.186529 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 14:28:00 crc kubenswrapper[4897]: I0228 14:28:00.358073 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnzn9\" (UniqueName: \"kubernetes.io/projected/55bac071-ec11-4344-a337-6d8bc24bca6f-kube-api-access-cnzn9\") pod \"auto-csr-approver-29538148-h2mhj\" (UID: \"55bac071-ec11-4344-a337-6d8bc24bca6f\") " pod="openshift-infra/auto-csr-approver-29538148-h2mhj" Feb 28 14:28:00 crc kubenswrapper[4897]: I0228 14:28:00.460136 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnzn9\" (UniqueName: \"kubernetes.io/projected/55bac071-ec11-4344-a337-6d8bc24bca6f-kube-api-access-cnzn9\") pod \"auto-csr-approver-29538148-h2mhj\" (UID: \"55bac071-ec11-4344-a337-6d8bc24bca6f\") " pod="openshift-infra/auto-csr-approver-29538148-h2mhj" Feb 28 14:28:00 crc kubenswrapper[4897]: I0228 14:28:00.504536 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnzn9\" (UniqueName: \"kubernetes.io/projected/55bac071-ec11-4344-a337-6d8bc24bca6f-kube-api-access-cnzn9\") pod \"auto-csr-approver-29538148-h2mhj\" (UID: \"55bac071-ec11-4344-a337-6d8bc24bca6f\") " 
pod="openshift-infra/auto-csr-approver-29538148-h2mhj" Feb 28 14:28:00 crc kubenswrapper[4897]: I0228 14:28:00.519497 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538148-h2mhj" Feb 28 14:28:00 crc kubenswrapper[4897]: I0228 14:28:00.818157 4897 generic.go:334] "Generic (PLEG): container finished" podID="a2ba2090-8584-4cfb-954b-2744ea990b7b" containerID="9d8d20090bdad17a5e12fae2f8e81cd8eaa2efbd9856fa3202fa287816ef55b1" exitCode=0 Feb 28 14:28:00 crc kubenswrapper[4897]: I0228 14:28:00.818336 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhkjm" event={"ID":"a2ba2090-8584-4cfb-954b-2744ea990b7b","Type":"ContainerDied","Data":"9d8d20090bdad17a5e12fae2f8e81cd8eaa2efbd9856fa3202fa287816ef55b1"} Feb 28 14:28:01 crc kubenswrapper[4897]: W0228 14:28:01.011902 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55bac071_ec11_4344_a337_6d8bc24bca6f.slice/crio-701a947e530afb54695107962fb521bab145fe776805d028cf363342d3607393 WatchSource:0}: Error finding container 701a947e530afb54695107962fb521bab145fe776805d028cf363342d3607393: Status 404 returned error can't find the container with id 701a947e530afb54695107962fb521bab145fe776805d028cf363342d3607393 Feb 28 14:28:01 crc kubenswrapper[4897]: I0228 14:28:01.012902 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538148-h2mhj"] Feb 28 14:28:01 crc kubenswrapper[4897]: I0228 14:28:01.832906 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhkjm" event={"ID":"a2ba2090-8584-4cfb-954b-2744ea990b7b","Type":"ContainerStarted","Data":"736393c4b17d71aba5072028eee2a1722ae2fd1144f66fae5da596ad71471296"} Feb 28 14:28:01 crc kubenswrapper[4897]: I0228 14:28:01.834272 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29538148-h2mhj" event={"ID":"55bac071-ec11-4344-a337-6d8bc24bca6f","Type":"ContainerStarted","Data":"701a947e530afb54695107962fb521bab145fe776805d028cf363342d3607393"} Feb 28 14:28:01 crc kubenswrapper[4897]: I0228 14:28:01.875852 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bhkjm" podStartSLOduration=3.221710291 podStartE2EDuration="6.875819093s" podCreationTimestamp="2026-02-28 14:27:55 +0000 UTC" firstStartedPulling="2026-02-28 14:27:57.791534119 +0000 UTC m=+4292.033854816" lastFinishedPulling="2026-02-28 14:28:01.445642951 +0000 UTC m=+4295.687963618" observedRunningTime="2026-02-28 14:28:01.855357606 +0000 UTC m=+4296.097678303" watchObservedRunningTime="2026-02-28 14:28:01.875819093 +0000 UTC m=+4296.118139781" Feb 28 14:28:02 crc kubenswrapper[4897]: I0228 14:28:02.849543 4897 generic.go:334] "Generic (PLEG): container finished" podID="55bac071-ec11-4344-a337-6d8bc24bca6f" containerID="a099d43d09883868151eb4cf7cd871cc8856c3eca85c05425fc6ca2c72698051" exitCode=0 Feb 28 14:28:02 crc kubenswrapper[4897]: I0228 14:28:02.849660 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538148-h2mhj" event={"ID":"55bac071-ec11-4344-a337-6d8bc24bca6f","Type":"ContainerDied","Data":"a099d43d09883868151eb4cf7cd871cc8856c3eca85c05425fc6ca2c72698051"} Feb 28 14:28:03 crc kubenswrapper[4897]: I0228 14:28:03.371724 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 14:28:03 crc kubenswrapper[4897]: I0228 14:28:03.372052 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" 
podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 14:28:04 crc kubenswrapper[4897]: I0228 14:28:04.293489 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538148-h2mhj" Feb 28 14:28:04 crc kubenswrapper[4897]: I0228 14:28:04.341950 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnzn9\" (UniqueName: \"kubernetes.io/projected/55bac071-ec11-4344-a337-6d8bc24bca6f-kube-api-access-cnzn9\") pod \"55bac071-ec11-4344-a337-6d8bc24bca6f\" (UID: \"55bac071-ec11-4344-a337-6d8bc24bca6f\") " Feb 28 14:28:04 crc kubenswrapper[4897]: I0228 14:28:04.351882 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55bac071-ec11-4344-a337-6d8bc24bca6f-kube-api-access-cnzn9" (OuterVolumeSpecName: "kube-api-access-cnzn9") pod "55bac071-ec11-4344-a337-6d8bc24bca6f" (UID: "55bac071-ec11-4344-a337-6d8bc24bca6f"). InnerVolumeSpecName "kube-api-access-cnzn9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:28:04 crc kubenswrapper[4897]: I0228 14:28:04.448821 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnzn9\" (UniqueName: \"kubernetes.io/projected/55bac071-ec11-4344-a337-6d8bc24bca6f-kube-api-access-cnzn9\") on node \"crc\" DevicePath \"\"" Feb 28 14:28:04 crc kubenswrapper[4897]: I0228 14:28:04.875603 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538148-h2mhj" event={"ID":"55bac071-ec11-4344-a337-6d8bc24bca6f","Type":"ContainerDied","Data":"701a947e530afb54695107962fb521bab145fe776805d028cf363342d3607393"} Feb 28 14:28:04 crc kubenswrapper[4897]: I0228 14:28:04.875668 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="701a947e530afb54695107962fb521bab145fe776805d028cf363342d3607393" Feb 28 14:28:04 crc kubenswrapper[4897]: I0228 14:28:04.875758 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538148-h2mhj" Feb 28 14:28:05 crc kubenswrapper[4897]: I0228 14:28:05.385934 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538142-gg955"] Feb 28 14:28:05 crc kubenswrapper[4897]: I0228 14:28:05.396703 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538142-gg955"] Feb 28 14:28:06 crc kubenswrapper[4897]: I0228 14:28:06.049016 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bhkjm" Feb 28 14:28:06 crc kubenswrapper[4897]: I0228 14:28:06.049413 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bhkjm" Feb 28 14:28:06 crc kubenswrapper[4897]: I0228 14:28:06.169019 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bhkjm" Feb 
28 14:28:06 crc kubenswrapper[4897]: I0228 14:28:06.477854 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7689c70-19a2-422f-8fd8-9f736a27052c" path="/var/lib/kubelet/pods/f7689c70-19a2-422f-8fd8-9f736a27052c/volumes" Feb 28 14:28:06 crc kubenswrapper[4897]: I0228 14:28:06.991298 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bhkjm" Feb 28 14:28:07 crc kubenswrapper[4897]: I0228 14:28:07.061665 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bhkjm"] Feb 28 14:28:08 crc kubenswrapper[4897]: I0228 14:28:08.921803 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bhkjm" podUID="a2ba2090-8584-4cfb-954b-2744ea990b7b" containerName="registry-server" containerID="cri-o://736393c4b17d71aba5072028eee2a1722ae2fd1144f66fae5da596ad71471296" gracePeriod=2 Feb 28 14:28:09 crc kubenswrapper[4897]: E0228 14:28:09.112134 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2ba2090_8584_4cfb_954b_2744ea990b7b.slice/crio-736393c4b17d71aba5072028eee2a1722ae2fd1144f66fae5da596ad71471296.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2ba2090_8584_4cfb_954b_2744ea990b7b.slice/crio-conmon-736393c4b17d71aba5072028eee2a1722ae2fd1144f66fae5da596ad71471296.scope\": RecentStats: unable to find data in memory cache]" Feb 28 14:28:09 crc kubenswrapper[4897]: I0228 14:28:09.444582 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bhkjm" Feb 28 14:28:09 crc kubenswrapper[4897]: I0228 14:28:09.464903 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2ba2090-8584-4cfb-954b-2744ea990b7b-catalog-content\") pod \"a2ba2090-8584-4cfb-954b-2744ea990b7b\" (UID: \"a2ba2090-8584-4cfb-954b-2744ea990b7b\") " Feb 28 14:28:09 crc kubenswrapper[4897]: I0228 14:28:09.465453 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2ba2090-8584-4cfb-954b-2744ea990b7b-utilities\") pod \"a2ba2090-8584-4cfb-954b-2744ea990b7b\" (UID: \"a2ba2090-8584-4cfb-954b-2744ea990b7b\") " Feb 28 14:28:09 crc kubenswrapper[4897]: I0228 14:28:09.467060 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2ba2090-8584-4cfb-954b-2744ea990b7b-utilities" (OuterVolumeSpecName: "utilities") pod "a2ba2090-8584-4cfb-954b-2744ea990b7b" (UID: "a2ba2090-8584-4cfb-954b-2744ea990b7b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:28:09 crc kubenswrapper[4897]: I0228 14:28:09.468562 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2nhg\" (UniqueName: \"kubernetes.io/projected/a2ba2090-8584-4cfb-954b-2744ea990b7b-kube-api-access-h2nhg\") pod \"a2ba2090-8584-4cfb-954b-2744ea990b7b\" (UID: \"a2ba2090-8584-4cfb-954b-2744ea990b7b\") " Feb 28 14:28:09 crc kubenswrapper[4897]: I0228 14:28:09.469966 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2ba2090-8584-4cfb-954b-2744ea990b7b-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 14:28:09 crc kubenswrapper[4897]: I0228 14:28:09.486298 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2ba2090-8584-4cfb-954b-2744ea990b7b-kube-api-access-h2nhg" (OuterVolumeSpecName: "kube-api-access-h2nhg") pod "a2ba2090-8584-4cfb-954b-2744ea990b7b" (UID: "a2ba2090-8584-4cfb-954b-2744ea990b7b"). InnerVolumeSpecName "kube-api-access-h2nhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:28:09 crc kubenswrapper[4897]: I0228 14:28:09.517155 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2ba2090-8584-4cfb-954b-2744ea990b7b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a2ba2090-8584-4cfb-954b-2744ea990b7b" (UID: "a2ba2090-8584-4cfb-954b-2744ea990b7b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:28:09 crc kubenswrapper[4897]: I0228 14:28:09.572601 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2ba2090-8584-4cfb-954b-2744ea990b7b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 14:28:09 crc kubenswrapper[4897]: I0228 14:28:09.572634 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2nhg\" (UniqueName: \"kubernetes.io/projected/a2ba2090-8584-4cfb-954b-2744ea990b7b-kube-api-access-h2nhg\") on node \"crc\" DevicePath \"\"" Feb 28 14:28:09 crc kubenswrapper[4897]: I0228 14:28:09.937748 4897 generic.go:334] "Generic (PLEG): container finished" podID="a2ba2090-8584-4cfb-954b-2744ea990b7b" containerID="736393c4b17d71aba5072028eee2a1722ae2fd1144f66fae5da596ad71471296" exitCode=0 Feb 28 14:28:09 crc kubenswrapper[4897]: I0228 14:28:09.937832 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhkjm" event={"ID":"a2ba2090-8584-4cfb-954b-2744ea990b7b","Type":"ContainerDied","Data":"736393c4b17d71aba5072028eee2a1722ae2fd1144f66fae5da596ad71471296"} Feb 28 14:28:09 crc kubenswrapper[4897]: I0228 14:28:09.937919 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhkjm" event={"ID":"a2ba2090-8584-4cfb-954b-2744ea990b7b","Type":"ContainerDied","Data":"95584a18054b279ee2acba3917c8954648da693e67d6228328b88c5658974410"} Feb 28 14:28:09 crc kubenswrapper[4897]: I0228 14:28:09.937953 4897 scope.go:117] "RemoveContainer" containerID="736393c4b17d71aba5072028eee2a1722ae2fd1144f66fae5da596ad71471296" Feb 28 14:28:09 crc kubenswrapper[4897]: I0228 14:28:09.937861 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bhkjm" Feb 28 14:28:09 crc kubenswrapper[4897]: I0228 14:28:09.964291 4897 scope.go:117] "RemoveContainer" containerID="9d8d20090bdad17a5e12fae2f8e81cd8eaa2efbd9856fa3202fa287816ef55b1" Feb 28 14:28:09 crc kubenswrapper[4897]: I0228 14:28:09.996295 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bhkjm"] Feb 28 14:28:10 crc kubenswrapper[4897]: I0228 14:28:10.005644 4897 scope.go:117] "RemoveContainer" containerID="722dd38202ff26bc9c3fac865c5f317c6e509d192d9e89bda9242755f1c37cfc" Feb 28 14:28:10 crc kubenswrapper[4897]: I0228 14:28:10.012696 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bhkjm"] Feb 28 14:28:10 crc kubenswrapper[4897]: I0228 14:28:10.075087 4897 scope.go:117] "RemoveContainer" containerID="736393c4b17d71aba5072028eee2a1722ae2fd1144f66fae5da596ad71471296" Feb 28 14:28:10 crc kubenswrapper[4897]: E0228 14:28:10.075741 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"736393c4b17d71aba5072028eee2a1722ae2fd1144f66fae5da596ad71471296\": container with ID starting with 736393c4b17d71aba5072028eee2a1722ae2fd1144f66fae5da596ad71471296 not found: ID does not exist" containerID="736393c4b17d71aba5072028eee2a1722ae2fd1144f66fae5da596ad71471296" Feb 28 14:28:10 crc kubenswrapper[4897]: I0228 14:28:10.075802 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"736393c4b17d71aba5072028eee2a1722ae2fd1144f66fae5da596ad71471296"} err="failed to get container status \"736393c4b17d71aba5072028eee2a1722ae2fd1144f66fae5da596ad71471296\": rpc error: code = NotFound desc = could not find container \"736393c4b17d71aba5072028eee2a1722ae2fd1144f66fae5da596ad71471296\": container with ID starting with 736393c4b17d71aba5072028eee2a1722ae2fd1144f66fae5da596ad71471296 not 
found: ID does not exist" Feb 28 14:28:10 crc kubenswrapper[4897]: I0228 14:28:10.075838 4897 scope.go:117] "RemoveContainer" containerID="9d8d20090bdad17a5e12fae2f8e81cd8eaa2efbd9856fa3202fa287816ef55b1" Feb 28 14:28:10 crc kubenswrapper[4897]: E0228 14:28:10.076624 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d8d20090bdad17a5e12fae2f8e81cd8eaa2efbd9856fa3202fa287816ef55b1\": container with ID starting with 9d8d20090bdad17a5e12fae2f8e81cd8eaa2efbd9856fa3202fa287816ef55b1 not found: ID does not exist" containerID="9d8d20090bdad17a5e12fae2f8e81cd8eaa2efbd9856fa3202fa287816ef55b1" Feb 28 14:28:10 crc kubenswrapper[4897]: I0228 14:28:10.076691 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d8d20090bdad17a5e12fae2f8e81cd8eaa2efbd9856fa3202fa287816ef55b1"} err="failed to get container status \"9d8d20090bdad17a5e12fae2f8e81cd8eaa2efbd9856fa3202fa287816ef55b1\": rpc error: code = NotFound desc = could not find container \"9d8d20090bdad17a5e12fae2f8e81cd8eaa2efbd9856fa3202fa287816ef55b1\": container with ID starting with 9d8d20090bdad17a5e12fae2f8e81cd8eaa2efbd9856fa3202fa287816ef55b1 not found: ID does not exist" Feb 28 14:28:10 crc kubenswrapper[4897]: I0228 14:28:10.076729 4897 scope.go:117] "RemoveContainer" containerID="722dd38202ff26bc9c3fac865c5f317c6e509d192d9e89bda9242755f1c37cfc" Feb 28 14:28:10 crc kubenswrapper[4897]: E0228 14:28:10.077220 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"722dd38202ff26bc9c3fac865c5f317c6e509d192d9e89bda9242755f1c37cfc\": container with ID starting with 722dd38202ff26bc9c3fac865c5f317c6e509d192d9e89bda9242755f1c37cfc not found: ID does not exist" containerID="722dd38202ff26bc9c3fac865c5f317c6e509d192d9e89bda9242755f1c37cfc" Feb 28 14:28:10 crc kubenswrapper[4897]: I0228 14:28:10.077271 4897 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"722dd38202ff26bc9c3fac865c5f317c6e509d192d9e89bda9242755f1c37cfc"} err="failed to get container status \"722dd38202ff26bc9c3fac865c5f317c6e509d192d9e89bda9242755f1c37cfc\": rpc error: code = NotFound desc = could not find container \"722dd38202ff26bc9c3fac865c5f317c6e509d192d9e89bda9242755f1c37cfc\": container with ID starting with 722dd38202ff26bc9c3fac865c5f317c6e509d192d9e89bda9242755f1c37cfc not found: ID does not exist" Feb 28 14:28:10 crc kubenswrapper[4897]: I0228 14:28:10.468875 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2ba2090-8584-4cfb-954b-2744ea990b7b" path="/var/lib/kubelet/pods/a2ba2090-8584-4cfb-954b-2744ea990b7b/volumes" Feb 28 14:28:33 crc kubenswrapper[4897]: I0228 14:28:33.371376 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 14:28:33 crc kubenswrapper[4897]: I0228 14:28:33.372052 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 14:28:49 crc kubenswrapper[4897]: I0228 14:28:49.063550 4897 scope.go:117] "RemoveContainer" containerID="6d73678c8b345991074429112aa1c425013f7c2dcf7af4e25c2a6a7ac0156e23" Feb 28 14:29:03 crc kubenswrapper[4897]: I0228 14:29:03.370746 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 14:29:03 crc kubenswrapper[4897]: I0228 14:29:03.371445 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 14:29:03 crc kubenswrapper[4897]: I0228 14:29:03.371512 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brq22" Feb 28 14:29:03 crc kubenswrapper[4897]: I0228 14:29:03.372637 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"107c1d637bec2c46fac9478528cdf86a8ecfe0eb570742ec86168e20c7b24d89"} pod="openshift-machine-config-operator/machine-config-daemon-brq22" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 14:29:03 crc kubenswrapper[4897]: I0228 14:29:03.372745 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" containerID="cri-o://107c1d637bec2c46fac9478528cdf86a8ecfe0eb570742ec86168e20c7b24d89" gracePeriod=600 Feb 28 14:29:03 crc kubenswrapper[4897]: E0228 14:29:03.495545 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:29:03 crc 
kubenswrapper[4897]: I0228 14:29:03.575829 4897 generic.go:334] "Generic (PLEG): container finished" podID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerID="107c1d637bec2c46fac9478528cdf86a8ecfe0eb570742ec86168e20c7b24d89" exitCode=0 Feb 28 14:29:03 crc kubenswrapper[4897]: I0228 14:29:03.575892 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerDied","Data":"107c1d637bec2c46fac9478528cdf86a8ecfe0eb570742ec86168e20c7b24d89"} Feb 28 14:29:03 crc kubenswrapper[4897]: I0228 14:29:03.575945 4897 scope.go:117] "RemoveContainer" containerID="26e1c5476cdd030eed7e2e4ba0b09eb958879e72a61c74d8632709a40cf9b234" Feb 28 14:29:03 crc kubenswrapper[4897]: I0228 14:29:03.576808 4897 scope.go:117] "RemoveContainer" containerID="107c1d637bec2c46fac9478528cdf86a8ecfe0eb570742ec86168e20c7b24d89" Feb 28 14:29:03 crc kubenswrapper[4897]: E0228 14:29:03.577274 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:29:14 crc kubenswrapper[4897]: I0228 14:29:14.456439 4897 scope.go:117] "RemoveContainer" containerID="107c1d637bec2c46fac9478528cdf86a8ecfe0eb570742ec86168e20c7b24d89" Feb 28 14:29:14 crc kubenswrapper[4897]: E0228 14:29:14.457080 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:29:27 crc kubenswrapper[4897]: I0228 14:29:27.456550 4897 scope.go:117] "RemoveContainer" containerID="107c1d637bec2c46fac9478528cdf86a8ecfe0eb570742ec86168e20c7b24d89" Feb 28 14:29:27 crc kubenswrapper[4897]: E0228 14:29:27.458063 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:29:41 crc kubenswrapper[4897]: I0228 14:29:41.456602 4897 scope.go:117] "RemoveContainer" containerID="107c1d637bec2c46fac9478528cdf86a8ecfe0eb570742ec86168e20c7b24d89" Feb 28 14:29:41 crc kubenswrapper[4897]: E0228 14:29:41.459702 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:29:54 crc kubenswrapper[4897]: I0228 14:29:54.456046 4897 scope.go:117] "RemoveContainer" containerID="107c1d637bec2c46fac9478528cdf86a8ecfe0eb570742ec86168e20c7b24d89" Feb 28 14:29:54 crc kubenswrapper[4897]: E0228 14:29:54.456828 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:30:00 crc kubenswrapper[4897]: I0228 14:30:00.176365 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29538150-ml6s9"] Feb 28 14:30:00 crc kubenswrapper[4897]: E0228 14:30:00.177222 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55bac071-ec11-4344-a337-6d8bc24bca6f" containerName="oc" Feb 28 14:30:00 crc kubenswrapper[4897]: I0228 14:30:00.177234 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="55bac071-ec11-4344-a337-6d8bc24bca6f" containerName="oc" Feb 28 14:30:00 crc kubenswrapper[4897]: E0228 14:30:00.177250 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2ba2090-8584-4cfb-954b-2744ea990b7b" containerName="extract-utilities" Feb 28 14:30:00 crc kubenswrapper[4897]: I0228 14:30:00.177256 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2ba2090-8584-4cfb-954b-2744ea990b7b" containerName="extract-utilities" Feb 28 14:30:00 crc kubenswrapper[4897]: E0228 14:30:00.177275 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2ba2090-8584-4cfb-954b-2744ea990b7b" containerName="registry-server" Feb 28 14:30:00 crc kubenswrapper[4897]: I0228 14:30:00.177281 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2ba2090-8584-4cfb-954b-2744ea990b7b" containerName="registry-server" Feb 28 14:30:00 crc kubenswrapper[4897]: E0228 14:30:00.177306 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2ba2090-8584-4cfb-954b-2744ea990b7b" containerName="extract-content" Feb 28 14:30:00 crc kubenswrapper[4897]: I0228 14:30:00.177326 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2ba2090-8584-4cfb-954b-2744ea990b7b" containerName="extract-content" Feb 28 14:30:00 crc 
kubenswrapper[4897]: I0228 14:30:00.177514 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2ba2090-8584-4cfb-954b-2744ea990b7b" containerName="registry-server" Feb 28 14:30:00 crc kubenswrapper[4897]: I0228 14:30:00.177526 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="55bac071-ec11-4344-a337-6d8bc24bca6f" containerName="oc" Feb 28 14:30:00 crc kubenswrapper[4897]: I0228 14:30:00.178278 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29538150-ml6s9" Feb 28 14:30:00 crc kubenswrapper[4897]: I0228 14:30:00.184295 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29538150-ml6s9"] Feb 28 14:30:00 crc kubenswrapper[4897]: I0228 14:30:00.184895 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 28 14:30:00 crc kubenswrapper[4897]: I0228 14:30:00.185685 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 28 14:30:00 crc kubenswrapper[4897]: I0228 14:30:00.228230 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f-secret-volume\") pod \"collect-profiles-29538150-ml6s9\" (UID: \"5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538150-ml6s9" Feb 28 14:30:00 crc kubenswrapper[4897]: I0228 14:30:00.228504 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj9rb\" (UniqueName: \"kubernetes.io/projected/5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f-kube-api-access-hj9rb\") pod \"collect-profiles-29538150-ml6s9\" (UID: \"5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29538150-ml6s9" Feb 28 14:30:00 crc kubenswrapper[4897]: I0228 14:30:00.228621 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f-config-volume\") pod \"collect-profiles-29538150-ml6s9\" (UID: \"5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538150-ml6s9" Feb 28 14:30:00 crc kubenswrapper[4897]: I0228 14:30:00.254722 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538150-9rxzk"] Feb 28 14:30:00 crc kubenswrapper[4897]: I0228 14:30:00.256230 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538150-9rxzk" Feb 28 14:30:00 crc kubenswrapper[4897]: I0228 14:30:00.258773 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 14:30:00 crc kubenswrapper[4897]: I0228 14:30:00.258927 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 14:30:00 crc kubenswrapper[4897]: I0228 14:30:00.258948 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 14:30:00 crc kubenswrapper[4897]: I0228 14:30:00.268857 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538150-9rxzk"] Feb 28 14:30:00 crc kubenswrapper[4897]: I0228 14:30:00.330989 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2swvq\" (UniqueName: \"kubernetes.io/projected/2b5b0de5-b621-4aac-906d-ee2456262d86-kube-api-access-2swvq\") pod \"auto-csr-approver-29538150-9rxzk\" (UID: \"2b5b0de5-b621-4aac-906d-ee2456262d86\") " 
pod="openshift-infra/auto-csr-approver-29538150-9rxzk" Feb 28 14:30:00 crc kubenswrapper[4897]: I0228 14:30:00.331121 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f-secret-volume\") pod \"collect-profiles-29538150-ml6s9\" (UID: \"5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538150-ml6s9" Feb 28 14:30:00 crc kubenswrapper[4897]: I0228 14:30:00.331160 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj9rb\" (UniqueName: \"kubernetes.io/projected/5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f-kube-api-access-hj9rb\") pod \"collect-profiles-29538150-ml6s9\" (UID: \"5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538150-ml6s9" Feb 28 14:30:00 crc kubenswrapper[4897]: I0228 14:30:00.331222 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f-config-volume\") pod \"collect-profiles-29538150-ml6s9\" (UID: \"5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538150-ml6s9" Feb 28 14:30:00 crc kubenswrapper[4897]: I0228 14:30:00.332397 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f-config-volume\") pod \"collect-profiles-29538150-ml6s9\" (UID: \"5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538150-ml6s9" Feb 28 14:30:00 crc kubenswrapper[4897]: I0228 14:30:00.340260 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f-secret-volume\") pod 
\"collect-profiles-29538150-ml6s9\" (UID: \"5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538150-ml6s9" Feb 28 14:30:00 crc kubenswrapper[4897]: I0228 14:30:00.346173 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj9rb\" (UniqueName: \"kubernetes.io/projected/5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f-kube-api-access-hj9rb\") pod \"collect-profiles-29538150-ml6s9\" (UID: \"5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538150-ml6s9" Feb 28 14:30:00 crc kubenswrapper[4897]: I0228 14:30:00.433892 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2swvq\" (UniqueName: \"kubernetes.io/projected/2b5b0de5-b621-4aac-906d-ee2456262d86-kube-api-access-2swvq\") pod \"auto-csr-approver-29538150-9rxzk\" (UID: \"2b5b0de5-b621-4aac-906d-ee2456262d86\") " pod="openshift-infra/auto-csr-approver-29538150-9rxzk" Feb 28 14:30:00 crc kubenswrapper[4897]: I0228 14:30:00.470021 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2swvq\" (UniqueName: \"kubernetes.io/projected/2b5b0de5-b621-4aac-906d-ee2456262d86-kube-api-access-2swvq\") pod \"auto-csr-approver-29538150-9rxzk\" (UID: \"2b5b0de5-b621-4aac-906d-ee2456262d86\") " pod="openshift-infra/auto-csr-approver-29538150-9rxzk" Feb 28 14:30:00 crc kubenswrapper[4897]: I0228 14:30:00.494443 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29538150-ml6s9" Feb 28 14:30:00 crc kubenswrapper[4897]: I0228 14:30:00.580075 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538150-9rxzk" Feb 28 14:30:01 crc kubenswrapper[4897]: I0228 14:30:01.025765 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29538150-ml6s9"] Feb 28 14:30:01 crc kubenswrapper[4897]: W0228 14:30:01.030341 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a170135_b3f1_4b3d_b8c7_a9b2dbb1bc1f.slice/crio-69ab5e25d756aa62efcf853b560e7a611b7871e87887cb993b6c8ce7d4e7fe6c WatchSource:0}: Error finding container 69ab5e25d756aa62efcf853b560e7a611b7871e87887cb993b6c8ce7d4e7fe6c: Status 404 returned error can't find the container with id 69ab5e25d756aa62efcf853b560e7a611b7871e87887cb993b6c8ce7d4e7fe6c Feb 28 14:30:01 crc kubenswrapper[4897]: I0228 14:30:01.182490 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538150-9rxzk"] Feb 28 14:30:01 crc kubenswrapper[4897]: W0228 14:30:01.185765 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b5b0de5_b621_4aac_906d_ee2456262d86.slice/crio-389b1b3928199eb9335fe05449beda2e656d1f95c847dc9621f830e1e88e68e2 WatchSource:0}: Error finding container 389b1b3928199eb9335fe05449beda2e656d1f95c847dc9621f830e1e88e68e2: Status 404 returned error can't find the container with id 389b1b3928199eb9335fe05449beda2e656d1f95c847dc9621f830e1e88e68e2 Feb 28 14:30:01 crc kubenswrapper[4897]: I0228 14:30:01.258625 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538150-9rxzk" event={"ID":"2b5b0de5-b621-4aac-906d-ee2456262d86","Type":"ContainerStarted","Data":"389b1b3928199eb9335fe05449beda2e656d1f95c847dc9621f830e1e88e68e2"} Feb 28 14:30:01 crc kubenswrapper[4897]: I0228 14:30:01.260276 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29538150-ml6s9" event={"ID":"5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f","Type":"ContainerStarted","Data":"38f4bdb07c7b28baae4c3dc760f4eca2943813d2011c5b1a4ad12f966a844742"} Feb 28 14:30:01 crc kubenswrapper[4897]: I0228 14:30:01.260318 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29538150-ml6s9" event={"ID":"5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f","Type":"ContainerStarted","Data":"69ab5e25d756aa62efcf853b560e7a611b7871e87887cb993b6c8ce7d4e7fe6c"} Feb 28 14:30:01 crc kubenswrapper[4897]: I0228 14:30:01.281706 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29538150-ml6s9" podStartSLOduration=1.281686077 podStartE2EDuration="1.281686077s" podCreationTimestamp="2026-02-28 14:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 14:30:01.276086219 +0000 UTC m=+4415.518406896" watchObservedRunningTime="2026-02-28 14:30:01.281686077 +0000 UTC m=+4415.524006744" Feb 28 14:30:02 crc kubenswrapper[4897]: I0228 14:30:02.280665 4897 generic.go:334] "Generic (PLEG): container finished" podID="5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f" containerID="38f4bdb07c7b28baae4c3dc760f4eca2943813d2011c5b1a4ad12f966a844742" exitCode=0 Feb 28 14:30:02 crc kubenswrapper[4897]: I0228 14:30:02.282217 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29538150-ml6s9" event={"ID":"5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f","Type":"ContainerDied","Data":"38f4bdb07c7b28baae4c3dc760f4eca2943813d2011c5b1a4ad12f966a844742"} Feb 28 14:30:03 crc kubenswrapper[4897]: I0228 14:30:03.675365 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29538150-ml6s9" Feb 28 14:30:03 crc kubenswrapper[4897]: I0228 14:30:03.718287 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f-secret-volume\") pod \"5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f\" (UID: \"5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f\") " Feb 28 14:30:03 crc kubenswrapper[4897]: I0228 14:30:03.718403 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hj9rb\" (UniqueName: \"kubernetes.io/projected/5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f-kube-api-access-hj9rb\") pod \"5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f\" (UID: \"5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f\") " Feb 28 14:30:03 crc kubenswrapper[4897]: I0228 14:30:03.718438 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f-config-volume\") pod \"5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f\" (UID: \"5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f\") " Feb 28 14:30:03 crc kubenswrapper[4897]: I0228 14:30:03.719284 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f-config-volume" (OuterVolumeSpecName: "config-volume") pod "5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f" (UID: "5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 14:30:03 crc kubenswrapper[4897]: I0228 14:30:03.730485 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f" (UID: "5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 14:30:03 crc kubenswrapper[4897]: I0228 14:30:03.730516 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f-kube-api-access-hj9rb" (OuterVolumeSpecName: "kube-api-access-hj9rb") pod "5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f" (UID: "5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f"). InnerVolumeSpecName "kube-api-access-hj9rb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:30:03 crc kubenswrapper[4897]: I0228 14:30:03.820199 4897 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 28 14:30:03 crc kubenswrapper[4897]: I0228 14:30:03.820226 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hj9rb\" (UniqueName: \"kubernetes.io/projected/5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f-kube-api-access-hj9rb\") on node \"crc\" DevicePath \"\"" Feb 28 14:30:03 crc kubenswrapper[4897]: I0228 14:30:03.820236 4897 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f-config-volume\") on node \"crc\" DevicePath \"\"" Feb 28 14:30:04 crc kubenswrapper[4897]: I0228 14:30:04.306911 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29538150-ml6s9" event={"ID":"5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f","Type":"ContainerDied","Data":"69ab5e25d756aa62efcf853b560e7a611b7871e87887cb993b6c8ce7d4e7fe6c"} Feb 28 14:30:04 crc kubenswrapper[4897]: I0228 14:30:04.306964 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69ab5e25d756aa62efcf853b560e7a611b7871e87887cb993b6c8ce7d4e7fe6c" Feb 28 14:30:04 crc kubenswrapper[4897]: I0228 14:30:04.307008 4897 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29538150-ml6s9" Feb 28 14:30:04 crc kubenswrapper[4897]: I0228 14:30:04.400051 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29538105-hrdzc"] Feb 28 14:30:04 crc kubenswrapper[4897]: I0228 14:30:04.407921 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29538105-hrdzc"] Feb 28 14:30:04 crc kubenswrapper[4897]: I0228 14:30:04.470045 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8caf2334-e6eb-4ddd-a189-8fc52e0d07b7" path="/var/lib/kubelet/pods/8caf2334-e6eb-4ddd-a189-8fc52e0d07b7/volumes" Feb 28 14:30:06 crc kubenswrapper[4897]: I0228 14:30:06.472149 4897 scope.go:117] "RemoveContainer" containerID="107c1d637bec2c46fac9478528cdf86a8ecfe0eb570742ec86168e20c7b24d89" Feb 28 14:30:06 crc kubenswrapper[4897]: E0228 14:30:06.473164 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:30:19 crc kubenswrapper[4897]: I0228 14:30:19.457158 4897 scope.go:117] "RemoveContainer" containerID="107c1d637bec2c46fac9478528cdf86a8ecfe0eb570742ec86168e20c7b24d89" Feb 28 14:30:19 crc kubenswrapper[4897]: E0228 14:30:19.458222 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:30:33 crc kubenswrapper[4897]: I0228 14:30:33.456703 4897 scope.go:117] "RemoveContainer" containerID="107c1d637bec2c46fac9478528cdf86a8ecfe0eb570742ec86168e20c7b24d89" Feb 28 14:30:33 crc kubenswrapper[4897]: E0228 14:30:33.458010 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:30:39 crc kubenswrapper[4897]: I0228 14:30:39.067626 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l72jv"] Feb 28 14:30:39 crc kubenswrapper[4897]: E0228 14:30:39.068442 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f" containerName="collect-profiles" Feb 28 14:30:39 crc kubenswrapper[4897]: I0228 14:30:39.068454 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f" containerName="collect-profiles" Feb 28 14:30:39 crc kubenswrapper[4897]: I0228 14:30:39.068658 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a170135-b3f1-4b3d-b8c7-a9b2dbb1bc1f" containerName="collect-profiles" Feb 28 14:30:39 crc kubenswrapper[4897]: I0228 14:30:39.072284 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l72jv" Feb 28 14:30:39 crc kubenswrapper[4897]: I0228 14:30:39.086630 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l72jv"] Feb 28 14:30:39 crc kubenswrapper[4897]: I0228 14:30:39.211156 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0-catalog-content\") pod \"redhat-marketplace-l72jv\" (UID: \"333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0\") " pod="openshift-marketplace/redhat-marketplace-l72jv" Feb 28 14:30:39 crc kubenswrapper[4897]: I0228 14:30:39.211537 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0-utilities\") pod \"redhat-marketplace-l72jv\" (UID: \"333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0\") " pod="openshift-marketplace/redhat-marketplace-l72jv" Feb 28 14:30:39 crc kubenswrapper[4897]: I0228 14:30:39.211595 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9hkz\" (UniqueName: \"kubernetes.io/projected/333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0-kube-api-access-r9hkz\") pod \"redhat-marketplace-l72jv\" (UID: \"333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0\") " pod="openshift-marketplace/redhat-marketplace-l72jv" Feb 28 14:30:39 crc kubenswrapper[4897]: I0228 14:30:39.319502 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9hkz\" (UniqueName: \"kubernetes.io/projected/333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0-kube-api-access-r9hkz\") pod \"redhat-marketplace-l72jv\" (UID: \"333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0\") " pod="openshift-marketplace/redhat-marketplace-l72jv" Feb 28 14:30:39 crc kubenswrapper[4897]: I0228 14:30:39.319931 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0-catalog-content\") pod \"redhat-marketplace-l72jv\" (UID: \"333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0\") " pod="openshift-marketplace/redhat-marketplace-l72jv" Feb 28 14:30:39 crc kubenswrapper[4897]: I0228 14:30:39.320181 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0-utilities\") pod \"redhat-marketplace-l72jv\" (UID: \"333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0\") " pod="openshift-marketplace/redhat-marketplace-l72jv" Feb 28 14:30:39 crc kubenswrapper[4897]: I0228 14:30:39.320864 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0-utilities\") pod \"redhat-marketplace-l72jv\" (UID: \"333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0\") " pod="openshift-marketplace/redhat-marketplace-l72jv" Feb 28 14:30:39 crc kubenswrapper[4897]: I0228 14:30:39.321639 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0-catalog-content\") pod \"redhat-marketplace-l72jv\" (UID: \"333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0\") " pod="openshift-marketplace/redhat-marketplace-l72jv" Feb 28 14:30:39 crc kubenswrapper[4897]: I0228 14:30:39.354152 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9hkz\" (UniqueName: \"kubernetes.io/projected/333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0-kube-api-access-r9hkz\") pod \"redhat-marketplace-l72jv\" (UID: \"333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0\") " pod="openshift-marketplace/redhat-marketplace-l72jv" Feb 28 14:30:39 crc kubenswrapper[4897]: I0228 14:30:39.412079 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l72jv" Feb 28 14:30:39 crc kubenswrapper[4897]: I0228 14:30:39.862337 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l72jv"] Feb 28 14:30:40 crc kubenswrapper[4897]: I0228 14:30:40.744163 4897 generic.go:334] "Generic (PLEG): container finished" podID="333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0" containerID="547c7d40af7ddde204f38f20cd0d6108af711ba2bead7dd52939f13927ed4450" exitCode=0 Feb 28 14:30:40 crc kubenswrapper[4897]: I0228 14:30:40.744274 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l72jv" event={"ID":"333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0","Type":"ContainerDied","Data":"547c7d40af7ddde204f38f20cd0d6108af711ba2bead7dd52939f13927ed4450"} Feb 28 14:30:40 crc kubenswrapper[4897]: I0228 14:30:40.744632 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l72jv" event={"ID":"333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0","Type":"ContainerStarted","Data":"478b81ff406f84fb6df6cfcb6d5f0ce874da9d4c5671f5ca12b7d531872d4a53"} Feb 28 14:30:41 crc kubenswrapper[4897]: I0228 14:30:41.757992 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l72jv" event={"ID":"333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0","Type":"ContainerStarted","Data":"29bb5d79700a4e82121435551640ad3d0b754de678e903f3f514bcb8569aee6a"} Feb 28 14:30:42 crc kubenswrapper[4897]: I0228 14:30:42.767821 4897 generic.go:334] "Generic (PLEG): container finished" podID="333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0" containerID="29bb5d79700a4e82121435551640ad3d0b754de678e903f3f514bcb8569aee6a" exitCode=0 Feb 28 14:30:42 crc kubenswrapper[4897]: I0228 14:30:42.767872 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l72jv" 
event={"ID":"333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0","Type":"ContainerDied","Data":"29bb5d79700a4e82121435551640ad3d0b754de678e903f3f514bcb8569aee6a"} Feb 28 14:30:43 crc kubenswrapper[4897]: I0228 14:30:43.781531 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l72jv" event={"ID":"333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0","Type":"ContainerStarted","Data":"e7f717f7a3eccca2e897690a17a7a96755da6a86ba96d77f0cc7e166f0e37842"} Feb 28 14:30:43 crc kubenswrapper[4897]: I0228 14:30:43.817193 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l72jv" podStartSLOduration=2.283605577 podStartE2EDuration="4.817167864s" podCreationTimestamp="2026-02-28 14:30:39 +0000 UTC" firstStartedPulling="2026-02-28 14:30:40.746713924 +0000 UTC m=+4454.989034581" lastFinishedPulling="2026-02-28 14:30:43.280276211 +0000 UTC m=+4457.522596868" observedRunningTime="2026-02-28 14:30:43.80140359 +0000 UTC m=+4458.043724287" watchObservedRunningTime="2026-02-28 14:30:43.817167864 +0000 UTC m=+4458.059488541" Feb 28 14:30:44 crc kubenswrapper[4897]: I0228 14:30:44.456987 4897 scope.go:117] "RemoveContainer" containerID="107c1d637bec2c46fac9478528cdf86a8ecfe0eb570742ec86168e20c7b24d89" Feb 28 14:30:44 crc kubenswrapper[4897]: E0228 14:30:44.457384 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:30:49 crc kubenswrapper[4897]: I0228 14:30:49.177239 4897 scope.go:117] "RemoveContainer" containerID="f90e4d828e4817ba4cb45d75eb902c51503dd8eb932a81cad24635662a4fd9c6" Feb 28 14:30:49 crc kubenswrapper[4897]: 
I0228 14:30:49.412881 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l72jv" Feb 28 14:30:49 crc kubenswrapper[4897]: I0228 14:30:49.412947 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l72jv" Feb 28 14:30:49 crc kubenswrapper[4897]: I0228 14:30:49.472484 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l72jv" Feb 28 14:30:50 crc kubenswrapper[4897]: I0228 14:30:50.559858 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l72jv" Feb 28 14:30:50 crc kubenswrapper[4897]: I0228 14:30:50.615503 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l72jv"] Feb 28 14:30:51 crc kubenswrapper[4897]: I0228 14:30:51.885239 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l72jv" podUID="333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0" containerName="registry-server" containerID="cri-o://e7f717f7a3eccca2e897690a17a7a96755da6a86ba96d77f0cc7e166f0e37842" gracePeriod=2 Feb 28 14:30:52 crc kubenswrapper[4897]: I0228 14:30:52.456366 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l72jv" Feb 28 14:30:52 crc kubenswrapper[4897]: I0228 14:30:52.459699 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0-catalog-content\") pod \"333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0\" (UID: \"333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0\") " Feb 28 14:30:52 crc kubenswrapper[4897]: I0228 14:30:52.459789 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9hkz\" (UniqueName: \"kubernetes.io/projected/333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0-kube-api-access-r9hkz\") pod \"333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0\" (UID: \"333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0\") " Feb 28 14:30:52 crc kubenswrapper[4897]: I0228 14:30:52.459867 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0-utilities\") pod \"333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0\" (UID: \"333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0\") " Feb 28 14:30:52 crc kubenswrapper[4897]: I0228 14:30:52.461172 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0-utilities" (OuterVolumeSpecName: "utilities") pod "333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0" (UID: "333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:30:52 crc kubenswrapper[4897]: I0228 14:30:52.467244 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0-kube-api-access-r9hkz" (OuterVolumeSpecName: "kube-api-access-r9hkz") pod "333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0" (UID: "333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0"). InnerVolumeSpecName "kube-api-access-r9hkz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:30:52 crc kubenswrapper[4897]: I0228 14:30:52.502932 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0" (UID: "333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:30:52 crc kubenswrapper[4897]: I0228 14:30:52.562009 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 14:30:52 crc kubenswrapper[4897]: I0228 14:30:52.562062 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9hkz\" (UniqueName: \"kubernetes.io/projected/333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0-kube-api-access-r9hkz\") on node \"crc\" DevicePath \"\"" Feb 28 14:30:52 crc kubenswrapper[4897]: I0228 14:30:52.562072 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 14:30:52 crc kubenswrapper[4897]: I0228 14:30:52.904353 4897 generic.go:334] "Generic (PLEG): container finished" podID="333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0" containerID="e7f717f7a3eccca2e897690a17a7a96755da6a86ba96d77f0cc7e166f0e37842" exitCode=0 Feb 28 14:30:52 crc kubenswrapper[4897]: I0228 14:30:52.904427 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l72jv" Feb 28 14:30:52 crc kubenswrapper[4897]: I0228 14:30:52.904448 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l72jv" event={"ID":"333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0","Type":"ContainerDied","Data":"e7f717f7a3eccca2e897690a17a7a96755da6a86ba96d77f0cc7e166f0e37842"} Feb 28 14:30:52 crc kubenswrapper[4897]: I0228 14:30:52.906923 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l72jv" event={"ID":"333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0","Type":"ContainerDied","Data":"478b81ff406f84fb6df6cfcb6d5f0ce874da9d4c5671f5ca12b7d531872d4a53"} Feb 28 14:30:52 crc kubenswrapper[4897]: I0228 14:30:52.906990 4897 scope.go:117] "RemoveContainer" containerID="e7f717f7a3eccca2e897690a17a7a96755da6a86ba96d77f0cc7e166f0e37842" Feb 28 14:30:52 crc kubenswrapper[4897]: I0228 14:30:52.939831 4897 scope.go:117] "RemoveContainer" containerID="29bb5d79700a4e82121435551640ad3d0b754de678e903f3f514bcb8569aee6a" Feb 28 14:30:52 crc kubenswrapper[4897]: I0228 14:30:52.960392 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l72jv"] Feb 28 14:30:52 crc kubenswrapper[4897]: I0228 14:30:52.968722 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l72jv"] Feb 28 14:30:52 crc kubenswrapper[4897]: I0228 14:30:52.982819 4897 scope.go:117] "RemoveContainer" containerID="547c7d40af7ddde204f38f20cd0d6108af711ba2bead7dd52939f13927ed4450" Feb 28 14:30:53 crc kubenswrapper[4897]: I0228 14:30:53.031089 4897 scope.go:117] "RemoveContainer" containerID="e7f717f7a3eccca2e897690a17a7a96755da6a86ba96d77f0cc7e166f0e37842" Feb 28 14:30:53 crc kubenswrapper[4897]: E0228 14:30:53.037117 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e7f717f7a3eccca2e897690a17a7a96755da6a86ba96d77f0cc7e166f0e37842\": container with ID starting with e7f717f7a3eccca2e897690a17a7a96755da6a86ba96d77f0cc7e166f0e37842 not found: ID does not exist" containerID="e7f717f7a3eccca2e897690a17a7a96755da6a86ba96d77f0cc7e166f0e37842" Feb 28 14:30:53 crc kubenswrapper[4897]: I0228 14:30:53.037253 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7f717f7a3eccca2e897690a17a7a96755da6a86ba96d77f0cc7e166f0e37842"} err="failed to get container status \"e7f717f7a3eccca2e897690a17a7a96755da6a86ba96d77f0cc7e166f0e37842\": rpc error: code = NotFound desc = could not find container \"e7f717f7a3eccca2e897690a17a7a96755da6a86ba96d77f0cc7e166f0e37842\": container with ID starting with e7f717f7a3eccca2e897690a17a7a96755da6a86ba96d77f0cc7e166f0e37842 not found: ID does not exist" Feb 28 14:30:53 crc kubenswrapper[4897]: I0228 14:30:53.037353 4897 scope.go:117] "RemoveContainer" containerID="29bb5d79700a4e82121435551640ad3d0b754de678e903f3f514bcb8569aee6a" Feb 28 14:30:53 crc kubenswrapper[4897]: E0228 14:30:53.039057 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29bb5d79700a4e82121435551640ad3d0b754de678e903f3f514bcb8569aee6a\": container with ID starting with 29bb5d79700a4e82121435551640ad3d0b754de678e903f3f514bcb8569aee6a not found: ID does not exist" containerID="29bb5d79700a4e82121435551640ad3d0b754de678e903f3f514bcb8569aee6a" Feb 28 14:30:53 crc kubenswrapper[4897]: I0228 14:30:53.039103 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29bb5d79700a4e82121435551640ad3d0b754de678e903f3f514bcb8569aee6a"} err="failed to get container status \"29bb5d79700a4e82121435551640ad3d0b754de678e903f3f514bcb8569aee6a\": rpc error: code = NotFound desc = could not find container \"29bb5d79700a4e82121435551640ad3d0b754de678e903f3f514bcb8569aee6a\": container with ID 
starting with 29bb5d79700a4e82121435551640ad3d0b754de678e903f3f514bcb8569aee6a not found: ID does not exist" Feb 28 14:30:53 crc kubenswrapper[4897]: I0228 14:30:53.039129 4897 scope.go:117] "RemoveContainer" containerID="547c7d40af7ddde204f38f20cd0d6108af711ba2bead7dd52939f13927ed4450" Feb 28 14:30:53 crc kubenswrapper[4897]: E0228 14:30:53.039783 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"547c7d40af7ddde204f38f20cd0d6108af711ba2bead7dd52939f13927ed4450\": container with ID starting with 547c7d40af7ddde204f38f20cd0d6108af711ba2bead7dd52939f13927ed4450 not found: ID does not exist" containerID="547c7d40af7ddde204f38f20cd0d6108af711ba2bead7dd52939f13927ed4450" Feb 28 14:30:53 crc kubenswrapper[4897]: I0228 14:30:53.039877 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"547c7d40af7ddde204f38f20cd0d6108af711ba2bead7dd52939f13927ed4450"} err="failed to get container status \"547c7d40af7ddde204f38f20cd0d6108af711ba2bead7dd52939f13927ed4450\": rpc error: code = NotFound desc = could not find container \"547c7d40af7ddde204f38f20cd0d6108af711ba2bead7dd52939f13927ed4450\": container with ID starting with 547c7d40af7ddde204f38f20cd0d6108af711ba2bead7dd52939f13927ed4450 not found: ID does not exist" Feb 28 14:30:54 crc kubenswrapper[4897]: I0228 14:30:54.491795 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0" path="/var/lib/kubelet/pods/333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0/volumes" Feb 28 14:30:55 crc kubenswrapper[4897]: I0228 14:30:55.456677 4897 scope.go:117] "RemoveContainer" containerID="107c1d637bec2c46fac9478528cdf86a8ecfe0eb570742ec86168e20c7b24d89" Feb 28 14:30:55 crc kubenswrapper[4897]: E0228 14:30:55.457161 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:31:02 crc kubenswrapper[4897]: I0228 14:31:02.018053 4897 generic.go:334] "Generic (PLEG): container finished" podID="2b5b0de5-b621-4aac-906d-ee2456262d86" containerID="1032d96ba745f0245ac1167d1d65bf9cbafc3a2bd0be6172209f05591c09b5b5" exitCode=0 Feb 28 14:31:02 crc kubenswrapper[4897]: I0228 14:31:02.018145 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538150-9rxzk" event={"ID":"2b5b0de5-b621-4aac-906d-ee2456262d86","Type":"ContainerDied","Data":"1032d96ba745f0245ac1167d1d65bf9cbafc3a2bd0be6172209f05591c09b5b5"} Feb 28 14:31:03 crc kubenswrapper[4897]: I0228 14:31:03.632650 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538150-9rxzk" Feb 28 14:31:03 crc kubenswrapper[4897]: I0228 14:31:03.739559 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2swvq\" (UniqueName: \"kubernetes.io/projected/2b5b0de5-b621-4aac-906d-ee2456262d86-kube-api-access-2swvq\") pod \"2b5b0de5-b621-4aac-906d-ee2456262d86\" (UID: \"2b5b0de5-b621-4aac-906d-ee2456262d86\") " Feb 28 14:31:03 crc kubenswrapper[4897]: I0228 14:31:03.748778 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b5b0de5-b621-4aac-906d-ee2456262d86-kube-api-access-2swvq" (OuterVolumeSpecName: "kube-api-access-2swvq") pod "2b5b0de5-b621-4aac-906d-ee2456262d86" (UID: "2b5b0de5-b621-4aac-906d-ee2456262d86"). InnerVolumeSpecName "kube-api-access-2swvq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:31:03 crc kubenswrapper[4897]: I0228 14:31:03.842141 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2swvq\" (UniqueName: \"kubernetes.io/projected/2b5b0de5-b621-4aac-906d-ee2456262d86-kube-api-access-2swvq\") on node \"crc\" DevicePath \"\"" Feb 28 14:31:04 crc kubenswrapper[4897]: I0228 14:31:04.044168 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538150-9rxzk" event={"ID":"2b5b0de5-b621-4aac-906d-ee2456262d86","Type":"ContainerDied","Data":"389b1b3928199eb9335fe05449beda2e656d1f95c847dc9621f830e1e88e68e2"} Feb 28 14:31:04 crc kubenswrapper[4897]: I0228 14:31:04.044223 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="389b1b3928199eb9335fe05449beda2e656d1f95c847dc9621f830e1e88e68e2" Feb 28 14:31:04 crc kubenswrapper[4897]: I0228 14:31:04.044301 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538150-9rxzk" Feb 28 14:31:04 crc kubenswrapper[4897]: E0228 14:31:04.080681 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b5b0de5_b621_4aac_906d_ee2456262d86.slice/crio-389b1b3928199eb9335fe05449beda2e656d1f95c847dc9621f830e1e88e68e2\": RecentStats: unable to find data in memory cache]" Feb 28 14:31:04 crc kubenswrapper[4897]: I0228 14:31:04.739712 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538144-5qbb6"] Feb 28 14:31:04 crc kubenswrapper[4897]: I0228 14:31:04.751773 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538144-5qbb6"] Feb 28 14:31:06 crc kubenswrapper[4897]: I0228 14:31:06.476024 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="00ce0843-eb5a-4122-9dbc-2d12a37c310d" path="/var/lib/kubelet/pods/00ce0843-eb5a-4122-9dbc-2d12a37c310d/volumes" Feb 28 14:31:08 crc kubenswrapper[4897]: I0228 14:31:08.457025 4897 scope.go:117] "RemoveContainer" containerID="107c1d637bec2c46fac9478528cdf86a8ecfe0eb570742ec86168e20c7b24d89" Feb 28 14:31:08 crc kubenswrapper[4897]: E0228 14:31:08.457812 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:31:22 crc kubenswrapper[4897]: I0228 14:31:22.457093 4897 scope.go:117] "RemoveContainer" containerID="107c1d637bec2c46fac9478528cdf86a8ecfe0eb570742ec86168e20c7b24d89" Feb 28 14:31:22 crc kubenswrapper[4897]: E0228 14:31:22.458020 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:31:36 crc kubenswrapper[4897]: I0228 14:31:36.469587 4897 scope.go:117] "RemoveContainer" containerID="107c1d637bec2c46fac9478528cdf86a8ecfe0eb570742ec86168e20c7b24d89" Feb 28 14:31:36 crc kubenswrapper[4897]: E0228 14:31:36.470663 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:31:49 crc kubenswrapper[4897]: I0228 14:31:49.238517 4897 scope.go:117] "RemoveContainer" containerID="ec91b3d22db1ca828b9e8de1ce2c0148b92f423ad8a83279d5c23af57ee009ce" Feb 28 14:31:49 crc kubenswrapper[4897]: I0228 14:31:49.456646 4897 scope.go:117] "RemoveContainer" containerID="107c1d637bec2c46fac9478528cdf86a8ecfe0eb570742ec86168e20c7b24d89" Feb 28 14:31:49 crc kubenswrapper[4897]: E0228 14:31:49.456903 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:32:00 crc kubenswrapper[4897]: I0228 14:32:00.166256 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538152-nhj6s"] Feb 28 14:32:00 crc kubenswrapper[4897]: E0228 14:32:00.167455 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0" containerName="extract-content" Feb 28 14:32:00 crc kubenswrapper[4897]: I0228 14:32:00.167479 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0" containerName="extract-content" Feb 28 14:32:00 crc kubenswrapper[4897]: E0228 14:32:00.167508 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0" containerName="registry-server" Feb 28 14:32:00 crc kubenswrapper[4897]: I0228 14:32:00.167520 4897 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0" containerName="registry-server" Feb 28 14:32:00 crc kubenswrapper[4897]: E0228 14:32:00.167547 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0" containerName="extract-utilities" Feb 28 14:32:00 crc kubenswrapper[4897]: I0228 14:32:00.167561 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0" containerName="extract-utilities" Feb 28 14:32:00 crc kubenswrapper[4897]: E0228 14:32:00.167578 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b5b0de5-b621-4aac-906d-ee2456262d86" containerName="oc" Feb 28 14:32:00 crc kubenswrapper[4897]: I0228 14:32:00.167589 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b5b0de5-b621-4aac-906d-ee2456262d86" containerName="oc" Feb 28 14:32:00 crc kubenswrapper[4897]: I0228 14:32:00.167929 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b5b0de5-b621-4aac-906d-ee2456262d86" containerName="oc" Feb 28 14:32:00 crc kubenswrapper[4897]: I0228 14:32:00.167955 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="333ceb74-8be9-4e79-bf55-ac9e2e0a4cc0" containerName="registry-server" Feb 28 14:32:00 crc kubenswrapper[4897]: I0228 14:32:00.169346 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538152-nhj6s" Feb 28 14:32:00 crc kubenswrapper[4897]: I0228 14:32:00.175977 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 14:32:00 crc kubenswrapper[4897]: I0228 14:32:00.176209 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 14:32:00 crc kubenswrapper[4897]: I0228 14:32:00.176347 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 14:32:00 crc kubenswrapper[4897]: I0228 14:32:00.205550 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538152-nhj6s"] Feb 28 14:32:00 crc kubenswrapper[4897]: I0228 14:32:00.289294 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcglm\" (UniqueName: \"kubernetes.io/projected/90dbd68b-a86b-4d5d-8abf-f2a8de88cde8-kube-api-access-dcglm\") pod \"auto-csr-approver-29538152-nhj6s\" (UID: \"90dbd68b-a86b-4d5d-8abf-f2a8de88cde8\") " pod="openshift-infra/auto-csr-approver-29538152-nhj6s" Feb 28 14:32:00 crc kubenswrapper[4897]: I0228 14:32:00.391556 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcglm\" (UniqueName: \"kubernetes.io/projected/90dbd68b-a86b-4d5d-8abf-f2a8de88cde8-kube-api-access-dcglm\") pod \"auto-csr-approver-29538152-nhj6s\" (UID: \"90dbd68b-a86b-4d5d-8abf-f2a8de88cde8\") " pod="openshift-infra/auto-csr-approver-29538152-nhj6s" Feb 28 14:32:00 crc kubenswrapper[4897]: I0228 14:32:00.492557 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcglm\" (UniqueName: \"kubernetes.io/projected/90dbd68b-a86b-4d5d-8abf-f2a8de88cde8-kube-api-access-dcglm\") pod \"auto-csr-approver-29538152-nhj6s\" (UID: \"90dbd68b-a86b-4d5d-8abf-f2a8de88cde8\") " 
pod="openshift-infra/auto-csr-approver-29538152-nhj6s" Feb 28 14:32:00 crc kubenswrapper[4897]: I0228 14:32:00.503804 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538152-nhj6s" Feb 28 14:32:01 crc kubenswrapper[4897]: I0228 14:32:01.034725 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538152-nhj6s"] Feb 28 14:32:01 crc kubenswrapper[4897]: I0228 14:32:01.784131 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538152-nhj6s" event={"ID":"90dbd68b-a86b-4d5d-8abf-f2a8de88cde8","Type":"ContainerStarted","Data":"9ce84506ebd1223f10c04bd8cf7b0a3436efbd908007c4c4e1e5148735eb3173"} Feb 28 14:32:01 crc kubenswrapper[4897]: E0228 14:32:01.814829 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 28 14:32:01 crc kubenswrapper[4897]: E0228 14:32:01.815021 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 14:32:01 crc kubenswrapper[4897]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 28 14:32:01 crc kubenswrapper[4897]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dcglm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29538152-nhj6s_openshift-infra(90dbd68b-a86b-4d5d-8abf-f2a8de88cde8): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 28 14:32:01 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 14:32:01 crc kubenswrapper[4897]: E0228 14:32:01.816296 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29538152-nhj6s" podUID="90dbd68b-a86b-4d5d-8abf-f2a8de88cde8" Feb 28 14:32:02 crc kubenswrapper[4897]: E0228 14:32:02.799075 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" 
pod="openshift-infra/auto-csr-approver-29538152-nhj6s" podUID="90dbd68b-a86b-4d5d-8abf-f2a8de88cde8" Feb 28 14:32:03 crc kubenswrapper[4897]: I0228 14:32:03.457366 4897 scope.go:117] "RemoveContainer" containerID="107c1d637bec2c46fac9478528cdf86a8ecfe0eb570742ec86168e20c7b24d89" Feb 28 14:32:03 crc kubenswrapper[4897]: E0228 14:32:03.458098 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:32:16 crc kubenswrapper[4897]: I0228 14:32:16.956549 4897 generic.go:334] "Generic (PLEG): container finished" podID="90dbd68b-a86b-4d5d-8abf-f2a8de88cde8" containerID="0acd8a3d2d3cf70df06a8315077113c2b85784b5e25bc0f65ca22ea56950ff84" exitCode=0 Feb 28 14:32:16 crc kubenswrapper[4897]: I0228 14:32:16.956667 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538152-nhj6s" event={"ID":"90dbd68b-a86b-4d5d-8abf-f2a8de88cde8","Type":"ContainerDied","Data":"0acd8a3d2d3cf70df06a8315077113c2b85784b5e25bc0f65ca22ea56950ff84"} Feb 28 14:32:18 crc kubenswrapper[4897]: I0228 14:32:18.419663 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538152-nhj6s" Feb 28 14:32:18 crc kubenswrapper[4897]: I0228 14:32:18.456920 4897 scope.go:117] "RemoveContainer" containerID="107c1d637bec2c46fac9478528cdf86a8ecfe0eb570742ec86168e20c7b24d89" Feb 28 14:32:18 crc kubenswrapper[4897]: E0228 14:32:18.457541 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:32:18 crc kubenswrapper[4897]: I0228 14:32:18.562518 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcglm\" (UniqueName: \"kubernetes.io/projected/90dbd68b-a86b-4d5d-8abf-f2a8de88cde8-kube-api-access-dcglm\") pod \"90dbd68b-a86b-4d5d-8abf-f2a8de88cde8\" (UID: \"90dbd68b-a86b-4d5d-8abf-f2a8de88cde8\") " Feb 28 14:32:18 crc kubenswrapper[4897]: I0228 14:32:18.571739 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90dbd68b-a86b-4d5d-8abf-f2a8de88cde8-kube-api-access-dcglm" (OuterVolumeSpecName: "kube-api-access-dcglm") pod "90dbd68b-a86b-4d5d-8abf-f2a8de88cde8" (UID: "90dbd68b-a86b-4d5d-8abf-f2a8de88cde8"). InnerVolumeSpecName "kube-api-access-dcglm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:32:18 crc kubenswrapper[4897]: I0228 14:32:18.666166 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcglm\" (UniqueName: \"kubernetes.io/projected/90dbd68b-a86b-4d5d-8abf-f2a8de88cde8-kube-api-access-dcglm\") on node \"crc\" DevicePath \"\"" Feb 28 14:32:18 crc kubenswrapper[4897]: I0228 14:32:18.988998 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538152-nhj6s" event={"ID":"90dbd68b-a86b-4d5d-8abf-f2a8de88cde8","Type":"ContainerDied","Data":"9ce84506ebd1223f10c04bd8cf7b0a3436efbd908007c4c4e1e5148735eb3173"} Feb 28 14:32:18 crc kubenswrapper[4897]: I0228 14:32:18.989047 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ce84506ebd1223f10c04bd8cf7b0a3436efbd908007c4c4e1e5148735eb3173" Feb 28 14:32:18 crc kubenswrapper[4897]: I0228 14:32:18.989092 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538152-nhj6s" Feb 28 14:32:19 crc kubenswrapper[4897]: I0228 14:32:19.502291 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538146-8lz52"] Feb 28 14:32:19 crc kubenswrapper[4897]: I0228 14:32:19.512089 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538146-8lz52"] Feb 28 14:32:20 crc kubenswrapper[4897]: I0228 14:32:20.492159 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6761f206-6d6b-441f-8753-215486d008d9" path="/var/lib/kubelet/pods/6761f206-6d6b-441f-8753-215486d008d9/volumes" Feb 28 14:32:31 crc kubenswrapper[4897]: I0228 14:32:31.456974 4897 scope.go:117] "RemoveContainer" containerID="107c1d637bec2c46fac9478528cdf86a8ecfe0eb570742ec86168e20c7b24d89" Feb 28 14:32:31 crc kubenswrapper[4897]: E0228 14:32:31.458044 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:32:43 crc kubenswrapper[4897]: I0228 14:32:43.456959 4897 scope.go:117] "RemoveContainer" containerID="107c1d637bec2c46fac9478528cdf86a8ecfe0eb570742ec86168e20c7b24d89" Feb 28 14:32:43 crc kubenswrapper[4897]: E0228 14:32:43.463603 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:32:49 crc kubenswrapper[4897]: I0228 14:32:49.350776 4897 scope.go:117] "RemoveContainer" containerID="e60fb8771f94e322adc6a8616bf52a58eed0315e6da2ddd99d4d9713abd4eb0e" Feb 28 14:32:55 crc kubenswrapper[4897]: I0228 14:32:55.456641 4897 scope.go:117] "RemoveContainer" containerID="107c1d637bec2c46fac9478528cdf86a8ecfe0eb570742ec86168e20c7b24d89" Feb 28 14:32:55 crc kubenswrapper[4897]: E0228 14:32:55.457587 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:33:09 crc kubenswrapper[4897]: I0228 14:33:09.458361 4897 scope.go:117] "RemoveContainer" 
containerID="107c1d637bec2c46fac9478528cdf86a8ecfe0eb570742ec86168e20c7b24d89" Feb 28 14:33:09 crc kubenswrapper[4897]: E0228 14:33:09.459291 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:33:11 crc kubenswrapper[4897]: I0228 14:33:11.508489 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xmdk4"] Feb 28 14:33:11 crc kubenswrapper[4897]: E0228 14:33:11.509617 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90dbd68b-a86b-4d5d-8abf-f2a8de88cde8" containerName="oc" Feb 28 14:33:11 crc kubenswrapper[4897]: I0228 14:33:11.509641 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="90dbd68b-a86b-4d5d-8abf-f2a8de88cde8" containerName="oc" Feb 28 14:33:11 crc kubenswrapper[4897]: I0228 14:33:11.510200 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="90dbd68b-a86b-4d5d-8abf-f2a8de88cde8" containerName="oc" Feb 28 14:33:11 crc kubenswrapper[4897]: I0228 14:33:11.513058 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xmdk4" Feb 28 14:33:11 crc kubenswrapper[4897]: I0228 14:33:11.555960 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xmdk4"] Feb 28 14:33:11 crc kubenswrapper[4897]: I0228 14:33:11.715078 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/688306b5-875c-46b1-8ee6-7b14b8dc6f9b-utilities\") pod \"redhat-operators-xmdk4\" (UID: \"688306b5-875c-46b1-8ee6-7b14b8dc6f9b\") " pod="openshift-marketplace/redhat-operators-xmdk4" Feb 28 14:33:11 crc kubenswrapper[4897]: I0228 14:33:11.715222 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h78qp\" (UniqueName: \"kubernetes.io/projected/688306b5-875c-46b1-8ee6-7b14b8dc6f9b-kube-api-access-h78qp\") pod \"redhat-operators-xmdk4\" (UID: \"688306b5-875c-46b1-8ee6-7b14b8dc6f9b\") " pod="openshift-marketplace/redhat-operators-xmdk4" Feb 28 14:33:11 crc kubenswrapper[4897]: I0228 14:33:11.715849 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/688306b5-875c-46b1-8ee6-7b14b8dc6f9b-catalog-content\") pod \"redhat-operators-xmdk4\" (UID: \"688306b5-875c-46b1-8ee6-7b14b8dc6f9b\") " pod="openshift-marketplace/redhat-operators-xmdk4" Feb 28 14:33:11 crc kubenswrapper[4897]: I0228 14:33:11.817721 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/688306b5-875c-46b1-8ee6-7b14b8dc6f9b-catalog-content\") pod \"redhat-operators-xmdk4\" (UID: \"688306b5-875c-46b1-8ee6-7b14b8dc6f9b\") " pod="openshift-marketplace/redhat-operators-xmdk4" Feb 28 14:33:11 crc kubenswrapper[4897]: I0228 14:33:11.817816 4897 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/688306b5-875c-46b1-8ee6-7b14b8dc6f9b-utilities\") pod \"redhat-operators-xmdk4\" (UID: \"688306b5-875c-46b1-8ee6-7b14b8dc6f9b\") " pod="openshift-marketplace/redhat-operators-xmdk4" Feb 28 14:33:11 crc kubenswrapper[4897]: I0228 14:33:11.817946 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h78qp\" (UniqueName: \"kubernetes.io/projected/688306b5-875c-46b1-8ee6-7b14b8dc6f9b-kube-api-access-h78qp\") pod \"redhat-operators-xmdk4\" (UID: \"688306b5-875c-46b1-8ee6-7b14b8dc6f9b\") " pod="openshift-marketplace/redhat-operators-xmdk4" Feb 28 14:33:11 crc kubenswrapper[4897]: I0228 14:33:11.818224 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/688306b5-875c-46b1-8ee6-7b14b8dc6f9b-catalog-content\") pod \"redhat-operators-xmdk4\" (UID: \"688306b5-875c-46b1-8ee6-7b14b8dc6f9b\") " pod="openshift-marketplace/redhat-operators-xmdk4" Feb 28 14:33:11 crc kubenswrapper[4897]: I0228 14:33:11.818591 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/688306b5-875c-46b1-8ee6-7b14b8dc6f9b-utilities\") pod \"redhat-operators-xmdk4\" (UID: \"688306b5-875c-46b1-8ee6-7b14b8dc6f9b\") " pod="openshift-marketplace/redhat-operators-xmdk4" Feb 28 14:33:11 crc kubenswrapper[4897]: I0228 14:33:11.848716 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h78qp\" (UniqueName: \"kubernetes.io/projected/688306b5-875c-46b1-8ee6-7b14b8dc6f9b-kube-api-access-h78qp\") pod \"redhat-operators-xmdk4\" (UID: \"688306b5-875c-46b1-8ee6-7b14b8dc6f9b\") " pod="openshift-marketplace/redhat-operators-xmdk4" Feb 28 14:33:12 crc kubenswrapper[4897]: I0228 14:33:12.143671 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xmdk4" Feb 28 14:33:12 crc kubenswrapper[4897]: I0228 14:33:12.665264 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xmdk4"] Feb 28 14:33:13 crc kubenswrapper[4897]: I0228 14:33:13.617497 4897 generic.go:334] "Generic (PLEG): container finished" podID="688306b5-875c-46b1-8ee6-7b14b8dc6f9b" containerID="56f872a7ad77866239081546252b68ad71a36ef10d546e388a76c6f0a8f5a8a3" exitCode=0 Feb 28 14:33:13 crc kubenswrapper[4897]: I0228 14:33:13.617878 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xmdk4" event={"ID":"688306b5-875c-46b1-8ee6-7b14b8dc6f9b","Type":"ContainerDied","Data":"56f872a7ad77866239081546252b68ad71a36ef10d546e388a76c6f0a8f5a8a3"} Feb 28 14:33:13 crc kubenswrapper[4897]: I0228 14:33:13.617919 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xmdk4" event={"ID":"688306b5-875c-46b1-8ee6-7b14b8dc6f9b","Type":"ContainerStarted","Data":"a40a4edf20978a9a8acf1a24845f4dc72fccd5007be45c8847541f174c734d13"} Feb 28 14:33:13 crc kubenswrapper[4897]: I0228 14:33:13.620759 4897 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 14:33:14 crc kubenswrapper[4897]: I0228 14:33:14.632116 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xmdk4" event={"ID":"688306b5-875c-46b1-8ee6-7b14b8dc6f9b","Type":"ContainerStarted","Data":"06f66e1f097943a13efc2dd13126f543c274c748f5b488591b62e5bdb597b9e1"} Feb 28 14:33:20 crc kubenswrapper[4897]: I0228 14:33:20.703105 4897 generic.go:334] "Generic (PLEG): container finished" podID="688306b5-875c-46b1-8ee6-7b14b8dc6f9b" containerID="06f66e1f097943a13efc2dd13126f543c274c748f5b488591b62e5bdb597b9e1" exitCode=0 Feb 28 14:33:20 crc kubenswrapper[4897]: I0228 14:33:20.703235 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-xmdk4" event={"ID":"688306b5-875c-46b1-8ee6-7b14b8dc6f9b","Type":"ContainerDied","Data":"06f66e1f097943a13efc2dd13126f543c274c748f5b488591b62e5bdb597b9e1"} Feb 28 14:33:21 crc kubenswrapper[4897]: I0228 14:33:21.456851 4897 scope.go:117] "RemoveContainer" containerID="107c1d637bec2c46fac9478528cdf86a8ecfe0eb570742ec86168e20c7b24d89" Feb 28 14:33:21 crc kubenswrapper[4897]: E0228 14:33:21.457573 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:33:21 crc kubenswrapper[4897]: I0228 14:33:21.722637 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xmdk4" event={"ID":"688306b5-875c-46b1-8ee6-7b14b8dc6f9b","Type":"ContainerStarted","Data":"603bc70f8ae8e695e675f2ddcf189d90cc7e5bdf0fff9fc9c191929fc97851bb"} Feb 28 14:33:21 crc kubenswrapper[4897]: I0228 14:33:21.742821 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xmdk4" podStartSLOduration=3.21464188 podStartE2EDuration="10.742804788s" podCreationTimestamp="2026-02-28 14:33:11 +0000 UTC" firstStartedPulling="2026-02-28 14:33:13.620465022 +0000 UTC m=+4607.862785679" lastFinishedPulling="2026-02-28 14:33:21.14862793 +0000 UTC m=+4615.390948587" observedRunningTime="2026-02-28 14:33:21.738174958 +0000 UTC m=+4615.980495615" watchObservedRunningTime="2026-02-28 14:33:21.742804788 +0000 UTC m=+4615.985125445" Feb 28 14:33:22 crc kubenswrapper[4897]: I0228 14:33:22.144035 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-xmdk4" Feb 28 14:33:22 crc kubenswrapper[4897]: I0228 14:33:22.144096 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xmdk4" Feb 28 14:33:23 crc kubenswrapper[4897]: I0228 14:33:23.209871 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xmdk4" podUID="688306b5-875c-46b1-8ee6-7b14b8dc6f9b" containerName="registry-server" probeResult="failure" output=< Feb 28 14:33:23 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 28 14:33:23 crc kubenswrapper[4897]: > Feb 28 14:33:33 crc kubenswrapper[4897]: I0228 14:33:33.201270 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xmdk4" podUID="688306b5-875c-46b1-8ee6-7b14b8dc6f9b" containerName="registry-server" probeResult="failure" output=< Feb 28 14:33:33 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 28 14:33:33 crc kubenswrapper[4897]: > Feb 28 14:33:34 crc kubenswrapper[4897]: I0228 14:33:34.456780 4897 scope.go:117] "RemoveContainer" containerID="107c1d637bec2c46fac9478528cdf86a8ecfe0eb570742ec86168e20c7b24d89" Feb 28 14:33:34 crc kubenswrapper[4897]: E0228 14:33:34.457482 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:33:43 crc kubenswrapper[4897]: I0228 14:33:43.211900 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xmdk4" podUID="688306b5-875c-46b1-8ee6-7b14b8dc6f9b" containerName="registry-server" 
probeResult="failure" output=< Feb 28 14:33:43 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 28 14:33:43 crc kubenswrapper[4897]: > Feb 28 14:33:48 crc kubenswrapper[4897]: I0228 14:33:48.456101 4897 scope.go:117] "RemoveContainer" containerID="107c1d637bec2c46fac9478528cdf86a8ecfe0eb570742ec86168e20c7b24d89" Feb 28 14:33:48 crc kubenswrapper[4897]: E0228 14:33:48.456942 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:33:52 crc kubenswrapper[4897]: I0228 14:33:52.224453 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xmdk4" Feb 28 14:33:52 crc kubenswrapper[4897]: I0228 14:33:52.301301 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xmdk4" Feb 28 14:33:52 crc kubenswrapper[4897]: I0228 14:33:52.503261 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xmdk4"] Feb 28 14:33:54 crc kubenswrapper[4897]: I0228 14:33:54.092109 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xmdk4" podUID="688306b5-875c-46b1-8ee6-7b14b8dc6f9b" containerName="registry-server" containerID="cri-o://603bc70f8ae8e695e675f2ddcf189d90cc7e5bdf0fff9fc9c191929fc97851bb" gracePeriod=2 Feb 28 14:33:54 crc kubenswrapper[4897]: I0228 14:33:54.815056 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xmdk4" Feb 28 14:33:54 crc kubenswrapper[4897]: I0228 14:33:54.901181 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/688306b5-875c-46b1-8ee6-7b14b8dc6f9b-utilities\") pod \"688306b5-875c-46b1-8ee6-7b14b8dc6f9b\" (UID: \"688306b5-875c-46b1-8ee6-7b14b8dc6f9b\") " Feb 28 14:33:54 crc kubenswrapper[4897]: I0228 14:33:54.901410 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/688306b5-875c-46b1-8ee6-7b14b8dc6f9b-catalog-content\") pod \"688306b5-875c-46b1-8ee6-7b14b8dc6f9b\" (UID: \"688306b5-875c-46b1-8ee6-7b14b8dc6f9b\") " Feb 28 14:33:54 crc kubenswrapper[4897]: I0228 14:33:54.901461 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h78qp\" (UniqueName: \"kubernetes.io/projected/688306b5-875c-46b1-8ee6-7b14b8dc6f9b-kube-api-access-h78qp\") pod \"688306b5-875c-46b1-8ee6-7b14b8dc6f9b\" (UID: \"688306b5-875c-46b1-8ee6-7b14b8dc6f9b\") " Feb 28 14:33:54 crc kubenswrapper[4897]: I0228 14:33:54.903546 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/688306b5-875c-46b1-8ee6-7b14b8dc6f9b-utilities" (OuterVolumeSpecName: "utilities") pod "688306b5-875c-46b1-8ee6-7b14b8dc6f9b" (UID: "688306b5-875c-46b1-8ee6-7b14b8dc6f9b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:33:54 crc kubenswrapper[4897]: I0228 14:33:54.907987 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/688306b5-875c-46b1-8ee6-7b14b8dc6f9b-kube-api-access-h78qp" (OuterVolumeSpecName: "kube-api-access-h78qp") pod "688306b5-875c-46b1-8ee6-7b14b8dc6f9b" (UID: "688306b5-875c-46b1-8ee6-7b14b8dc6f9b"). InnerVolumeSpecName "kube-api-access-h78qp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:33:55 crc kubenswrapper[4897]: I0228 14:33:55.004352 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h78qp\" (UniqueName: \"kubernetes.io/projected/688306b5-875c-46b1-8ee6-7b14b8dc6f9b-kube-api-access-h78qp\") on node \"crc\" DevicePath \"\"" Feb 28 14:33:55 crc kubenswrapper[4897]: I0228 14:33:55.004508 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/688306b5-875c-46b1-8ee6-7b14b8dc6f9b-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 14:33:55 crc kubenswrapper[4897]: I0228 14:33:55.033590 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/688306b5-875c-46b1-8ee6-7b14b8dc6f9b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "688306b5-875c-46b1-8ee6-7b14b8dc6f9b" (UID: "688306b5-875c-46b1-8ee6-7b14b8dc6f9b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:33:55 crc kubenswrapper[4897]: I0228 14:33:55.103972 4897 generic.go:334] "Generic (PLEG): container finished" podID="688306b5-875c-46b1-8ee6-7b14b8dc6f9b" containerID="603bc70f8ae8e695e675f2ddcf189d90cc7e5bdf0fff9fc9c191929fc97851bb" exitCode=0 Feb 28 14:33:55 crc kubenswrapper[4897]: I0228 14:33:55.104019 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xmdk4" event={"ID":"688306b5-875c-46b1-8ee6-7b14b8dc6f9b","Type":"ContainerDied","Data":"603bc70f8ae8e695e675f2ddcf189d90cc7e5bdf0fff9fc9c191929fc97851bb"} Feb 28 14:33:55 crc kubenswrapper[4897]: I0228 14:33:55.104058 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xmdk4" event={"ID":"688306b5-875c-46b1-8ee6-7b14b8dc6f9b","Type":"ContainerDied","Data":"a40a4edf20978a9a8acf1a24845f4dc72fccd5007be45c8847541f174c734d13"} Feb 28 14:33:55 crc kubenswrapper[4897]: I0228 14:33:55.104083 
4897 scope.go:117] "RemoveContainer" containerID="603bc70f8ae8e695e675f2ddcf189d90cc7e5bdf0fff9fc9c191929fc97851bb" Feb 28 14:33:55 crc kubenswrapper[4897]: I0228 14:33:55.104085 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xmdk4" Feb 28 14:33:55 crc kubenswrapper[4897]: I0228 14:33:55.105913 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/688306b5-875c-46b1-8ee6-7b14b8dc6f9b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 14:33:55 crc kubenswrapper[4897]: I0228 14:33:55.124581 4897 scope.go:117] "RemoveContainer" containerID="06f66e1f097943a13efc2dd13126f543c274c748f5b488591b62e5bdb597b9e1" Feb 28 14:33:55 crc kubenswrapper[4897]: I0228 14:33:55.153813 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xmdk4"] Feb 28 14:33:55 crc kubenswrapper[4897]: I0228 14:33:55.162069 4897 scope.go:117] "RemoveContainer" containerID="56f872a7ad77866239081546252b68ad71a36ef10d546e388a76c6f0a8f5a8a3" Feb 28 14:33:55 crc kubenswrapper[4897]: I0228 14:33:55.167628 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xmdk4"] Feb 28 14:33:55 crc kubenswrapper[4897]: I0228 14:33:55.218912 4897 scope.go:117] "RemoveContainer" containerID="603bc70f8ae8e695e675f2ddcf189d90cc7e5bdf0fff9fc9c191929fc97851bb" Feb 28 14:33:55 crc kubenswrapper[4897]: E0228 14:33:55.219282 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"603bc70f8ae8e695e675f2ddcf189d90cc7e5bdf0fff9fc9c191929fc97851bb\": container with ID starting with 603bc70f8ae8e695e675f2ddcf189d90cc7e5bdf0fff9fc9c191929fc97851bb not found: ID does not exist" containerID="603bc70f8ae8e695e675f2ddcf189d90cc7e5bdf0fff9fc9c191929fc97851bb" Feb 28 14:33:55 crc kubenswrapper[4897]: I0228 14:33:55.219340 4897 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"603bc70f8ae8e695e675f2ddcf189d90cc7e5bdf0fff9fc9c191929fc97851bb"} err="failed to get container status \"603bc70f8ae8e695e675f2ddcf189d90cc7e5bdf0fff9fc9c191929fc97851bb\": rpc error: code = NotFound desc = could not find container \"603bc70f8ae8e695e675f2ddcf189d90cc7e5bdf0fff9fc9c191929fc97851bb\": container with ID starting with 603bc70f8ae8e695e675f2ddcf189d90cc7e5bdf0fff9fc9c191929fc97851bb not found: ID does not exist" Feb 28 14:33:55 crc kubenswrapper[4897]: I0228 14:33:55.219367 4897 scope.go:117] "RemoveContainer" containerID="06f66e1f097943a13efc2dd13126f543c274c748f5b488591b62e5bdb597b9e1" Feb 28 14:33:55 crc kubenswrapper[4897]: E0228 14:33:55.219833 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06f66e1f097943a13efc2dd13126f543c274c748f5b488591b62e5bdb597b9e1\": container with ID starting with 06f66e1f097943a13efc2dd13126f543c274c748f5b488591b62e5bdb597b9e1 not found: ID does not exist" containerID="06f66e1f097943a13efc2dd13126f543c274c748f5b488591b62e5bdb597b9e1" Feb 28 14:33:55 crc kubenswrapper[4897]: I0228 14:33:55.219860 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06f66e1f097943a13efc2dd13126f543c274c748f5b488591b62e5bdb597b9e1"} err="failed to get container status \"06f66e1f097943a13efc2dd13126f543c274c748f5b488591b62e5bdb597b9e1\": rpc error: code = NotFound desc = could not find container \"06f66e1f097943a13efc2dd13126f543c274c748f5b488591b62e5bdb597b9e1\": container with ID starting with 06f66e1f097943a13efc2dd13126f543c274c748f5b488591b62e5bdb597b9e1 not found: ID does not exist" Feb 28 14:33:55 crc kubenswrapper[4897]: I0228 14:33:55.219874 4897 scope.go:117] "RemoveContainer" containerID="56f872a7ad77866239081546252b68ad71a36ef10d546e388a76c6f0a8f5a8a3" Feb 28 14:33:55 crc kubenswrapper[4897]: E0228 
14:33:55.220055 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56f872a7ad77866239081546252b68ad71a36ef10d546e388a76c6f0a8f5a8a3\": container with ID starting with 56f872a7ad77866239081546252b68ad71a36ef10d546e388a76c6f0a8f5a8a3 not found: ID does not exist" containerID="56f872a7ad77866239081546252b68ad71a36ef10d546e388a76c6f0a8f5a8a3" Feb 28 14:33:55 crc kubenswrapper[4897]: I0228 14:33:55.220081 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56f872a7ad77866239081546252b68ad71a36ef10d546e388a76c6f0a8f5a8a3"} err="failed to get container status \"56f872a7ad77866239081546252b68ad71a36ef10d546e388a76c6f0a8f5a8a3\": rpc error: code = NotFound desc = could not find container \"56f872a7ad77866239081546252b68ad71a36ef10d546e388a76c6f0a8f5a8a3\": container with ID starting with 56f872a7ad77866239081546252b68ad71a36ef10d546e388a76c6f0a8f5a8a3 not found: ID does not exist" Feb 28 14:33:56 crc kubenswrapper[4897]: I0228 14:33:56.469592 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="688306b5-875c-46b1-8ee6-7b14b8dc6f9b" path="/var/lib/kubelet/pods/688306b5-875c-46b1-8ee6-7b14b8dc6f9b/volumes" Feb 28 14:34:00 crc kubenswrapper[4897]: I0228 14:34:00.162678 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538154-flq5h"] Feb 28 14:34:00 crc kubenswrapper[4897]: E0228 14:34:00.177734 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="688306b5-875c-46b1-8ee6-7b14b8dc6f9b" containerName="extract-utilities" Feb 28 14:34:00 crc kubenswrapper[4897]: I0228 14:34:00.178171 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="688306b5-875c-46b1-8ee6-7b14b8dc6f9b" containerName="extract-utilities" Feb 28 14:34:00 crc kubenswrapper[4897]: E0228 14:34:00.178258 4897 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="688306b5-875c-46b1-8ee6-7b14b8dc6f9b" containerName="registry-server" Feb 28 14:34:00 crc kubenswrapper[4897]: I0228 14:34:00.178358 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="688306b5-875c-46b1-8ee6-7b14b8dc6f9b" containerName="registry-server" Feb 28 14:34:00 crc kubenswrapper[4897]: E0228 14:34:00.178450 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="688306b5-875c-46b1-8ee6-7b14b8dc6f9b" containerName="extract-content" Feb 28 14:34:00 crc kubenswrapper[4897]: I0228 14:34:00.178514 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="688306b5-875c-46b1-8ee6-7b14b8dc6f9b" containerName="extract-content" Feb 28 14:34:00 crc kubenswrapper[4897]: I0228 14:34:00.178836 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="688306b5-875c-46b1-8ee6-7b14b8dc6f9b" containerName="registry-server" Feb 28 14:34:00 crc kubenswrapper[4897]: I0228 14:34:00.179791 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538154-flq5h" Feb 28 14:34:00 crc kubenswrapper[4897]: I0228 14:34:00.189731 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 14:34:00 crc kubenswrapper[4897]: I0228 14:34:00.192262 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538154-flq5h"] Feb 28 14:34:00 crc kubenswrapper[4897]: I0228 14:34:00.195050 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 14:34:00 crc kubenswrapper[4897]: I0228 14:34:00.195102 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 14:34:00 crc kubenswrapper[4897]: I0228 14:34:00.335877 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgnrf\" (UniqueName: 
\"kubernetes.io/projected/bc826503-5c33-4f4f-90a4-dbf79bc0f893-kube-api-access-kgnrf\") pod \"auto-csr-approver-29538154-flq5h\" (UID: \"bc826503-5c33-4f4f-90a4-dbf79bc0f893\") " pod="openshift-infra/auto-csr-approver-29538154-flq5h" Feb 28 14:34:00 crc kubenswrapper[4897]: I0228 14:34:00.438139 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgnrf\" (UniqueName: \"kubernetes.io/projected/bc826503-5c33-4f4f-90a4-dbf79bc0f893-kube-api-access-kgnrf\") pod \"auto-csr-approver-29538154-flq5h\" (UID: \"bc826503-5c33-4f4f-90a4-dbf79bc0f893\") " pod="openshift-infra/auto-csr-approver-29538154-flq5h" Feb 28 14:34:00 crc kubenswrapper[4897]: I0228 14:34:00.477989 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgnrf\" (UniqueName: \"kubernetes.io/projected/bc826503-5c33-4f4f-90a4-dbf79bc0f893-kube-api-access-kgnrf\") pod \"auto-csr-approver-29538154-flq5h\" (UID: \"bc826503-5c33-4f4f-90a4-dbf79bc0f893\") " pod="openshift-infra/auto-csr-approver-29538154-flq5h" Feb 28 14:34:00 crc kubenswrapper[4897]: I0228 14:34:00.505908 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538154-flq5h" Feb 28 14:34:01 crc kubenswrapper[4897]: I0228 14:34:01.083230 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538154-flq5h"] Feb 28 14:34:01 crc kubenswrapper[4897]: I0228 14:34:01.174973 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538154-flq5h" event={"ID":"bc826503-5c33-4f4f-90a4-dbf79bc0f893","Type":"ContainerStarted","Data":"a9a1b0910d11d2f45b2eabb1e1fa347491d642e1ce69f037616993128414935e"} Feb 28 14:34:02 crc kubenswrapper[4897]: I0228 14:34:02.187700 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538154-flq5h" event={"ID":"bc826503-5c33-4f4f-90a4-dbf79bc0f893","Type":"ContainerStarted","Data":"b2204929c5b135e1ab83393022bb15967fdcdd41e3f6f6eec84d5cadd7f0dd19"} Feb 28 14:34:02 crc kubenswrapper[4897]: I0228 14:34:02.201609 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29538154-flq5h" podStartSLOduration=1.421442901 podStartE2EDuration="2.201594884s" podCreationTimestamp="2026-02-28 14:34:00 +0000 UTC" firstStartedPulling="2026-02-28 14:34:01.08954058 +0000 UTC m=+4655.331861247" lastFinishedPulling="2026-02-28 14:34:01.869692563 +0000 UTC m=+4656.112013230" observedRunningTime="2026-02-28 14:34:02.199251388 +0000 UTC m=+4656.441572035" watchObservedRunningTime="2026-02-28 14:34:02.201594884 +0000 UTC m=+4656.443915541" Feb 28 14:34:02 crc kubenswrapper[4897]: I0228 14:34:02.456596 4897 scope.go:117] "RemoveContainer" containerID="107c1d637bec2c46fac9478528cdf86a8ecfe0eb570742ec86168e20c7b24d89" Feb 28 14:34:02 crc kubenswrapper[4897]: E0228 14:34:02.456796 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:34:03 crc kubenswrapper[4897]: I0228 14:34:03.208957 4897 generic.go:334] "Generic (PLEG): container finished" podID="bc826503-5c33-4f4f-90a4-dbf79bc0f893" containerID="b2204929c5b135e1ab83393022bb15967fdcdd41e3f6f6eec84d5cadd7f0dd19" exitCode=0 Feb 28 14:34:03 crc kubenswrapper[4897]: I0228 14:34:03.209160 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538154-flq5h" event={"ID":"bc826503-5c33-4f4f-90a4-dbf79bc0f893","Type":"ContainerDied","Data":"b2204929c5b135e1ab83393022bb15967fdcdd41e3f6f6eec84d5cadd7f0dd19"} Feb 28 14:34:04 crc kubenswrapper[4897]: I0228 14:34:04.713337 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538154-flq5h" Feb 28 14:34:04 crc kubenswrapper[4897]: I0228 14:34:04.840810 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgnrf\" (UniqueName: \"kubernetes.io/projected/bc826503-5c33-4f4f-90a4-dbf79bc0f893-kube-api-access-kgnrf\") pod \"bc826503-5c33-4f4f-90a4-dbf79bc0f893\" (UID: \"bc826503-5c33-4f4f-90a4-dbf79bc0f893\") " Feb 28 14:34:04 crc kubenswrapper[4897]: I0228 14:34:04.847488 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc826503-5c33-4f4f-90a4-dbf79bc0f893-kube-api-access-kgnrf" (OuterVolumeSpecName: "kube-api-access-kgnrf") pod "bc826503-5c33-4f4f-90a4-dbf79bc0f893" (UID: "bc826503-5c33-4f4f-90a4-dbf79bc0f893"). InnerVolumeSpecName "kube-api-access-kgnrf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:34:04 crc kubenswrapper[4897]: I0228 14:34:04.943583 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgnrf\" (UniqueName: \"kubernetes.io/projected/bc826503-5c33-4f4f-90a4-dbf79bc0f893-kube-api-access-kgnrf\") on node \"crc\" DevicePath \"\"" Feb 28 14:34:05 crc kubenswrapper[4897]: I0228 14:34:05.240605 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538154-flq5h" event={"ID":"bc826503-5c33-4f4f-90a4-dbf79bc0f893","Type":"ContainerDied","Data":"a9a1b0910d11d2f45b2eabb1e1fa347491d642e1ce69f037616993128414935e"} Feb 28 14:34:05 crc kubenswrapper[4897]: I0228 14:34:05.240655 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9a1b0910d11d2f45b2eabb1e1fa347491d642e1ce69f037616993128414935e" Feb 28 14:34:05 crc kubenswrapper[4897]: I0228 14:34:05.240720 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538154-flq5h" Feb 28 14:34:05 crc kubenswrapper[4897]: I0228 14:34:05.309480 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538148-h2mhj"] Feb 28 14:34:05 crc kubenswrapper[4897]: I0228 14:34:05.322180 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538148-h2mhj"] Feb 28 14:34:06 crc kubenswrapper[4897]: I0228 14:34:06.470682 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55bac071-ec11-4344-a337-6d8bc24bca6f" path="/var/lib/kubelet/pods/55bac071-ec11-4344-a337-6d8bc24bca6f/volumes" Feb 28 14:34:16 crc kubenswrapper[4897]: I0228 14:34:16.469881 4897 scope.go:117] "RemoveContainer" containerID="107c1d637bec2c46fac9478528cdf86a8ecfe0eb570742ec86168e20c7b24d89" Feb 28 14:34:17 crc kubenswrapper[4897]: I0228 14:34:17.377570 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerStarted","Data":"e9cdd3f2d51f367992d3757825c9f8875fdc1e548ec99ae88e80183b05259b62"} Feb 28 14:34:49 crc kubenswrapper[4897]: I0228 14:34:49.484550 4897 scope.go:117] "RemoveContainer" containerID="a099d43d09883868151eb4cf7cd871cc8856c3eca85c05425fc6ca2c72698051" Feb 28 14:35:30 crc kubenswrapper[4897]: I0228 14:35:30.054460 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bpp7f"] Feb 28 14:35:30 crc kubenswrapper[4897]: E0228 14:35:30.056814 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc826503-5c33-4f4f-90a4-dbf79bc0f893" containerName="oc" Feb 28 14:35:30 crc kubenswrapper[4897]: I0228 14:35:30.056960 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc826503-5c33-4f4f-90a4-dbf79bc0f893" containerName="oc" Feb 28 14:35:30 crc kubenswrapper[4897]: I0228 14:35:30.057357 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc826503-5c33-4f4f-90a4-dbf79bc0f893" containerName="oc" Feb 28 14:35:30 crc kubenswrapper[4897]: I0228 14:35:30.059228 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bpp7f" Feb 28 14:35:30 crc kubenswrapper[4897]: I0228 14:35:30.084430 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bpp7f"] Feb 28 14:35:30 crc kubenswrapper[4897]: I0228 14:35:30.190669 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60e234d3-1d0d-4dfa-bee0-a57efaacd170-catalog-content\") pod \"community-operators-bpp7f\" (UID: \"60e234d3-1d0d-4dfa-bee0-a57efaacd170\") " pod="openshift-marketplace/community-operators-bpp7f" Feb 28 14:35:30 crc kubenswrapper[4897]: I0228 14:35:30.190735 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60e234d3-1d0d-4dfa-bee0-a57efaacd170-utilities\") pod \"community-operators-bpp7f\" (UID: \"60e234d3-1d0d-4dfa-bee0-a57efaacd170\") " pod="openshift-marketplace/community-operators-bpp7f" Feb 28 14:35:30 crc kubenswrapper[4897]: I0228 14:35:30.190894 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwn7w\" (UniqueName: \"kubernetes.io/projected/60e234d3-1d0d-4dfa-bee0-a57efaacd170-kube-api-access-gwn7w\") pod \"community-operators-bpp7f\" (UID: \"60e234d3-1d0d-4dfa-bee0-a57efaacd170\") " pod="openshift-marketplace/community-operators-bpp7f" Feb 28 14:35:30 crc kubenswrapper[4897]: I0228 14:35:30.292558 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60e234d3-1d0d-4dfa-bee0-a57efaacd170-catalog-content\") pod \"community-operators-bpp7f\" (UID: \"60e234d3-1d0d-4dfa-bee0-a57efaacd170\") " pod="openshift-marketplace/community-operators-bpp7f" Feb 28 14:35:30 crc kubenswrapper[4897]: I0228 14:35:30.292626 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60e234d3-1d0d-4dfa-bee0-a57efaacd170-utilities\") pod \"community-operators-bpp7f\" (UID: \"60e234d3-1d0d-4dfa-bee0-a57efaacd170\") " pod="openshift-marketplace/community-operators-bpp7f" Feb 28 14:35:30 crc kubenswrapper[4897]: I0228 14:35:30.292693 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwn7w\" (UniqueName: \"kubernetes.io/projected/60e234d3-1d0d-4dfa-bee0-a57efaacd170-kube-api-access-gwn7w\") pod \"community-operators-bpp7f\" (UID: \"60e234d3-1d0d-4dfa-bee0-a57efaacd170\") " pod="openshift-marketplace/community-operators-bpp7f" Feb 28 14:35:30 crc kubenswrapper[4897]: I0228 14:35:30.293105 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60e234d3-1d0d-4dfa-bee0-a57efaacd170-catalog-content\") pod \"community-operators-bpp7f\" (UID: \"60e234d3-1d0d-4dfa-bee0-a57efaacd170\") " pod="openshift-marketplace/community-operators-bpp7f" Feb 28 14:35:30 crc kubenswrapper[4897]: I0228 14:35:30.293351 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60e234d3-1d0d-4dfa-bee0-a57efaacd170-utilities\") pod \"community-operators-bpp7f\" (UID: \"60e234d3-1d0d-4dfa-bee0-a57efaacd170\") " pod="openshift-marketplace/community-operators-bpp7f" Feb 28 14:35:30 crc kubenswrapper[4897]: I0228 14:35:30.316839 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwn7w\" (UniqueName: \"kubernetes.io/projected/60e234d3-1d0d-4dfa-bee0-a57efaacd170-kube-api-access-gwn7w\") pod \"community-operators-bpp7f\" (UID: \"60e234d3-1d0d-4dfa-bee0-a57efaacd170\") " pod="openshift-marketplace/community-operators-bpp7f" Feb 28 14:35:30 crc kubenswrapper[4897]: I0228 14:35:30.386836 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bpp7f" Feb 28 14:35:31 crc kubenswrapper[4897]: I0228 14:35:30.964664 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bpp7f"] Feb 28 14:35:31 crc kubenswrapper[4897]: I0228 14:35:31.434195 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpp7f" event={"ID":"60e234d3-1d0d-4dfa-bee0-a57efaacd170","Type":"ContainerStarted","Data":"49591e025443044144626079672cc6b99e9836ee28012cd11a8244bd76102d9a"} Feb 28 14:35:32 crc kubenswrapper[4897]: I0228 14:35:32.448120 4897 generic.go:334] "Generic (PLEG): container finished" podID="60e234d3-1d0d-4dfa-bee0-a57efaacd170" containerID="131d6de586cd9c016712f918ec8927cffa0bbd8bb0d8ae0087870d44c2858df8" exitCode=0 Feb 28 14:35:32 crc kubenswrapper[4897]: I0228 14:35:32.448170 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpp7f" event={"ID":"60e234d3-1d0d-4dfa-bee0-a57efaacd170","Type":"ContainerDied","Data":"131d6de586cd9c016712f918ec8927cffa0bbd8bb0d8ae0087870d44c2858df8"} Feb 28 14:35:34 crc kubenswrapper[4897]: I0228 14:35:34.480626 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpp7f" event={"ID":"60e234d3-1d0d-4dfa-bee0-a57efaacd170","Type":"ContainerStarted","Data":"6d7628643bf5782c5560a7e0c425d27c01d0942995a127e1a04fb61bf72a7ac8"} Feb 28 14:35:35 crc kubenswrapper[4897]: I0228 14:35:35.502288 4897 generic.go:334] "Generic (PLEG): container finished" podID="60e234d3-1d0d-4dfa-bee0-a57efaacd170" containerID="6d7628643bf5782c5560a7e0c425d27c01d0942995a127e1a04fb61bf72a7ac8" exitCode=0 Feb 28 14:35:35 crc kubenswrapper[4897]: I0228 14:35:35.502397 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpp7f" 
event={"ID":"60e234d3-1d0d-4dfa-bee0-a57efaacd170","Type":"ContainerDied","Data":"6d7628643bf5782c5560a7e0c425d27c01d0942995a127e1a04fb61bf72a7ac8"} Feb 28 14:35:36 crc kubenswrapper[4897]: I0228 14:35:36.522581 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpp7f" event={"ID":"60e234d3-1d0d-4dfa-bee0-a57efaacd170","Type":"ContainerStarted","Data":"c072ab9ae4d40072c2a0cd84c31d7f172fc0901d21219a5ae6abda7e4aed4e64"} Feb 28 14:35:36 crc kubenswrapper[4897]: I0228 14:35:36.544948 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bpp7f" podStartSLOduration=3.018949973 podStartE2EDuration="6.544928221s" podCreationTimestamp="2026-02-28 14:35:30 +0000 UTC" firstStartedPulling="2026-02-28 14:35:32.450881601 +0000 UTC m=+4746.693202288" lastFinishedPulling="2026-02-28 14:35:35.976859869 +0000 UTC m=+4750.219180536" observedRunningTime="2026-02-28 14:35:36.540820676 +0000 UTC m=+4750.783141353" watchObservedRunningTime="2026-02-28 14:35:36.544928221 +0000 UTC m=+4750.787248878" Feb 28 14:35:40 crc kubenswrapper[4897]: I0228 14:35:40.387254 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bpp7f" Feb 28 14:35:40 crc kubenswrapper[4897]: I0228 14:35:40.388457 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bpp7f" Feb 28 14:35:40 crc kubenswrapper[4897]: I0228 14:35:40.470188 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bpp7f" Feb 28 14:35:50 crc kubenswrapper[4897]: I0228 14:35:50.484478 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bpp7f" Feb 28 14:35:50 crc kubenswrapper[4897]: I0228 14:35:50.561533 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-bpp7f"] Feb 28 14:35:50 crc kubenswrapper[4897]: I0228 14:35:50.720454 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bpp7f" podUID="60e234d3-1d0d-4dfa-bee0-a57efaacd170" containerName="registry-server" containerID="cri-o://c072ab9ae4d40072c2a0cd84c31d7f172fc0901d21219a5ae6abda7e4aed4e64" gracePeriod=2 Feb 28 14:35:51 crc kubenswrapper[4897]: I0228 14:35:51.238885 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bpp7f" Feb 28 14:35:51 crc kubenswrapper[4897]: I0228 14:35:51.314108 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60e234d3-1d0d-4dfa-bee0-a57efaacd170-catalog-content\") pod \"60e234d3-1d0d-4dfa-bee0-a57efaacd170\" (UID: \"60e234d3-1d0d-4dfa-bee0-a57efaacd170\") " Feb 28 14:35:51 crc kubenswrapper[4897]: I0228 14:35:51.314300 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwn7w\" (UniqueName: \"kubernetes.io/projected/60e234d3-1d0d-4dfa-bee0-a57efaacd170-kube-api-access-gwn7w\") pod \"60e234d3-1d0d-4dfa-bee0-a57efaacd170\" (UID: \"60e234d3-1d0d-4dfa-bee0-a57efaacd170\") " Feb 28 14:35:51 crc kubenswrapper[4897]: I0228 14:35:51.314593 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60e234d3-1d0d-4dfa-bee0-a57efaacd170-utilities\") pod \"60e234d3-1d0d-4dfa-bee0-a57efaacd170\" (UID: \"60e234d3-1d0d-4dfa-bee0-a57efaacd170\") " Feb 28 14:35:51 crc kubenswrapper[4897]: I0228 14:35:51.315771 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60e234d3-1d0d-4dfa-bee0-a57efaacd170-utilities" (OuterVolumeSpecName: "utilities") pod "60e234d3-1d0d-4dfa-bee0-a57efaacd170" (UID: 
"60e234d3-1d0d-4dfa-bee0-a57efaacd170"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:35:51 crc kubenswrapper[4897]: I0228 14:35:51.329869 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60e234d3-1d0d-4dfa-bee0-a57efaacd170-kube-api-access-gwn7w" (OuterVolumeSpecName: "kube-api-access-gwn7w") pod "60e234d3-1d0d-4dfa-bee0-a57efaacd170" (UID: "60e234d3-1d0d-4dfa-bee0-a57efaacd170"). InnerVolumeSpecName "kube-api-access-gwn7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:35:51 crc kubenswrapper[4897]: I0228 14:35:51.367358 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60e234d3-1d0d-4dfa-bee0-a57efaacd170-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "60e234d3-1d0d-4dfa-bee0-a57efaacd170" (UID: "60e234d3-1d0d-4dfa-bee0-a57efaacd170"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:35:51 crc kubenswrapper[4897]: I0228 14:35:51.418843 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwn7w\" (UniqueName: \"kubernetes.io/projected/60e234d3-1d0d-4dfa-bee0-a57efaacd170-kube-api-access-gwn7w\") on node \"crc\" DevicePath \"\"" Feb 28 14:35:51 crc kubenswrapper[4897]: I0228 14:35:51.418909 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60e234d3-1d0d-4dfa-bee0-a57efaacd170-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 14:35:51 crc kubenswrapper[4897]: I0228 14:35:51.418941 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60e234d3-1d0d-4dfa-bee0-a57efaacd170-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 14:35:51 crc kubenswrapper[4897]: I0228 14:35:51.731034 4897 generic.go:334] "Generic (PLEG): container finished" 
podID="60e234d3-1d0d-4dfa-bee0-a57efaacd170" containerID="c072ab9ae4d40072c2a0cd84c31d7f172fc0901d21219a5ae6abda7e4aed4e64" exitCode=0 Feb 28 14:35:51 crc kubenswrapper[4897]: I0228 14:35:51.731126 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bpp7f" Feb 28 14:35:51 crc kubenswrapper[4897]: I0228 14:35:51.731149 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpp7f" event={"ID":"60e234d3-1d0d-4dfa-bee0-a57efaacd170","Type":"ContainerDied","Data":"c072ab9ae4d40072c2a0cd84c31d7f172fc0901d21219a5ae6abda7e4aed4e64"} Feb 28 14:35:51 crc kubenswrapper[4897]: I0228 14:35:51.732276 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpp7f" event={"ID":"60e234d3-1d0d-4dfa-bee0-a57efaacd170","Type":"ContainerDied","Data":"49591e025443044144626079672cc6b99e9836ee28012cd11a8244bd76102d9a"} Feb 28 14:35:51 crc kubenswrapper[4897]: I0228 14:35:51.732300 4897 scope.go:117] "RemoveContainer" containerID="c072ab9ae4d40072c2a0cd84c31d7f172fc0901d21219a5ae6abda7e4aed4e64" Feb 28 14:35:51 crc kubenswrapper[4897]: I0228 14:35:51.769177 4897 scope.go:117] "RemoveContainer" containerID="6d7628643bf5782c5560a7e0c425d27c01d0942995a127e1a04fb61bf72a7ac8" Feb 28 14:35:51 crc kubenswrapper[4897]: I0228 14:35:51.773198 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bpp7f"] Feb 28 14:35:51 crc kubenswrapper[4897]: I0228 14:35:51.781635 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bpp7f"] Feb 28 14:35:51 crc kubenswrapper[4897]: I0228 14:35:51.791205 4897 scope.go:117] "RemoveContainer" containerID="131d6de586cd9c016712f918ec8927cffa0bbd8bb0d8ae0087870d44c2858df8" Feb 28 14:35:51 crc kubenswrapper[4897]: I0228 14:35:51.840049 4897 scope.go:117] "RemoveContainer" 
containerID="c072ab9ae4d40072c2a0cd84c31d7f172fc0901d21219a5ae6abda7e4aed4e64" Feb 28 14:35:51 crc kubenswrapper[4897]: E0228 14:35:51.840565 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c072ab9ae4d40072c2a0cd84c31d7f172fc0901d21219a5ae6abda7e4aed4e64\": container with ID starting with c072ab9ae4d40072c2a0cd84c31d7f172fc0901d21219a5ae6abda7e4aed4e64 not found: ID does not exist" containerID="c072ab9ae4d40072c2a0cd84c31d7f172fc0901d21219a5ae6abda7e4aed4e64" Feb 28 14:35:51 crc kubenswrapper[4897]: I0228 14:35:51.840617 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c072ab9ae4d40072c2a0cd84c31d7f172fc0901d21219a5ae6abda7e4aed4e64"} err="failed to get container status \"c072ab9ae4d40072c2a0cd84c31d7f172fc0901d21219a5ae6abda7e4aed4e64\": rpc error: code = NotFound desc = could not find container \"c072ab9ae4d40072c2a0cd84c31d7f172fc0901d21219a5ae6abda7e4aed4e64\": container with ID starting with c072ab9ae4d40072c2a0cd84c31d7f172fc0901d21219a5ae6abda7e4aed4e64 not found: ID does not exist" Feb 28 14:35:51 crc kubenswrapper[4897]: I0228 14:35:51.840651 4897 scope.go:117] "RemoveContainer" containerID="6d7628643bf5782c5560a7e0c425d27c01d0942995a127e1a04fb61bf72a7ac8" Feb 28 14:35:51 crc kubenswrapper[4897]: E0228 14:35:51.841084 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d7628643bf5782c5560a7e0c425d27c01d0942995a127e1a04fb61bf72a7ac8\": container with ID starting with 6d7628643bf5782c5560a7e0c425d27c01d0942995a127e1a04fb61bf72a7ac8 not found: ID does not exist" containerID="6d7628643bf5782c5560a7e0c425d27c01d0942995a127e1a04fb61bf72a7ac8" Feb 28 14:35:51 crc kubenswrapper[4897]: I0228 14:35:51.841125 4897 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6d7628643bf5782c5560a7e0c425d27c01d0942995a127e1a04fb61bf72a7ac8"} err="failed to get container status \"6d7628643bf5782c5560a7e0c425d27c01d0942995a127e1a04fb61bf72a7ac8\": rpc error: code = NotFound desc = could not find container \"6d7628643bf5782c5560a7e0c425d27c01d0942995a127e1a04fb61bf72a7ac8\": container with ID starting with 6d7628643bf5782c5560a7e0c425d27c01d0942995a127e1a04fb61bf72a7ac8 not found: ID does not exist" Feb 28 14:35:51 crc kubenswrapper[4897]: I0228 14:35:51.841152 4897 scope.go:117] "RemoveContainer" containerID="131d6de586cd9c016712f918ec8927cffa0bbd8bb0d8ae0087870d44c2858df8" Feb 28 14:35:51 crc kubenswrapper[4897]: E0228 14:35:51.841488 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"131d6de586cd9c016712f918ec8927cffa0bbd8bb0d8ae0087870d44c2858df8\": container with ID starting with 131d6de586cd9c016712f918ec8927cffa0bbd8bb0d8ae0087870d44c2858df8 not found: ID does not exist" containerID="131d6de586cd9c016712f918ec8927cffa0bbd8bb0d8ae0087870d44c2858df8" Feb 28 14:35:51 crc kubenswrapper[4897]: I0228 14:35:51.841523 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"131d6de586cd9c016712f918ec8927cffa0bbd8bb0d8ae0087870d44c2858df8"} err="failed to get container status \"131d6de586cd9c016712f918ec8927cffa0bbd8bb0d8ae0087870d44c2858df8\": rpc error: code = NotFound desc = could not find container \"131d6de586cd9c016712f918ec8927cffa0bbd8bb0d8ae0087870d44c2858df8\": container with ID starting with 131d6de586cd9c016712f918ec8927cffa0bbd8bb0d8ae0087870d44c2858df8 not found: ID does not exist" Feb 28 14:35:51 crc kubenswrapper[4897]: E0228 14:35:51.986985 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60e234d3_1d0d_4dfa_bee0_a57efaacd170.slice\": RecentStats: 
unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60e234d3_1d0d_4dfa_bee0_a57efaacd170.slice/crio-49591e025443044144626079672cc6b99e9836ee28012cd11a8244bd76102d9a\": RecentStats: unable to find data in memory cache]" Feb 28 14:35:52 crc kubenswrapper[4897]: I0228 14:35:52.492304 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60e234d3-1d0d-4dfa-bee0-a57efaacd170" path="/var/lib/kubelet/pods/60e234d3-1d0d-4dfa-bee0-a57efaacd170/volumes" Feb 28 14:36:00 crc kubenswrapper[4897]: I0228 14:36:00.171453 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538156-kplhd"] Feb 28 14:36:00 crc kubenswrapper[4897]: E0228 14:36:00.172720 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60e234d3-1d0d-4dfa-bee0-a57efaacd170" containerName="extract-content" Feb 28 14:36:00 crc kubenswrapper[4897]: I0228 14:36:00.172744 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="60e234d3-1d0d-4dfa-bee0-a57efaacd170" containerName="extract-content" Feb 28 14:36:00 crc kubenswrapper[4897]: E0228 14:36:00.172788 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60e234d3-1d0d-4dfa-bee0-a57efaacd170" containerName="extract-utilities" Feb 28 14:36:00 crc kubenswrapper[4897]: I0228 14:36:00.172801 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="60e234d3-1d0d-4dfa-bee0-a57efaacd170" containerName="extract-utilities" Feb 28 14:36:00 crc kubenswrapper[4897]: E0228 14:36:00.172819 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60e234d3-1d0d-4dfa-bee0-a57efaacd170" containerName="registry-server" Feb 28 14:36:00 crc kubenswrapper[4897]: I0228 14:36:00.172831 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="60e234d3-1d0d-4dfa-bee0-a57efaacd170" containerName="registry-server" Feb 28 14:36:00 crc kubenswrapper[4897]: I0228 14:36:00.173208 4897 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="60e234d3-1d0d-4dfa-bee0-a57efaacd170" containerName="registry-server" Feb 28 14:36:00 crc kubenswrapper[4897]: I0228 14:36:00.174384 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538156-kplhd" Feb 28 14:36:00 crc kubenswrapper[4897]: I0228 14:36:00.177479 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 14:36:00 crc kubenswrapper[4897]: I0228 14:36:00.178142 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 14:36:00 crc kubenswrapper[4897]: I0228 14:36:00.182452 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538156-kplhd"] Feb 28 14:36:00 crc kubenswrapper[4897]: I0228 14:36:00.184775 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 14:36:00 crc kubenswrapper[4897]: I0228 14:36:00.231488 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmgq6\" (UniqueName: \"kubernetes.io/projected/1b89291e-dab3-42b0-b9b9-8e0d24258cc7-kube-api-access-zmgq6\") pod \"auto-csr-approver-29538156-kplhd\" (UID: \"1b89291e-dab3-42b0-b9b9-8e0d24258cc7\") " pod="openshift-infra/auto-csr-approver-29538156-kplhd" Feb 28 14:36:00 crc kubenswrapper[4897]: I0228 14:36:00.333959 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmgq6\" (UniqueName: \"kubernetes.io/projected/1b89291e-dab3-42b0-b9b9-8e0d24258cc7-kube-api-access-zmgq6\") pod \"auto-csr-approver-29538156-kplhd\" (UID: \"1b89291e-dab3-42b0-b9b9-8e0d24258cc7\") " pod="openshift-infra/auto-csr-approver-29538156-kplhd" Feb 28 14:36:00 crc kubenswrapper[4897]: I0228 14:36:00.390376 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zmgq6\" (UniqueName: \"kubernetes.io/projected/1b89291e-dab3-42b0-b9b9-8e0d24258cc7-kube-api-access-zmgq6\") pod \"auto-csr-approver-29538156-kplhd\" (UID: \"1b89291e-dab3-42b0-b9b9-8e0d24258cc7\") " pod="openshift-infra/auto-csr-approver-29538156-kplhd" Feb 28 14:36:00 crc kubenswrapper[4897]: I0228 14:36:00.498164 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538156-kplhd" Feb 28 14:36:01 crc kubenswrapper[4897]: I0228 14:36:00.999986 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538156-kplhd"] Feb 28 14:36:01 crc kubenswrapper[4897]: I0228 14:36:01.866558 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538156-kplhd" event={"ID":"1b89291e-dab3-42b0-b9b9-8e0d24258cc7","Type":"ContainerStarted","Data":"3fde5f695d9b60fd7015ba58169f3c1690cfa5d2eb84c1b01fe6f1c649fd059e"} Feb 28 14:36:03 crc kubenswrapper[4897]: I0228 14:36:03.889012 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538156-kplhd" event={"ID":"1b89291e-dab3-42b0-b9b9-8e0d24258cc7","Type":"ContainerDied","Data":"0ded59d84e5feafbc5ca229323d68badac9ee17c936280aca46dc94701eeaa08"} Feb 28 14:36:03 crc kubenswrapper[4897]: I0228 14:36:03.888943 4897 generic.go:334] "Generic (PLEG): container finished" podID="1b89291e-dab3-42b0-b9b9-8e0d24258cc7" containerID="0ded59d84e5feafbc5ca229323d68badac9ee17c936280aca46dc94701eeaa08" exitCode=0 Feb 28 14:36:05 crc kubenswrapper[4897]: I0228 14:36:05.379452 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538156-kplhd" Feb 28 14:36:05 crc kubenswrapper[4897]: I0228 14:36:05.586641 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmgq6\" (UniqueName: \"kubernetes.io/projected/1b89291e-dab3-42b0-b9b9-8e0d24258cc7-kube-api-access-zmgq6\") pod \"1b89291e-dab3-42b0-b9b9-8e0d24258cc7\" (UID: \"1b89291e-dab3-42b0-b9b9-8e0d24258cc7\") " Feb 28 14:36:05 crc kubenswrapper[4897]: I0228 14:36:05.600538 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b89291e-dab3-42b0-b9b9-8e0d24258cc7-kube-api-access-zmgq6" (OuterVolumeSpecName: "kube-api-access-zmgq6") pod "1b89291e-dab3-42b0-b9b9-8e0d24258cc7" (UID: "1b89291e-dab3-42b0-b9b9-8e0d24258cc7"). InnerVolumeSpecName "kube-api-access-zmgq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:36:05 crc kubenswrapper[4897]: I0228 14:36:05.697333 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmgq6\" (UniqueName: \"kubernetes.io/projected/1b89291e-dab3-42b0-b9b9-8e0d24258cc7-kube-api-access-zmgq6\") on node \"crc\" DevicePath \"\"" Feb 28 14:36:05 crc kubenswrapper[4897]: I0228 14:36:05.917657 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538156-kplhd" event={"ID":"1b89291e-dab3-42b0-b9b9-8e0d24258cc7","Type":"ContainerDied","Data":"3fde5f695d9b60fd7015ba58169f3c1690cfa5d2eb84c1b01fe6f1c649fd059e"} Feb 28 14:36:05 crc kubenswrapper[4897]: I0228 14:36:05.917723 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fde5f695d9b60fd7015ba58169f3c1690cfa5d2eb84c1b01fe6f1c649fd059e" Feb 28 14:36:05 crc kubenswrapper[4897]: I0228 14:36:05.917832 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538156-kplhd" Feb 28 14:36:06 crc kubenswrapper[4897]: I0228 14:36:06.490482 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538150-9rxzk"] Feb 28 14:36:06 crc kubenswrapper[4897]: I0228 14:36:06.507019 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538150-9rxzk"] Feb 28 14:36:08 crc kubenswrapper[4897]: I0228 14:36:08.482230 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b5b0de5-b621-4aac-906d-ee2456262d86" path="/var/lib/kubelet/pods/2b5b0de5-b621-4aac-906d-ee2456262d86/volumes" Feb 28 14:36:13 crc kubenswrapper[4897]: E0228 14:36:13.436827 4897 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.164:50574->38.102.83.164:37321: write tcp 38.102.83.164:50574->38.102.83.164:37321: write: broken pipe Feb 28 14:36:33 crc kubenswrapper[4897]: I0228 14:36:33.371136 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 14:36:33 crc kubenswrapper[4897]: I0228 14:36:33.371697 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 14:37:03 crc kubenswrapper[4897]: I0228 14:37:03.371441 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Feb 28 14:37:03 crc kubenswrapper[4897]: I0228 14:37:03.372072 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 14:37:33 crc kubenswrapper[4897]: I0228 14:37:33.371450 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 14:37:33 crc kubenswrapper[4897]: I0228 14:37:33.371974 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 14:37:33 crc kubenswrapper[4897]: I0228 14:37:33.372019 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brq22" Feb 28 14:37:33 crc kubenswrapper[4897]: I0228 14:37:33.372879 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e9cdd3f2d51f367992d3757825c9f8875fdc1e548ec99ae88e80183b05259b62"} pod="openshift-machine-config-operator/machine-config-daemon-brq22" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 14:37:33 crc kubenswrapper[4897]: I0228 14:37:33.372933 4897 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" containerID="cri-o://e9cdd3f2d51f367992d3757825c9f8875fdc1e548ec99ae88e80183b05259b62" gracePeriod=600 Feb 28 14:37:34 crc kubenswrapper[4897]: I0228 14:37:34.063158 4897 generic.go:334] "Generic (PLEG): container finished" podID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerID="e9cdd3f2d51f367992d3757825c9f8875fdc1e548ec99ae88e80183b05259b62" exitCode=0 Feb 28 14:37:34 crc kubenswrapper[4897]: I0228 14:37:34.063236 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerDied","Data":"e9cdd3f2d51f367992d3757825c9f8875fdc1e548ec99ae88e80183b05259b62"} Feb 28 14:37:34 crc kubenswrapper[4897]: I0228 14:37:34.063931 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerStarted","Data":"3eec40c4cd69e7bd4645f3609c97b34d827169ce5d38c73cc72ba33c8af50e99"} Feb 28 14:37:34 crc kubenswrapper[4897]: I0228 14:37:34.063963 4897 scope.go:117] "RemoveContainer" containerID="107c1d637bec2c46fac9478528cdf86a8ecfe0eb570742ec86168e20c7b24d89" Feb 28 14:37:49 crc kubenswrapper[4897]: I0228 14:37:49.706950 4897 scope.go:117] "RemoveContainer" containerID="1032d96ba745f0245ac1167d1d65bf9cbafc3a2bd0be6172209f05591c09b5b5" Feb 28 14:38:00 crc kubenswrapper[4897]: I0228 14:38:00.185029 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538158-vhhjs"] Feb 28 14:38:00 crc kubenswrapper[4897]: E0228 14:38:00.187241 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b89291e-dab3-42b0-b9b9-8e0d24258cc7" containerName="oc" Feb 28 14:38:00 crc kubenswrapper[4897]: I0228 14:38:00.187262 4897 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="1b89291e-dab3-42b0-b9b9-8e0d24258cc7" containerName="oc" Feb 28 14:38:00 crc kubenswrapper[4897]: I0228 14:38:00.187543 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b89291e-dab3-42b0-b9b9-8e0d24258cc7" containerName="oc" Feb 28 14:38:00 crc kubenswrapper[4897]: I0228 14:38:00.188650 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538158-vhhjs" Feb 28 14:38:00 crc kubenswrapper[4897]: I0228 14:38:00.193005 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 14:38:00 crc kubenswrapper[4897]: I0228 14:38:00.193277 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 14:38:00 crc kubenswrapper[4897]: I0228 14:38:00.193392 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 14:38:00 crc kubenswrapper[4897]: I0228 14:38:00.204972 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538158-vhhjs"] Feb 28 14:38:00 crc kubenswrapper[4897]: I0228 14:38:00.293217 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgn7c\" (UniqueName: \"kubernetes.io/projected/d2b716ea-4f21-47ff-a1c8-7b7796197ed6-kube-api-access-wgn7c\") pod \"auto-csr-approver-29538158-vhhjs\" (UID: \"d2b716ea-4f21-47ff-a1c8-7b7796197ed6\") " pod="openshift-infra/auto-csr-approver-29538158-vhhjs" Feb 28 14:38:00 crc kubenswrapper[4897]: I0228 14:38:00.398810 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgn7c\" (UniqueName: \"kubernetes.io/projected/d2b716ea-4f21-47ff-a1c8-7b7796197ed6-kube-api-access-wgn7c\") pod \"auto-csr-approver-29538158-vhhjs\" (UID: \"d2b716ea-4f21-47ff-a1c8-7b7796197ed6\") " 
pod="openshift-infra/auto-csr-approver-29538158-vhhjs" Feb 28 14:38:00 crc kubenswrapper[4897]: I0228 14:38:00.439841 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgn7c\" (UniqueName: \"kubernetes.io/projected/d2b716ea-4f21-47ff-a1c8-7b7796197ed6-kube-api-access-wgn7c\") pod \"auto-csr-approver-29538158-vhhjs\" (UID: \"d2b716ea-4f21-47ff-a1c8-7b7796197ed6\") " pod="openshift-infra/auto-csr-approver-29538158-vhhjs" Feb 28 14:38:00 crc kubenswrapper[4897]: I0228 14:38:00.524536 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538158-vhhjs" Feb 28 14:38:01 crc kubenswrapper[4897]: I0228 14:38:01.056436 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538158-vhhjs"] Feb 28 14:38:01 crc kubenswrapper[4897]: I0228 14:38:01.414362 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538158-vhhjs" event={"ID":"d2b716ea-4f21-47ff-a1c8-7b7796197ed6","Type":"ContainerStarted","Data":"991a6b946a386cc2b806a6af92d5c71a200af986c5ffd513e78303f4256c0b47"} Feb 28 14:38:02 crc kubenswrapper[4897]: I0228 14:38:02.430275 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538158-vhhjs" event={"ID":"d2b716ea-4f21-47ff-a1c8-7b7796197ed6","Type":"ContainerStarted","Data":"058d9a3c0f755a325b34752328bda2512c8a9c83d89bf8140a075599f0df06f8"} Feb 28 14:38:02 crc kubenswrapper[4897]: I0228 14:38:02.461263 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29538158-vhhjs" podStartSLOduration=1.564789954 podStartE2EDuration="2.461233068s" podCreationTimestamp="2026-02-28 14:38:00 +0000 UTC" firstStartedPulling="2026-02-28 14:38:01.064591426 +0000 UTC m=+4895.306912093" lastFinishedPulling="2026-02-28 14:38:01.96103452 +0000 UTC m=+4896.203355207" observedRunningTime="2026-02-28 
14:38:02.450711591 +0000 UTC m=+4896.693032248" watchObservedRunningTime="2026-02-28 14:38:02.461233068 +0000 UTC m=+4896.703553765" Feb 28 14:38:03 crc kubenswrapper[4897]: I0228 14:38:03.444339 4897 generic.go:334] "Generic (PLEG): container finished" podID="d2b716ea-4f21-47ff-a1c8-7b7796197ed6" containerID="058d9a3c0f755a325b34752328bda2512c8a9c83d89bf8140a075599f0df06f8" exitCode=0 Feb 28 14:38:03 crc kubenswrapper[4897]: I0228 14:38:03.444388 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538158-vhhjs" event={"ID":"d2b716ea-4f21-47ff-a1c8-7b7796197ed6","Type":"ContainerDied","Data":"058d9a3c0f755a325b34752328bda2512c8a9c83d89bf8140a075599f0df06f8"} Feb 28 14:38:05 crc kubenswrapper[4897]: I0228 14:38:05.309769 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538158-vhhjs" Feb 28 14:38:05 crc kubenswrapper[4897]: I0228 14:38:05.418475 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgn7c\" (UniqueName: \"kubernetes.io/projected/d2b716ea-4f21-47ff-a1c8-7b7796197ed6-kube-api-access-wgn7c\") pod \"d2b716ea-4f21-47ff-a1c8-7b7796197ed6\" (UID: \"d2b716ea-4f21-47ff-a1c8-7b7796197ed6\") " Feb 28 14:38:05 crc kubenswrapper[4897]: I0228 14:38:05.429096 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2b716ea-4f21-47ff-a1c8-7b7796197ed6-kube-api-access-wgn7c" (OuterVolumeSpecName: "kube-api-access-wgn7c") pod "d2b716ea-4f21-47ff-a1c8-7b7796197ed6" (UID: "d2b716ea-4f21-47ff-a1c8-7b7796197ed6"). InnerVolumeSpecName "kube-api-access-wgn7c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:38:05 crc kubenswrapper[4897]: I0228 14:38:05.470638 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538158-vhhjs" event={"ID":"d2b716ea-4f21-47ff-a1c8-7b7796197ed6","Type":"ContainerDied","Data":"991a6b946a386cc2b806a6af92d5c71a200af986c5ffd513e78303f4256c0b47"} Feb 28 14:38:05 crc kubenswrapper[4897]: I0228 14:38:05.470811 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="991a6b946a386cc2b806a6af92d5c71a200af986c5ffd513e78303f4256c0b47" Feb 28 14:38:05 crc kubenswrapper[4897]: I0228 14:38:05.470719 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538158-vhhjs" Feb 28 14:38:05 crc kubenswrapper[4897]: I0228 14:38:05.523476 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgn7c\" (UniqueName: \"kubernetes.io/projected/d2b716ea-4f21-47ff-a1c8-7b7796197ed6-kube-api-access-wgn7c\") on node \"crc\" DevicePath \"\"" Feb 28 14:38:05 crc kubenswrapper[4897]: I0228 14:38:05.533376 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538152-nhj6s"] Feb 28 14:38:05 crc kubenswrapper[4897]: I0228 14:38:05.544767 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538152-nhj6s"] Feb 28 14:38:06 crc kubenswrapper[4897]: I0228 14:38:06.481049 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90dbd68b-a86b-4d5d-8abf-f2a8de88cde8" path="/var/lib/kubelet/pods/90dbd68b-a86b-4d5d-8abf-f2a8de88cde8/volumes" Feb 28 14:38:17 crc kubenswrapper[4897]: I0228 14:38:17.944717 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pmqqd"] Feb 28 14:38:17 crc kubenswrapper[4897]: E0228 14:38:17.945865 4897 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d2b716ea-4f21-47ff-a1c8-7b7796197ed6" containerName="oc" Feb 28 14:38:17 crc kubenswrapper[4897]: I0228 14:38:17.945886 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2b716ea-4f21-47ff-a1c8-7b7796197ed6" containerName="oc" Feb 28 14:38:17 crc kubenswrapper[4897]: I0228 14:38:17.946202 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2b716ea-4f21-47ff-a1c8-7b7796197ed6" containerName="oc" Feb 28 14:38:17 crc kubenswrapper[4897]: I0228 14:38:17.948142 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pmqqd" Feb 28 14:38:17 crc kubenswrapper[4897]: I0228 14:38:17.960147 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pmqqd"] Feb 28 14:38:18 crc kubenswrapper[4897]: I0228 14:38:18.043187 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9c1cce8-8d21-41fa-b74c-2fd66d847893-catalog-content\") pod \"certified-operators-pmqqd\" (UID: \"c9c1cce8-8d21-41fa-b74c-2fd66d847893\") " pod="openshift-marketplace/certified-operators-pmqqd" Feb 28 14:38:18 crc kubenswrapper[4897]: I0228 14:38:18.043340 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9c1cce8-8d21-41fa-b74c-2fd66d847893-utilities\") pod \"certified-operators-pmqqd\" (UID: \"c9c1cce8-8d21-41fa-b74c-2fd66d847893\") " pod="openshift-marketplace/certified-operators-pmqqd" Feb 28 14:38:18 crc kubenswrapper[4897]: I0228 14:38:18.043403 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8drp\" (UniqueName: \"kubernetes.io/projected/c9c1cce8-8d21-41fa-b74c-2fd66d847893-kube-api-access-f8drp\") pod \"certified-operators-pmqqd\" (UID: \"c9c1cce8-8d21-41fa-b74c-2fd66d847893\") " 
pod="openshift-marketplace/certified-operators-pmqqd" Feb 28 14:38:18 crc kubenswrapper[4897]: I0228 14:38:18.145259 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9c1cce8-8d21-41fa-b74c-2fd66d847893-catalog-content\") pod \"certified-operators-pmqqd\" (UID: \"c9c1cce8-8d21-41fa-b74c-2fd66d847893\") " pod="openshift-marketplace/certified-operators-pmqqd" Feb 28 14:38:18 crc kubenswrapper[4897]: I0228 14:38:18.145371 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9c1cce8-8d21-41fa-b74c-2fd66d847893-utilities\") pod \"certified-operators-pmqqd\" (UID: \"c9c1cce8-8d21-41fa-b74c-2fd66d847893\") " pod="openshift-marketplace/certified-operators-pmqqd" Feb 28 14:38:18 crc kubenswrapper[4897]: I0228 14:38:18.145417 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8drp\" (UniqueName: \"kubernetes.io/projected/c9c1cce8-8d21-41fa-b74c-2fd66d847893-kube-api-access-f8drp\") pod \"certified-operators-pmqqd\" (UID: \"c9c1cce8-8d21-41fa-b74c-2fd66d847893\") " pod="openshift-marketplace/certified-operators-pmqqd" Feb 28 14:38:18 crc kubenswrapper[4897]: I0228 14:38:18.145972 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9c1cce8-8d21-41fa-b74c-2fd66d847893-utilities\") pod \"certified-operators-pmqqd\" (UID: \"c9c1cce8-8d21-41fa-b74c-2fd66d847893\") " pod="openshift-marketplace/certified-operators-pmqqd" Feb 28 14:38:18 crc kubenswrapper[4897]: I0228 14:38:18.146171 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9c1cce8-8d21-41fa-b74c-2fd66d847893-catalog-content\") pod \"certified-operators-pmqqd\" (UID: \"c9c1cce8-8d21-41fa-b74c-2fd66d847893\") " 
pod="openshift-marketplace/certified-operators-pmqqd" Feb 28 14:38:18 crc kubenswrapper[4897]: I0228 14:38:18.169159 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8drp\" (UniqueName: \"kubernetes.io/projected/c9c1cce8-8d21-41fa-b74c-2fd66d847893-kube-api-access-f8drp\") pod \"certified-operators-pmqqd\" (UID: \"c9c1cce8-8d21-41fa-b74c-2fd66d847893\") " pod="openshift-marketplace/certified-operators-pmqqd" Feb 28 14:38:18 crc kubenswrapper[4897]: I0228 14:38:18.277463 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pmqqd" Feb 28 14:38:18 crc kubenswrapper[4897]: I0228 14:38:18.780315 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pmqqd"] Feb 28 14:38:19 crc kubenswrapper[4897]: I0228 14:38:19.672954 4897 generic.go:334] "Generic (PLEG): container finished" podID="c9c1cce8-8d21-41fa-b74c-2fd66d847893" containerID="0dc8ea328f59ef1dbf7b67b5e8ae9f3f39915f42988190c979f1a65c6b1df271" exitCode=0 Feb 28 14:38:19 crc kubenswrapper[4897]: I0228 14:38:19.673038 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pmqqd" event={"ID":"c9c1cce8-8d21-41fa-b74c-2fd66d847893","Type":"ContainerDied","Data":"0dc8ea328f59ef1dbf7b67b5e8ae9f3f39915f42988190c979f1a65c6b1df271"} Feb 28 14:38:19 crc kubenswrapper[4897]: I0228 14:38:19.673195 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pmqqd" event={"ID":"c9c1cce8-8d21-41fa-b74c-2fd66d847893","Type":"ContainerStarted","Data":"1a3404e752bb707524f338ec3065a071936066ab8cbbf73a10bd7758635aec61"} Feb 28 14:38:19 crc kubenswrapper[4897]: I0228 14:38:19.675734 4897 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 14:38:20 crc kubenswrapper[4897]: E0228 14:38:20.225465 4897 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=c649bff3a95aba662bcce2caae28c3708550f8f13ed49f0891408168e71420e7/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 28 14:38:20 crc kubenswrapper[4897]: E0228 14:38:20.225684 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f8drp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:
nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-pmqqd_openshift-marketplace(c9c1cce8-8d21-41fa-b74c-2fd66d847893): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=c649bff3a95aba662bcce2caae28c3708550f8f13ed49f0891408168e71420e7/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 14:38:20 crc kubenswrapper[4897]: E0228 14:38:20.226928 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=c649bff3a95aba662bcce2caae28c3708550f8f13ed49f0891408168e71420e7/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/certified-operators-pmqqd" podUID="c9c1cce8-8d21-41fa-b74c-2fd66d847893" Feb 28 14:38:20 crc kubenswrapper[4897]: E0228 14:38:20.686959 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-pmqqd" podUID="c9c1cce8-8d21-41fa-b74c-2fd66d847893" Feb 28 14:38:33 crc kubenswrapper[4897]: I0228 14:38:33.851373 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pmqqd" event={"ID":"c9c1cce8-8d21-41fa-b74c-2fd66d847893","Type":"ContainerStarted","Data":"69d3f6008e6e498355843e3f872118477e716b3fa953bc9e05e0e699dff6644f"} Feb 28 14:38:34 crc kubenswrapper[4897]: I0228 14:38:34.869858 4897 generic.go:334] "Generic (PLEG): container finished" podID="c9c1cce8-8d21-41fa-b74c-2fd66d847893" 
containerID="69d3f6008e6e498355843e3f872118477e716b3fa953bc9e05e0e699dff6644f" exitCode=0 Feb 28 14:38:34 crc kubenswrapper[4897]: I0228 14:38:34.869966 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pmqqd" event={"ID":"c9c1cce8-8d21-41fa-b74c-2fd66d847893","Type":"ContainerDied","Data":"69d3f6008e6e498355843e3f872118477e716b3fa953bc9e05e0e699dff6644f"} Feb 28 14:38:36 crc kubenswrapper[4897]: I0228 14:38:36.909566 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pmqqd" event={"ID":"c9c1cce8-8d21-41fa-b74c-2fd66d847893","Type":"ContainerStarted","Data":"df8765a11d4069c4a1a9f666a2f258613bb0dffb8b1d40e89f5ac924549551c9"} Feb 28 14:38:38 crc kubenswrapper[4897]: I0228 14:38:38.277544 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pmqqd" Feb 28 14:38:38 crc kubenswrapper[4897]: I0228 14:38:38.277910 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pmqqd" Feb 28 14:38:39 crc kubenswrapper[4897]: I0228 14:38:39.360442 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-pmqqd" podUID="c9c1cce8-8d21-41fa-b74c-2fd66d847893" containerName="registry-server" probeResult="failure" output=< Feb 28 14:38:39 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 28 14:38:39 crc kubenswrapper[4897]: > Feb 28 14:38:48 crc kubenswrapper[4897]: I0228 14:38:48.361735 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pmqqd" Feb 28 14:38:48 crc kubenswrapper[4897]: I0228 14:38:48.395763 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pmqqd" podStartSLOduration=15.800764527 podStartE2EDuration="31.395737902s" 
podCreationTimestamp="2026-02-28 14:38:17 +0000 UTC" firstStartedPulling="2026-02-28 14:38:19.675284341 +0000 UTC m=+4913.917605028" lastFinishedPulling="2026-02-28 14:38:35.270257716 +0000 UTC m=+4929.512578403" observedRunningTime="2026-02-28 14:38:36.940247627 +0000 UTC m=+4931.182568314" watchObservedRunningTime="2026-02-28 14:38:48.395737902 +0000 UTC m=+4942.638058589" Feb 28 14:38:48 crc kubenswrapper[4897]: I0228 14:38:48.443880 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pmqqd" Feb 28 14:38:49 crc kubenswrapper[4897]: I0228 14:38:49.162278 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pmqqd"] Feb 28 14:38:49 crc kubenswrapper[4897]: I0228 14:38:49.788210 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="db99e06f-c263-4aef-b5c2-330eaed29fd4" containerName="galera" probeResult="failure" output="command timed out" Feb 28 14:38:49 crc kubenswrapper[4897]: I0228 14:38:49.789304 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="db99e06f-c263-4aef-b5c2-330eaed29fd4" containerName="galera" probeResult="failure" output="command timed out" Feb 28 14:38:50 crc kubenswrapper[4897]: I0228 14:38:50.068817 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pmqqd" podUID="c9c1cce8-8d21-41fa-b74c-2fd66d847893" containerName="registry-server" containerID="cri-o://df8765a11d4069c4a1a9f666a2f258613bb0dffb8b1d40e89f5ac924549551c9" gracePeriod=2 Feb 28 14:38:50 crc kubenswrapper[4897]: I0228 14:38:50.400922 4897 scope.go:117] "RemoveContainer" containerID="0acd8a3d2d3cf70df06a8315077113c2b85784b5e25bc0f65ca22ea56950ff84" Feb 28 14:38:50 crc kubenswrapper[4897]: I0228 14:38:50.617032 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pmqqd" Feb 28 14:38:50 crc kubenswrapper[4897]: I0228 14:38:50.675712 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9c1cce8-8d21-41fa-b74c-2fd66d847893-catalog-content\") pod \"c9c1cce8-8d21-41fa-b74c-2fd66d847893\" (UID: \"c9c1cce8-8d21-41fa-b74c-2fd66d847893\") " Feb 28 14:38:50 crc kubenswrapper[4897]: I0228 14:38:50.675826 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8drp\" (UniqueName: \"kubernetes.io/projected/c9c1cce8-8d21-41fa-b74c-2fd66d847893-kube-api-access-f8drp\") pod \"c9c1cce8-8d21-41fa-b74c-2fd66d847893\" (UID: \"c9c1cce8-8d21-41fa-b74c-2fd66d847893\") " Feb 28 14:38:50 crc kubenswrapper[4897]: I0228 14:38:50.675870 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9c1cce8-8d21-41fa-b74c-2fd66d847893-utilities\") pod \"c9c1cce8-8d21-41fa-b74c-2fd66d847893\" (UID: \"c9c1cce8-8d21-41fa-b74c-2fd66d847893\") " Feb 28 14:38:50 crc kubenswrapper[4897]: I0228 14:38:50.677076 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9c1cce8-8d21-41fa-b74c-2fd66d847893-utilities" (OuterVolumeSpecName: "utilities") pod "c9c1cce8-8d21-41fa-b74c-2fd66d847893" (UID: "c9c1cce8-8d21-41fa-b74c-2fd66d847893"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:38:50 crc kubenswrapper[4897]: I0228 14:38:50.682264 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9c1cce8-8d21-41fa-b74c-2fd66d847893-kube-api-access-f8drp" (OuterVolumeSpecName: "kube-api-access-f8drp") pod "c9c1cce8-8d21-41fa-b74c-2fd66d847893" (UID: "c9c1cce8-8d21-41fa-b74c-2fd66d847893"). InnerVolumeSpecName "kube-api-access-f8drp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:38:50 crc kubenswrapper[4897]: I0228 14:38:50.755435 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9c1cce8-8d21-41fa-b74c-2fd66d847893-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c9c1cce8-8d21-41fa-b74c-2fd66d847893" (UID: "c9c1cce8-8d21-41fa-b74c-2fd66d847893"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:38:50 crc kubenswrapper[4897]: I0228 14:38:50.778005 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9c1cce8-8d21-41fa-b74c-2fd66d847893-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 14:38:50 crc kubenswrapper[4897]: I0228 14:38:50.778037 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8drp\" (UniqueName: \"kubernetes.io/projected/c9c1cce8-8d21-41fa-b74c-2fd66d847893-kube-api-access-f8drp\") on node \"crc\" DevicePath \"\"" Feb 28 14:38:50 crc kubenswrapper[4897]: I0228 14:38:50.778048 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9c1cce8-8d21-41fa-b74c-2fd66d847893-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 14:38:51 crc kubenswrapper[4897]: I0228 14:38:51.087165 4897 generic.go:334] "Generic (PLEG): container finished" podID="c9c1cce8-8d21-41fa-b74c-2fd66d847893" containerID="df8765a11d4069c4a1a9f666a2f258613bb0dffb8b1d40e89f5ac924549551c9" exitCode=0 Feb 28 14:38:51 crc kubenswrapper[4897]: I0228 14:38:51.087231 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pmqqd" event={"ID":"c9c1cce8-8d21-41fa-b74c-2fd66d847893","Type":"ContainerDied","Data":"df8765a11d4069c4a1a9f666a2f258613bb0dffb8b1d40e89f5ac924549551c9"} Feb 28 14:38:51 crc kubenswrapper[4897]: I0228 14:38:51.087271 4897 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-pmqqd" event={"ID":"c9c1cce8-8d21-41fa-b74c-2fd66d847893","Type":"ContainerDied","Data":"1a3404e752bb707524f338ec3065a071936066ab8cbbf73a10bd7758635aec61"} Feb 28 14:38:51 crc kubenswrapper[4897]: I0228 14:38:51.087272 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pmqqd" Feb 28 14:38:51 crc kubenswrapper[4897]: I0228 14:38:51.087476 4897 scope.go:117] "RemoveContainer" containerID="df8765a11d4069c4a1a9f666a2f258613bb0dffb8b1d40e89f5ac924549551c9" Feb 28 14:38:51 crc kubenswrapper[4897]: I0228 14:38:51.124522 4897 scope.go:117] "RemoveContainer" containerID="69d3f6008e6e498355843e3f872118477e716b3fa953bc9e05e0e699dff6644f" Feb 28 14:38:51 crc kubenswrapper[4897]: I0228 14:38:51.149652 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pmqqd"] Feb 28 14:38:51 crc kubenswrapper[4897]: I0228 14:38:51.162396 4897 scope.go:117] "RemoveContainer" containerID="0dc8ea328f59ef1dbf7b67b5e8ae9f3f39915f42988190c979f1a65c6b1df271" Feb 28 14:38:51 crc kubenswrapper[4897]: I0228 14:38:51.166183 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pmqqd"] Feb 28 14:38:51 crc kubenswrapper[4897]: I0228 14:38:51.196668 4897 scope.go:117] "RemoveContainer" containerID="df8765a11d4069c4a1a9f666a2f258613bb0dffb8b1d40e89f5ac924549551c9" Feb 28 14:38:51 crc kubenswrapper[4897]: E0228 14:38:51.197208 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df8765a11d4069c4a1a9f666a2f258613bb0dffb8b1d40e89f5ac924549551c9\": container with ID starting with df8765a11d4069c4a1a9f666a2f258613bb0dffb8b1d40e89f5ac924549551c9 not found: ID does not exist" containerID="df8765a11d4069c4a1a9f666a2f258613bb0dffb8b1d40e89f5ac924549551c9" Feb 28 14:38:51 crc kubenswrapper[4897]: I0228 
14:38:51.197264 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df8765a11d4069c4a1a9f666a2f258613bb0dffb8b1d40e89f5ac924549551c9"} err="failed to get container status \"df8765a11d4069c4a1a9f666a2f258613bb0dffb8b1d40e89f5ac924549551c9\": rpc error: code = NotFound desc = could not find container \"df8765a11d4069c4a1a9f666a2f258613bb0dffb8b1d40e89f5ac924549551c9\": container with ID starting with df8765a11d4069c4a1a9f666a2f258613bb0dffb8b1d40e89f5ac924549551c9 not found: ID does not exist" Feb 28 14:38:51 crc kubenswrapper[4897]: I0228 14:38:51.197302 4897 scope.go:117] "RemoveContainer" containerID="69d3f6008e6e498355843e3f872118477e716b3fa953bc9e05e0e699dff6644f" Feb 28 14:38:51 crc kubenswrapper[4897]: E0228 14:38:51.197943 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69d3f6008e6e498355843e3f872118477e716b3fa953bc9e05e0e699dff6644f\": container with ID starting with 69d3f6008e6e498355843e3f872118477e716b3fa953bc9e05e0e699dff6644f not found: ID does not exist" containerID="69d3f6008e6e498355843e3f872118477e716b3fa953bc9e05e0e699dff6644f" Feb 28 14:38:51 crc kubenswrapper[4897]: I0228 14:38:51.198018 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69d3f6008e6e498355843e3f872118477e716b3fa953bc9e05e0e699dff6644f"} err="failed to get container status \"69d3f6008e6e498355843e3f872118477e716b3fa953bc9e05e0e699dff6644f\": rpc error: code = NotFound desc = could not find container \"69d3f6008e6e498355843e3f872118477e716b3fa953bc9e05e0e699dff6644f\": container with ID starting with 69d3f6008e6e498355843e3f872118477e716b3fa953bc9e05e0e699dff6644f not found: ID does not exist" Feb 28 14:38:51 crc kubenswrapper[4897]: I0228 14:38:51.198062 4897 scope.go:117] "RemoveContainer" containerID="0dc8ea328f59ef1dbf7b67b5e8ae9f3f39915f42988190c979f1a65c6b1df271" Feb 28 14:38:51 crc 
kubenswrapper[4897]: E0228 14:38:51.198842 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dc8ea328f59ef1dbf7b67b5e8ae9f3f39915f42988190c979f1a65c6b1df271\": container with ID starting with 0dc8ea328f59ef1dbf7b67b5e8ae9f3f39915f42988190c979f1a65c6b1df271 not found: ID does not exist" containerID="0dc8ea328f59ef1dbf7b67b5e8ae9f3f39915f42988190c979f1a65c6b1df271" Feb 28 14:38:51 crc kubenswrapper[4897]: I0228 14:38:51.198922 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dc8ea328f59ef1dbf7b67b5e8ae9f3f39915f42988190c979f1a65c6b1df271"} err="failed to get container status \"0dc8ea328f59ef1dbf7b67b5e8ae9f3f39915f42988190c979f1a65c6b1df271\": rpc error: code = NotFound desc = could not find container \"0dc8ea328f59ef1dbf7b67b5e8ae9f3f39915f42988190c979f1a65c6b1df271\": container with ID starting with 0dc8ea328f59ef1dbf7b67b5e8ae9f3f39915f42988190c979f1a65c6b1df271 not found: ID does not exist" Feb 28 14:38:52 crc kubenswrapper[4897]: I0228 14:38:52.476645 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9c1cce8-8d21-41fa-b74c-2fd66d847893" path="/var/lib/kubelet/pods/c9c1cce8-8d21-41fa-b74c-2fd66d847893/volumes" Feb 28 14:39:33 crc kubenswrapper[4897]: I0228 14:39:33.371221 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 14:39:33 crc kubenswrapper[4897]: I0228 14:39:33.371910 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 28 14:40:00 crc kubenswrapper[4897]: I0228 14:40:00.171795 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538160-mjvdr"] Feb 28 14:40:00 crc kubenswrapper[4897]: E0228 14:40:00.173204 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9c1cce8-8d21-41fa-b74c-2fd66d847893" containerName="extract-utilities" Feb 28 14:40:00 crc kubenswrapper[4897]: I0228 14:40:00.173228 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9c1cce8-8d21-41fa-b74c-2fd66d847893" containerName="extract-utilities" Feb 28 14:40:00 crc kubenswrapper[4897]: E0228 14:40:00.173279 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9c1cce8-8d21-41fa-b74c-2fd66d847893" containerName="registry-server" Feb 28 14:40:00 crc kubenswrapper[4897]: I0228 14:40:00.173293 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9c1cce8-8d21-41fa-b74c-2fd66d847893" containerName="registry-server" Feb 28 14:40:00 crc kubenswrapper[4897]: E0228 14:40:00.173377 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9c1cce8-8d21-41fa-b74c-2fd66d847893" containerName="extract-content" Feb 28 14:40:00 crc kubenswrapper[4897]: I0228 14:40:00.173394 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9c1cce8-8d21-41fa-b74c-2fd66d847893" containerName="extract-content" Feb 28 14:40:00 crc kubenswrapper[4897]: I0228 14:40:00.173786 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9c1cce8-8d21-41fa-b74c-2fd66d847893" containerName="registry-server" Feb 28 14:40:00 crc kubenswrapper[4897]: I0228 14:40:00.175051 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538160-mjvdr" Feb 28 14:40:00 crc kubenswrapper[4897]: I0228 14:40:00.183020 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 14:40:00 crc kubenswrapper[4897]: I0228 14:40:00.187682 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 14:40:00 crc kubenswrapper[4897]: I0228 14:40:00.188016 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 14:40:00 crc kubenswrapper[4897]: I0228 14:40:00.189341 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538160-mjvdr"] Feb 28 14:40:00 crc kubenswrapper[4897]: I0228 14:40:00.320346 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqbvw\" (UniqueName: \"kubernetes.io/projected/56142008-17ca-4caf-90f8-6588f0d2cec2-kube-api-access-hqbvw\") pod \"auto-csr-approver-29538160-mjvdr\" (UID: \"56142008-17ca-4caf-90f8-6588f0d2cec2\") " pod="openshift-infra/auto-csr-approver-29538160-mjvdr" Feb 28 14:40:00 crc kubenswrapper[4897]: I0228 14:40:00.422573 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqbvw\" (UniqueName: \"kubernetes.io/projected/56142008-17ca-4caf-90f8-6588f0d2cec2-kube-api-access-hqbvw\") pod \"auto-csr-approver-29538160-mjvdr\" (UID: \"56142008-17ca-4caf-90f8-6588f0d2cec2\") " pod="openshift-infra/auto-csr-approver-29538160-mjvdr" Feb 28 14:40:00 crc kubenswrapper[4897]: I0228 14:40:00.792295 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqbvw\" (UniqueName: \"kubernetes.io/projected/56142008-17ca-4caf-90f8-6588f0d2cec2-kube-api-access-hqbvw\") pod \"auto-csr-approver-29538160-mjvdr\" (UID: \"56142008-17ca-4caf-90f8-6588f0d2cec2\") " 
pod="openshift-infra/auto-csr-approver-29538160-mjvdr" Feb 28 14:40:00 crc kubenswrapper[4897]: I0228 14:40:00.814964 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538160-mjvdr" Feb 28 14:40:01 crc kubenswrapper[4897]: I0228 14:40:01.325903 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538160-mjvdr"] Feb 28 14:40:01 crc kubenswrapper[4897]: I0228 14:40:01.902834 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538160-mjvdr" event={"ID":"56142008-17ca-4caf-90f8-6588f0d2cec2","Type":"ContainerStarted","Data":"091aa374ca8e5ae4c6ea35c59c5d7c5142319f7cefb65b214a1e1555a3029ec9"} Feb 28 14:40:02 crc kubenswrapper[4897]: E0228 14:40:02.802120 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 28 14:40:02 crc kubenswrapper[4897]: E0228 14:40:02.802445 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 14:40:02 crc kubenswrapper[4897]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 28 14:40:02 crc kubenswrapper[4897]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hqbvw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29538160-mjvdr_openshift-infra(56142008-17ca-4caf-90f8-6588f0d2cec2): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 28 14:40:02 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 14:40:02 crc kubenswrapper[4897]: E0228 14:40:02.803567 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29538160-mjvdr" podUID="56142008-17ca-4caf-90f8-6588f0d2cec2" Feb 28 14:40:02 crc kubenswrapper[4897]: E0228 14:40:02.917241 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" 
pod="openshift-infra/auto-csr-approver-29538160-mjvdr" podUID="56142008-17ca-4caf-90f8-6588f0d2cec2" Feb 28 14:40:03 crc kubenswrapper[4897]: I0228 14:40:03.370846 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 14:40:03 crc kubenswrapper[4897]: I0228 14:40:03.370937 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 14:40:18 crc kubenswrapper[4897]: E0228 14:40:18.271058 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 28 14:40:18 crc kubenswrapper[4897]: E0228 14:40:18.271955 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 14:40:18 crc kubenswrapper[4897]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 28 14:40:18 crc kubenswrapper[4897]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hqbvw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29538160-mjvdr_openshift-infra(56142008-17ca-4caf-90f8-6588f0d2cec2): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 28 14:40:18 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 14:40:18 crc kubenswrapper[4897]: E0228 14:40:18.273245 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29538160-mjvdr" podUID="56142008-17ca-4caf-90f8-6588f0d2cec2" Feb 28 14:40:32 crc kubenswrapper[4897]: E0228 14:40:32.460021 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" 
pod="openshift-infra/auto-csr-approver-29538160-mjvdr" podUID="56142008-17ca-4caf-90f8-6588f0d2cec2" Feb 28 14:40:33 crc kubenswrapper[4897]: I0228 14:40:33.370842 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 14:40:33 crc kubenswrapper[4897]: I0228 14:40:33.371159 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 14:40:33 crc kubenswrapper[4897]: I0228 14:40:33.371209 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brq22" Feb 28 14:40:33 crc kubenswrapper[4897]: I0228 14:40:33.372043 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3eec40c4cd69e7bd4645f3609c97b34d827169ce5d38c73cc72ba33c8af50e99"} pod="openshift-machine-config-operator/machine-config-daemon-brq22" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 14:40:33 crc kubenswrapper[4897]: I0228 14:40:33.372116 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" containerID="cri-o://3eec40c4cd69e7bd4645f3609c97b34d827169ce5d38c73cc72ba33c8af50e99" gracePeriod=600 Feb 28 14:40:33 crc kubenswrapper[4897]: E0228 14:40:33.509295 4897 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:40:34 crc kubenswrapper[4897]: I0228 14:40:34.291193 4897 generic.go:334] "Generic (PLEG): container finished" podID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerID="3eec40c4cd69e7bd4645f3609c97b34d827169ce5d38c73cc72ba33c8af50e99" exitCode=0 Feb 28 14:40:34 crc kubenswrapper[4897]: I0228 14:40:34.291283 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerDied","Data":"3eec40c4cd69e7bd4645f3609c97b34d827169ce5d38c73cc72ba33c8af50e99"} Feb 28 14:40:34 crc kubenswrapper[4897]: I0228 14:40:34.291685 4897 scope.go:117] "RemoveContainer" containerID="e9cdd3f2d51f367992d3757825c9f8875fdc1e548ec99ae88e80183b05259b62" Feb 28 14:40:34 crc kubenswrapper[4897]: I0228 14:40:34.292898 4897 scope.go:117] "RemoveContainer" containerID="3eec40c4cd69e7bd4645f3609c97b34d827169ce5d38c73cc72ba33c8af50e99" Feb 28 14:40:34 crc kubenswrapper[4897]: E0228 14:40:34.295736 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:40:45 crc kubenswrapper[4897]: I0228 14:40:45.457017 4897 scope.go:117] "RemoveContainer" 
containerID="3eec40c4cd69e7bd4645f3609c97b34d827169ce5d38c73cc72ba33c8af50e99" Feb 28 14:40:45 crc kubenswrapper[4897]: E0228 14:40:45.458202 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:40:49 crc kubenswrapper[4897]: I0228 14:40:49.485092 4897 generic.go:334] "Generic (PLEG): container finished" podID="56142008-17ca-4caf-90f8-6588f0d2cec2" containerID="2c99ced080513ff6c4b03807890473e00c8be7a3315f7a11fecbe39c33520fb6" exitCode=0 Feb 28 14:40:49 crc kubenswrapper[4897]: I0228 14:40:49.485235 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538160-mjvdr" event={"ID":"56142008-17ca-4caf-90f8-6588f0d2cec2","Type":"ContainerDied","Data":"2c99ced080513ff6c4b03807890473e00c8be7a3315f7a11fecbe39c33520fb6"} Feb 28 14:40:50 crc kubenswrapper[4897]: I0228 14:40:50.837205 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538160-mjvdr" Feb 28 14:40:50 crc kubenswrapper[4897]: I0228 14:40:50.949915 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqbvw\" (UniqueName: \"kubernetes.io/projected/56142008-17ca-4caf-90f8-6588f0d2cec2-kube-api-access-hqbvw\") pod \"56142008-17ca-4caf-90f8-6588f0d2cec2\" (UID: \"56142008-17ca-4caf-90f8-6588f0d2cec2\") " Feb 28 14:40:50 crc kubenswrapper[4897]: I0228 14:40:50.956707 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56142008-17ca-4caf-90f8-6588f0d2cec2-kube-api-access-hqbvw" (OuterVolumeSpecName: "kube-api-access-hqbvw") pod "56142008-17ca-4caf-90f8-6588f0d2cec2" (UID: "56142008-17ca-4caf-90f8-6588f0d2cec2"). InnerVolumeSpecName "kube-api-access-hqbvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:40:51 crc kubenswrapper[4897]: I0228 14:40:51.052589 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqbvw\" (UniqueName: \"kubernetes.io/projected/56142008-17ca-4caf-90f8-6588f0d2cec2-kube-api-access-hqbvw\") on node \"crc\" DevicePath \"\"" Feb 28 14:40:51 crc kubenswrapper[4897]: I0228 14:40:51.514059 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538160-mjvdr" event={"ID":"56142008-17ca-4caf-90f8-6588f0d2cec2","Type":"ContainerDied","Data":"091aa374ca8e5ae4c6ea35c59c5d7c5142319f7cefb65b214a1e1555a3029ec9"} Feb 28 14:40:51 crc kubenswrapper[4897]: I0228 14:40:51.514101 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="091aa374ca8e5ae4c6ea35c59c5d7c5142319f7cefb65b214a1e1555a3029ec9" Feb 28 14:40:51 crc kubenswrapper[4897]: I0228 14:40:51.514161 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538160-mjvdr" Feb 28 14:40:51 crc kubenswrapper[4897]: I0228 14:40:51.919277 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538154-flq5h"] Feb 28 14:40:51 crc kubenswrapper[4897]: I0228 14:40:51.928089 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538154-flq5h"] Feb 28 14:40:52 crc kubenswrapper[4897]: I0228 14:40:52.472401 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc826503-5c33-4f4f-90a4-dbf79bc0f893" path="/var/lib/kubelet/pods/bc826503-5c33-4f4f-90a4-dbf79bc0f893/volumes" Feb 28 14:40:53 crc kubenswrapper[4897]: I0228 14:40:53.307395 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rjv4f"] Feb 28 14:40:53 crc kubenswrapper[4897]: E0228 14:40:53.308391 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56142008-17ca-4caf-90f8-6588f0d2cec2" containerName="oc" Feb 28 14:40:53 crc kubenswrapper[4897]: I0228 14:40:53.308415 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="56142008-17ca-4caf-90f8-6588f0d2cec2" containerName="oc" Feb 28 14:40:53 crc kubenswrapper[4897]: I0228 14:40:53.308764 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="56142008-17ca-4caf-90f8-6588f0d2cec2" containerName="oc" Feb 28 14:40:53 crc kubenswrapper[4897]: I0228 14:40:53.311269 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rjv4f" Feb 28 14:40:53 crc kubenswrapper[4897]: I0228 14:40:53.322171 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjv4f"] Feb 28 14:40:53 crc kubenswrapper[4897]: I0228 14:40:53.400027 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdaba5c6-34b9-4255-9787-ae9213f11063-catalog-content\") pod \"redhat-marketplace-rjv4f\" (UID: \"bdaba5c6-34b9-4255-9787-ae9213f11063\") " pod="openshift-marketplace/redhat-marketplace-rjv4f" Feb 28 14:40:53 crc kubenswrapper[4897]: I0228 14:40:53.400114 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdaba5c6-34b9-4255-9787-ae9213f11063-utilities\") pod \"redhat-marketplace-rjv4f\" (UID: \"bdaba5c6-34b9-4255-9787-ae9213f11063\") " pod="openshift-marketplace/redhat-marketplace-rjv4f" Feb 28 14:40:53 crc kubenswrapper[4897]: I0228 14:40:53.400258 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlp82\" (UniqueName: \"kubernetes.io/projected/bdaba5c6-34b9-4255-9787-ae9213f11063-kube-api-access-hlp82\") pod \"redhat-marketplace-rjv4f\" (UID: \"bdaba5c6-34b9-4255-9787-ae9213f11063\") " pod="openshift-marketplace/redhat-marketplace-rjv4f" Feb 28 14:40:53 crc kubenswrapper[4897]: I0228 14:40:53.500982 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdaba5c6-34b9-4255-9787-ae9213f11063-utilities\") pod \"redhat-marketplace-rjv4f\" (UID: \"bdaba5c6-34b9-4255-9787-ae9213f11063\") " pod="openshift-marketplace/redhat-marketplace-rjv4f" Feb 28 14:40:53 crc kubenswrapper[4897]: I0228 14:40:53.501069 4897 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-hlp82\" (UniqueName: \"kubernetes.io/projected/bdaba5c6-34b9-4255-9787-ae9213f11063-kube-api-access-hlp82\") pod \"redhat-marketplace-rjv4f\" (UID: \"bdaba5c6-34b9-4255-9787-ae9213f11063\") " pod="openshift-marketplace/redhat-marketplace-rjv4f" Feb 28 14:40:53 crc kubenswrapper[4897]: I0228 14:40:53.501238 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdaba5c6-34b9-4255-9787-ae9213f11063-catalog-content\") pod \"redhat-marketplace-rjv4f\" (UID: \"bdaba5c6-34b9-4255-9787-ae9213f11063\") " pod="openshift-marketplace/redhat-marketplace-rjv4f" Feb 28 14:40:53 crc kubenswrapper[4897]: I0228 14:40:53.502073 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdaba5c6-34b9-4255-9787-ae9213f11063-catalog-content\") pod \"redhat-marketplace-rjv4f\" (UID: \"bdaba5c6-34b9-4255-9787-ae9213f11063\") " pod="openshift-marketplace/redhat-marketplace-rjv4f" Feb 28 14:40:53 crc kubenswrapper[4897]: I0228 14:40:53.502056 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdaba5c6-34b9-4255-9787-ae9213f11063-utilities\") pod \"redhat-marketplace-rjv4f\" (UID: \"bdaba5c6-34b9-4255-9787-ae9213f11063\") " pod="openshift-marketplace/redhat-marketplace-rjv4f" Feb 28 14:40:53 crc kubenswrapper[4897]: I0228 14:40:53.522206 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlp82\" (UniqueName: \"kubernetes.io/projected/bdaba5c6-34b9-4255-9787-ae9213f11063-kube-api-access-hlp82\") pod \"redhat-marketplace-rjv4f\" (UID: \"bdaba5c6-34b9-4255-9787-ae9213f11063\") " pod="openshift-marketplace/redhat-marketplace-rjv4f" Feb 28 14:40:53 crc kubenswrapper[4897]: I0228 14:40:53.634951 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rjv4f" Feb 28 14:40:54 crc kubenswrapper[4897]: I0228 14:40:54.080039 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjv4f"] Feb 28 14:40:54 crc kubenswrapper[4897]: I0228 14:40:54.586741 4897 generic.go:334] "Generic (PLEG): container finished" podID="bdaba5c6-34b9-4255-9787-ae9213f11063" containerID="476863c37d10e21ceb3a809c3f73420e45b40f44cecc664096515841d5e9e0cd" exitCode=0 Feb 28 14:40:54 crc kubenswrapper[4897]: I0228 14:40:54.587774 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjv4f" event={"ID":"bdaba5c6-34b9-4255-9787-ae9213f11063","Type":"ContainerDied","Data":"476863c37d10e21ceb3a809c3f73420e45b40f44cecc664096515841d5e9e0cd"} Feb 28 14:40:54 crc kubenswrapper[4897]: I0228 14:40:54.587851 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjv4f" event={"ID":"bdaba5c6-34b9-4255-9787-ae9213f11063","Type":"ContainerStarted","Data":"b5713d58614cd10f190e3174fc338050c33b65913703bf1b274fa85893ab0d64"} Feb 28 14:40:55 crc kubenswrapper[4897]: E0228 14:40:55.267439 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=4c1235c462902350224ff1dfc90bd9f47b4fe1b4d05c3d5fc542c5c8e38ee80a/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 28 14:40:55 crc kubenswrapper[4897]: E0228 14:40:55.267688 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog 
--cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hlp82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-rjv4f_openshift-marketplace(bdaba5c6-34b9-4255-9787-ae9213f11063): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=4c1235c462902350224ff1dfc90bd9f47b4fe1b4d05c3d5fc542c5c8e38ee80a/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 14:40:55 crc kubenswrapper[4897]: E0228 14:40:55.269476 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest 
list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=4c1235c462902350224ff1dfc90bd9f47b4fe1b4d05c3d5fc542c5c8e38ee80a/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-rjv4f" podUID="bdaba5c6-34b9-4255-9787-ae9213f11063" Feb 28 14:40:55 crc kubenswrapper[4897]: E0228 14:40:55.602331 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-rjv4f" podUID="bdaba5c6-34b9-4255-9787-ae9213f11063" Feb 28 14:40:57 crc kubenswrapper[4897]: I0228 14:40:57.457435 4897 scope.go:117] "RemoveContainer" containerID="3eec40c4cd69e7bd4645f3609c97b34d827169ce5d38c73cc72ba33c8af50e99" Feb 28 14:40:57 crc kubenswrapper[4897]: E0228 14:40:57.457977 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:40:59 crc kubenswrapper[4897]: I0228 14:40:59.787967 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="db99e06f-c263-4aef-b5c2-330eaed29fd4" containerName="galera" probeResult="failure" output="command timed out" Feb 28 14:40:59 crc kubenswrapper[4897]: I0228 14:40:59.789599 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="db99e06f-c263-4aef-b5c2-330eaed29fd4" containerName="galera" probeResult="failure" output="command timed out" Feb 28 14:41:09 crc kubenswrapper[4897]: 
E0228 14:41:09.137284 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=4c1235c462902350224ff1dfc90bd9f47b4fe1b4d05c3d5fc542c5c8e38ee80a/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 28 14:41:09 crc kubenswrapper[4897]: E0228 14:41:09.137916 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hlp82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolic
y:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-rjv4f_openshift-marketplace(bdaba5c6-34b9-4255-9787-ae9213f11063): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=4c1235c462902350224ff1dfc90bd9f47b4fe1b4d05c3d5fc542c5c8e38ee80a/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 14:41:09 crc kubenswrapper[4897]: E0228 14:41:09.139407 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=4c1235c462902350224ff1dfc90bd9f47b4fe1b4d05c3d5fc542c5c8e38ee80a/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-rjv4f" podUID="bdaba5c6-34b9-4255-9787-ae9213f11063" Feb 28 14:41:11 crc kubenswrapper[4897]: I0228 14:41:11.457293 4897 scope.go:117] "RemoveContainer" containerID="3eec40c4cd69e7bd4645f3609c97b34d827169ce5d38c73cc72ba33c8af50e99" Feb 28 14:41:11 crc kubenswrapper[4897]: E0228 14:41:11.458832 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:41:21 crc kubenswrapper[4897]: E0228 14:41:21.459353 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with 
ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-rjv4f" podUID="bdaba5c6-34b9-4255-9787-ae9213f11063" Feb 28 14:41:24 crc kubenswrapper[4897]: I0228 14:41:24.457367 4897 scope.go:117] "RemoveContainer" containerID="3eec40c4cd69e7bd4645f3609c97b34d827169ce5d38c73cc72ba33c8af50e99" Feb 28 14:41:24 crc kubenswrapper[4897]: E0228 14:41:24.458655 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:41:33 crc kubenswrapper[4897]: E0228 14:41:33.962083 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=4c1235c462902350224ff1dfc90bd9f47b4fe1b4d05c3d5fc542c5c8e38ee80a/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 28 14:41:33 crc kubenswrapper[4897]: E0228 14:41:33.962939 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hlp82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-rjv4f_openshift-marketplace(bdaba5c6-34b9-4255-9787-ae9213f11063): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=4c1235c462902350224ff1dfc90bd9f47b4fe1b4d05c3d5fc542c5c8e38ee80a/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 14:41:33 crc kubenswrapper[4897]: E0228 14:41:33.964210 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=4c1235c462902350224ff1dfc90bd9f47b4fe1b4d05c3d5fc542c5c8e38ee80a/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-rjv4f" podUID="bdaba5c6-34b9-4255-9787-ae9213f11063" Feb 28 14:41:37 crc kubenswrapper[4897]: I0228 14:41:37.457141 4897 scope.go:117] "RemoveContainer" containerID="3eec40c4cd69e7bd4645f3609c97b34d827169ce5d38c73cc72ba33c8af50e99" Feb 28 14:41:37 crc kubenswrapper[4897]: E0228 14:41:37.461073 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:41:48 crc kubenswrapper[4897]: E0228 14:41:48.461295 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-rjv4f" podUID="bdaba5c6-34b9-4255-9787-ae9213f11063" Feb 28 14:41:49 crc kubenswrapper[4897]: I0228 14:41:49.457063 4897 scope.go:117] "RemoveContainer" containerID="3eec40c4cd69e7bd4645f3609c97b34d827169ce5d38c73cc72ba33c8af50e99" Feb 28 14:41:49 crc kubenswrapper[4897]: E0228 14:41:49.457556 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:41:50 crc kubenswrapper[4897]: I0228 14:41:50.586029 4897 scope.go:117] "RemoveContainer" containerID="b2204929c5b135e1ab83393022bb15967fdcdd41e3f6f6eec84d5cadd7f0dd19" Feb 28 14:42:00 crc kubenswrapper[4897]: I0228 14:42:00.158779 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538162-kmcqc"] Feb 28 14:42:00 crc kubenswrapper[4897]: I0228 14:42:00.162442 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538162-kmcqc" Feb 28 14:42:00 crc kubenswrapper[4897]: I0228 14:42:00.167778 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 14:42:00 crc kubenswrapper[4897]: I0228 14:42:00.168188 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 14:42:00 crc kubenswrapper[4897]: I0228 14:42:00.168431 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 14:42:00 crc kubenswrapper[4897]: I0228 14:42:00.169493 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538162-kmcqc"] Feb 28 14:42:00 crc kubenswrapper[4897]: I0228 14:42:00.246760 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k42k\" (UniqueName: \"kubernetes.io/projected/18345f62-4936-476c-85bf-2634b07217b7-kube-api-access-8k42k\") pod \"auto-csr-approver-29538162-kmcqc\" (UID: \"18345f62-4936-476c-85bf-2634b07217b7\") " pod="openshift-infra/auto-csr-approver-29538162-kmcqc" Feb 28 14:42:00 crc kubenswrapper[4897]: I0228 14:42:00.348787 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k42k\" (UniqueName: 
\"kubernetes.io/projected/18345f62-4936-476c-85bf-2634b07217b7-kube-api-access-8k42k\") pod \"auto-csr-approver-29538162-kmcqc\" (UID: \"18345f62-4936-476c-85bf-2634b07217b7\") " pod="openshift-infra/auto-csr-approver-29538162-kmcqc" Feb 28 14:42:00 crc kubenswrapper[4897]: I0228 14:42:00.382272 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k42k\" (UniqueName: \"kubernetes.io/projected/18345f62-4936-476c-85bf-2634b07217b7-kube-api-access-8k42k\") pod \"auto-csr-approver-29538162-kmcqc\" (UID: \"18345f62-4936-476c-85bf-2634b07217b7\") " pod="openshift-infra/auto-csr-approver-29538162-kmcqc" Feb 28 14:42:00 crc kubenswrapper[4897]: I0228 14:42:00.506277 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538162-kmcqc" Feb 28 14:42:01 crc kubenswrapper[4897]: I0228 14:42:01.220419 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538162-kmcqc"] Feb 28 14:42:01 crc kubenswrapper[4897]: I0228 14:42:01.428239 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538162-kmcqc" event={"ID":"18345f62-4936-476c-85bf-2634b07217b7","Type":"ContainerStarted","Data":"7494f1d2a2ea4c0d7dd0faf2aa68a79b62ed4aba0b9f677126abfa7ea2ab31fd"} Feb 28 14:42:01 crc kubenswrapper[4897]: I0228 14:42:01.457683 4897 scope.go:117] "RemoveContainer" containerID="3eec40c4cd69e7bd4645f3609c97b34d827169ce5d38c73cc72ba33c8af50e99" Feb 28 14:42:01 crc kubenswrapper[4897]: E0228 14:42:01.458193 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" 
podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:42:01 crc kubenswrapper[4897]: E0228 14:42:01.459502 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-rjv4f" podUID="bdaba5c6-34b9-4255-9787-ae9213f11063" Feb 28 14:42:02 crc kubenswrapper[4897]: E0228 14:42:02.164936 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 28 14:42:02 crc kubenswrapper[4897]: E0228 14:42:02.165372 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 14:42:02 crc kubenswrapper[4897]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 28 14:42:02 crc kubenswrapper[4897]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8k42k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29538162-kmcqc_openshift-infra(18345f62-4936-476c-85bf-2634b07217b7): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 28 14:42:02 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 14:42:02 crc kubenswrapper[4897]: E0228 14:42:02.166625 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29538162-kmcqc" podUID="18345f62-4936-476c-85bf-2634b07217b7" Feb 28 14:42:02 crc kubenswrapper[4897]: E0228 14:42:02.457452 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" 
pod="openshift-infra/auto-csr-approver-29538162-kmcqc" podUID="18345f62-4936-476c-85bf-2634b07217b7" Feb 28 14:42:12 crc kubenswrapper[4897]: E0228 14:42:12.459075 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-rjv4f" podUID="bdaba5c6-34b9-4255-9787-ae9213f11063" Feb 28 14:42:13 crc kubenswrapper[4897]: I0228 14:42:13.457184 4897 scope.go:117] "RemoveContainer" containerID="3eec40c4cd69e7bd4645f3609c97b34d827169ce5d38c73cc72ba33c8af50e99" Feb 28 14:42:13 crc kubenswrapper[4897]: E0228 14:42:13.458103 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:42:14 crc kubenswrapper[4897]: E0228 14:42:14.358251 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 28 14:42:14 crc kubenswrapper[4897]: E0228 14:42:14.359680 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 14:42:14 crc kubenswrapper[4897]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not 
.status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 28 14:42:14 crc kubenswrapper[4897]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8k42k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29538162-kmcqc_openshift-infra(18345f62-4936-476c-85bf-2634b07217b7): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 28 14:42:14 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 14:42:14 crc kubenswrapper[4897]: E0228 14:42:14.361788 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29538162-kmcqc" podUID="18345f62-4936-476c-85bf-2634b07217b7" Feb 28 14:42:25 crc kubenswrapper[4897]: I0228 14:42:25.457511 4897 scope.go:117] "RemoveContainer" 
containerID="3eec40c4cd69e7bd4645f3609c97b34d827169ce5d38c73cc72ba33c8af50e99" Feb 28 14:42:25 crc kubenswrapper[4897]: E0228 14:42:25.458996 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:42:25 crc kubenswrapper[4897]: E0228 14:42:25.461280 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538162-kmcqc" podUID="18345f62-4936-476c-85bf-2634b07217b7" Feb 28 14:42:28 crc kubenswrapper[4897]: I0228 14:42:28.759252 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjv4f" event={"ID":"bdaba5c6-34b9-4255-9787-ae9213f11063","Type":"ContainerStarted","Data":"4beeebac772a6bd8c57e278ea89b7208ae46d75e3feeeaeab7640d209da07250"} Feb 28 14:42:29 crc kubenswrapper[4897]: I0228 14:42:29.773917 4897 generic.go:334] "Generic (PLEG): container finished" podID="bdaba5c6-34b9-4255-9787-ae9213f11063" containerID="4beeebac772a6bd8c57e278ea89b7208ae46d75e3feeeaeab7640d209da07250" exitCode=0 Feb 28 14:42:29 crc kubenswrapper[4897]: I0228 14:42:29.773964 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjv4f" event={"ID":"bdaba5c6-34b9-4255-9787-ae9213f11063","Type":"ContainerDied","Data":"4beeebac772a6bd8c57e278ea89b7208ae46d75e3feeeaeab7640d209da07250"} Feb 28 14:42:30 crc kubenswrapper[4897]: I0228 14:42:30.787728 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-rjv4f" event={"ID":"bdaba5c6-34b9-4255-9787-ae9213f11063","Type":"ContainerStarted","Data":"79be1cf22946da044d851c583a022e789c3ae74dbbe864d9dad9fe57b8f5eb97"} Feb 28 14:42:30 crc kubenswrapper[4897]: I0228 14:42:30.817594 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rjv4f" podStartSLOduration=2.013131927 podStartE2EDuration="1m37.817556343s" podCreationTimestamp="2026-02-28 14:40:53 +0000 UTC" firstStartedPulling="2026-02-28 14:40:54.593430609 +0000 UTC m=+5068.835751266" lastFinishedPulling="2026-02-28 14:42:30.397855015 +0000 UTC m=+5164.640175682" observedRunningTime="2026-02-28 14:42:30.813164379 +0000 UTC m=+5165.055485056" watchObservedRunningTime="2026-02-28 14:42:30.817556343 +0000 UTC m=+5165.059877050" Feb 28 14:42:33 crc kubenswrapper[4897]: I0228 14:42:33.635267 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rjv4f" Feb 28 14:42:33 crc kubenswrapper[4897]: I0228 14:42:33.635845 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rjv4f" Feb 28 14:42:34 crc kubenswrapper[4897]: I0228 14:42:34.693609 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-rjv4f" podUID="bdaba5c6-34b9-4255-9787-ae9213f11063" containerName="registry-server" probeResult="failure" output=< Feb 28 14:42:34 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 28 14:42:34 crc kubenswrapper[4897]: > Feb 28 14:42:37 crc kubenswrapper[4897]: I0228 14:42:37.456868 4897 scope.go:117] "RemoveContainer" containerID="3eec40c4cd69e7bd4645f3609c97b34d827169ce5d38c73cc72ba33c8af50e99" Feb 28 14:42:37 crc kubenswrapper[4897]: E0228 14:42:37.457763 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:42:39 crc kubenswrapper[4897]: I0228 14:42:39.925851 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538162-kmcqc" event={"ID":"18345f62-4936-476c-85bf-2634b07217b7","Type":"ContainerStarted","Data":"99c0995b3c8219487b31033a12014b5e280b463ccb575a67247d20561f43212e"} Feb 28 14:42:39 crc kubenswrapper[4897]: I0228 14:42:39.947517 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29538162-kmcqc" podStartSLOduration=1.753564983 podStartE2EDuration="39.947499759s" podCreationTimestamp="2026-02-28 14:42:00 +0000 UTC" firstStartedPulling="2026-02-28 14:42:01.228020942 +0000 UTC m=+5135.470341629" lastFinishedPulling="2026-02-28 14:42:39.421955708 +0000 UTC m=+5173.664276405" observedRunningTime="2026-02-28 14:42:39.944523695 +0000 UTC m=+5174.186844412" watchObservedRunningTime="2026-02-28 14:42:39.947499759 +0000 UTC m=+5174.189820416" Feb 28 14:42:40 crc kubenswrapper[4897]: I0228 14:42:40.943140 4897 generic.go:334] "Generic (PLEG): container finished" podID="18345f62-4936-476c-85bf-2634b07217b7" containerID="99c0995b3c8219487b31033a12014b5e280b463ccb575a67247d20561f43212e" exitCode=0 Feb 28 14:42:40 crc kubenswrapper[4897]: I0228 14:42:40.943222 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538162-kmcqc" event={"ID":"18345f62-4936-476c-85bf-2634b07217b7","Type":"ContainerDied","Data":"99c0995b3c8219487b31033a12014b5e280b463ccb575a67247d20561f43212e"} Feb 28 14:42:42 crc kubenswrapper[4897]: I0228 14:42:42.422289 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538162-kmcqc" Feb 28 14:42:42 crc kubenswrapper[4897]: I0228 14:42:42.583321 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8k42k\" (UniqueName: \"kubernetes.io/projected/18345f62-4936-476c-85bf-2634b07217b7-kube-api-access-8k42k\") pod \"18345f62-4936-476c-85bf-2634b07217b7\" (UID: \"18345f62-4936-476c-85bf-2634b07217b7\") " Feb 28 14:42:42 crc kubenswrapper[4897]: I0228 14:42:42.590442 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18345f62-4936-476c-85bf-2634b07217b7-kube-api-access-8k42k" (OuterVolumeSpecName: "kube-api-access-8k42k") pod "18345f62-4936-476c-85bf-2634b07217b7" (UID: "18345f62-4936-476c-85bf-2634b07217b7"). InnerVolumeSpecName "kube-api-access-8k42k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:42:42 crc kubenswrapper[4897]: I0228 14:42:42.686088 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8k42k\" (UniqueName: \"kubernetes.io/projected/18345f62-4936-476c-85bf-2634b07217b7-kube-api-access-8k42k\") on node \"crc\" DevicePath \"\"" Feb 28 14:42:42 crc kubenswrapper[4897]: I0228 14:42:42.963043 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538162-kmcqc" event={"ID":"18345f62-4936-476c-85bf-2634b07217b7","Type":"ContainerDied","Data":"7494f1d2a2ea4c0d7dd0faf2aa68a79b62ed4aba0b9f677126abfa7ea2ab31fd"} Feb 28 14:42:42 crc kubenswrapper[4897]: I0228 14:42:42.963095 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7494f1d2a2ea4c0d7dd0faf2aa68a79b62ed4aba0b9f677126abfa7ea2ab31fd" Feb 28 14:42:42 crc kubenswrapper[4897]: I0228 14:42:42.963130 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538162-kmcqc" Feb 28 14:42:43 crc kubenswrapper[4897]: I0228 14:42:43.016980 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538156-kplhd"] Feb 28 14:42:43 crc kubenswrapper[4897]: I0228 14:42:43.024807 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538156-kplhd"] Feb 28 14:42:44 crc kubenswrapper[4897]: I0228 14:42:44.469590 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b89291e-dab3-42b0-b9b9-8e0d24258cc7" path="/var/lib/kubelet/pods/1b89291e-dab3-42b0-b9b9-8e0d24258cc7/volumes" Feb 28 14:42:44 crc kubenswrapper[4897]: I0228 14:42:44.543525 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rjv4f" Feb 28 14:42:44 crc kubenswrapper[4897]: I0228 14:42:44.610927 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rjv4f" Feb 28 14:42:44 crc kubenswrapper[4897]: I0228 14:42:44.794730 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjv4f"] Feb 28 14:42:46 crc kubenswrapper[4897]: I0228 14:42:46.007228 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rjv4f" podUID="bdaba5c6-34b9-4255-9787-ae9213f11063" containerName="registry-server" containerID="cri-o://79be1cf22946da044d851c583a022e789c3ae74dbbe864d9dad9fe57b8f5eb97" gracePeriod=2 Feb 28 14:42:46 crc kubenswrapper[4897]: I0228 14:42:46.578369 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rjv4f" Feb 28 14:42:46 crc kubenswrapper[4897]: I0228 14:42:46.675209 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlp82\" (UniqueName: \"kubernetes.io/projected/bdaba5c6-34b9-4255-9787-ae9213f11063-kube-api-access-hlp82\") pod \"bdaba5c6-34b9-4255-9787-ae9213f11063\" (UID: \"bdaba5c6-34b9-4255-9787-ae9213f11063\") " Feb 28 14:42:46 crc kubenswrapper[4897]: I0228 14:42:46.675611 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdaba5c6-34b9-4255-9787-ae9213f11063-catalog-content\") pod \"bdaba5c6-34b9-4255-9787-ae9213f11063\" (UID: \"bdaba5c6-34b9-4255-9787-ae9213f11063\") " Feb 28 14:42:46 crc kubenswrapper[4897]: I0228 14:42:46.675686 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdaba5c6-34b9-4255-9787-ae9213f11063-utilities\") pod \"bdaba5c6-34b9-4255-9787-ae9213f11063\" (UID: \"bdaba5c6-34b9-4255-9787-ae9213f11063\") " Feb 28 14:42:46 crc kubenswrapper[4897]: I0228 14:42:46.676761 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdaba5c6-34b9-4255-9787-ae9213f11063-utilities" (OuterVolumeSpecName: "utilities") pod "bdaba5c6-34b9-4255-9787-ae9213f11063" (UID: "bdaba5c6-34b9-4255-9787-ae9213f11063"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:42:46 crc kubenswrapper[4897]: I0228 14:42:46.684939 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdaba5c6-34b9-4255-9787-ae9213f11063-kube-api-access-hlp82" (OuterVolumeSpecName: "kube-api-access-hlp82") pod "bdaba5c6-34b9-4255-9787-ae9213f11063" (UID: "bdaba5c6-34b9-4255-9787-ae9213f11063"). InnerVolumeSpecName "kube-api-access-hlp82". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:42:46 crc kubenswrapper[4897]: I0228 14:42:46.713093 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdaba5c6-34b9-4255-9787-ae9213f11063-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bdaba5c6-34b9-4255-9787-ae9213f11063" (UID: "bdaba5c6-34b9-4255-9787-ae9213f11063"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:42:46 crc kubenswrapper[4897]: I0228 14:42:46.779140 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdaba5c6-34b9-4255-9787-ae9213f11063-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 14:42:46 crc kubenswrapper[4897]: I0228 14:42:46.779189 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdaba5c6-34b9-4255-9787-ae9213f11063-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 14:42:46 crc kubenswrapper[4897]: I0228 14:42:46.779211 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlp82\" (UniqueName: \"kubernetes.io/projected/bdaba5c6-34b9-4255-9787-ae9213f11063-kube-api-access-hlp82\") on node \"crc\" DevicePath \"\"" Feb 28 14:42:47 crc kubenswrapper[4897]: I0228 14:42:47.022761 4897 generic.go:334] "Generic (PLEG): container finished" podID="bdaba5c6-34b9-4255-9787-ae9213f11063" containerID="79be1cf22946da044d851c583a022e789c3ae74dbbe864d9dad9fe57b8f5eb97" exitCode=0 Feb 28 14:42:47 crc kubenswrapper[4897]: I0228 14:42:47.022837 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjv4f" event={"ID":"bdaba5c6-34b9-4255-9787-ae9213f11063","Type":"ContainerDied","Data":"79be1cf22946da044d851c583a022e789c3ae74dbbe864d9dad9fe57b8f5eb97"} Feb 28 14:42:47 crc kubenswrapper[4897]: I0228 14:42:47.022881 4897 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-rjv4f" event={"ID":"bdaba5c6-34b9-4255-9787-ae9213f11063","Type":"ContainerDied","Data":"b5713d58614cd10f190e3174fc338050c33b65913703bf1b274fa85893ab0d64"} Feb 28 14:42:47 crc kubenswrapper[4897]: I0228 14:42:47.022912 4897 scope.go:117] "RemoveContainer" containerID="79be1cf22946da044d851c583a022e789c3ae74dbbe864d9dad9fe57b8f5eb97" Feb 28 14:42:47 crc kubenswrapper[4897]: I0228 14:42:47.025448 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rjv4f" Feb 28 14:42:47 crc kubenswrapper[4897]: I0228 14:42:47.067464 4897 scope.go:117] "RemoveContainer" containerID="4beeebac772a6bd8c57e278ea89b7208ae46d75e3feeeaeab7640d209da07250" Feb 28 14:42:47 crc kubenswrapper[4897]: I0228 14:42:47.074585 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjv4f"] Feb 28 14:42:47 crc kubenswrapper[4897]: I0228 14:42:47.096385 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjv4f"] Feb 28 14:42:47 crc kubenswrapper[4897]: I0228 14:42:47.122340 4897 scope.go:117] "RemoveContainer" containerID="476863c37d10e21ceb3a809c3f73420e45b40f44cecc664096515841d5e9e0cd" Feb 28 14:42:47 crc kubenswrapper[4897]: I0228 14:42:47.158719 4897 scope.go:117] "RemoveContainer" containerID="79be1cf22946da044d851c583a022e789c3ae74dbbe864d9dad9fe57b8f5eb97" Feb 28 14:42:47 crc kubenswrapper[4897]: E0228 14:42:47.159292 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79be1cf22946da044d851c583a022e789c3ae74dbbe864d9dad9fe57b8f5eb97\": container with ID starting with 79be1cf22946da044d851c583a022e789c3ae74dbbe864d9dad9fe57b8f5eb97 not found: ID does not exist" containerID="79be1cf22946da044d851c583a022e789c3ae74dbbe864d9dad9fe57b8f5eb97" Feb 28 14:42:47 crc kubenswrapper[4897]: I0228 14:42:47.159385 4897 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79be1cf22946da044d851c583a022e789c3ae74dbbe864d9dad9fe57b8f5eb97"} err="failed to get container status \"79be1cf22946da044d851c583a022e789c3ae74dbbe864d9dad9fe57b8f5eb97\": rpc error: code = NotFound desc = could not find container \"79be1cf22946da044d851c583a022e789c3ae74dbbe864d9dad9fe57b8f5eb97\": container with ID starting with 79be1cf22946da044d851c583a022e789c3ae74dbbe864d9dad9fe57b8f5eb97 not found: ID does not exist" Feb 28 14:42:47 crc kubenswrapper[4897]: I0228 14:42:47.159421 4897 scope.go:117] "RemoveContainer" containerID="4beeebac772a6bd8c57e278ea89b7208ae46d75e3feeeaeab7640d209da07250" Feb 28 14:42:47 crc kubenswrapper[4897]: E0228 14:42:47.159693 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4beeebac772a6bd8c57e278ea89b7208ae46d75e3feeeaeab7640d209da07250\": container with ID starting with 4beeebac772a6bd8c57e278ea89b7208ae46d75e3feeeaeab7640d209da07250 not found: ID does not exist" containerID="4beeebac772a6bd8c57e278ea89b7208ae46d75e3feeeaeab7640d209da07250" Feb 28 14:42:47 crc kubenswrapper[4897]: I0228 14:42:47.159745 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4beeebac772a6bd8c57e278ea89b7208ae46d75e3feeeaeab7640d209da07250"} err="failed to get container status \"4beeebac772a6bd8c57e278ea89b7208ae46d75e3feeeaeab7640d209da07250\": rpc error: code = NotFound desc = could not find container \"4beeebac772a6bd8c57e278ea89b7208ae46d75e3feeeaeab7640d209da07250\": container with ID starting with 4beeebac772a6bd8c57e278ea89b7208ae46d75e3feeeaeab7640d209da07250 not found: ID does not exist" Feb 28 14:42:47 crc kubenswrapper[4897]: I0228 14:42:47.159776 4897 scope.go:117] "RemoveContainer" containerID="476863c37d10e21ceb3a809c3f73420e45b40f44cecc664096515841d5e9e0cd" Feb 28 14:42:47 crc kubenswrapper[4897]: E0228 
14:42:47.160376 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"476863c37d10e21ceb3a809c3f73420e45b40f44cecc664096515841d5e9e0cd\": container with ID starting with 476863c37d10e21ceb3a809c3f73420e45b40f44cecc664096515841d5e9e0cd not found: ID does not exist" containerID="476863c37d10e21ceb3a809c3f73420e45b40f44cecc664096515841d5e9e0cd" Feb 28 14:42:47 crc kubenswrapper[4897]: I0228 14:42:47.160421 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"476863c37d10e21ceb3a809c3f73420e45b40f44cecc664096515841d5e9e0cd"} err="failed to get container status \"476863c37d10e21ceb3a809c3f73420e45b40f44cecc664096515841d5e9e0cd\": rpc error: code = NotFound desc = could not find container \"476863c37d10e21ceb3a809c3f73420e45b40f44cecc664096515841d5e9e0cd\": container with ID starting with 476863c37d10e21ceb3a809c3f73420e45b40f44cecc664096515841d5e9e0cd not found: ID does not exist" Feb 28 14:42:48 crc kubenswrapper[4897]: I0228 14:42:48.472758 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdaba5c6-34b9-4255-9787-ae9213f11063" path="/var/lib/kubelet/pods/bdaba5c6-34b9-4255-9787-ae9213f11063/volumes" Feb 28 14:42:50 crc kubenswrapper[4897]: I0228 14:42:50.699477 4897 scope.go:117] "RemoveContainer" containerID="0ded59d84e5feafbc5ca229323d68badac9ee17c936280aca46dc94701eeaa08" Feb 28 14:42:51 crc kubenswrapper[4897]: I0228 14:42:51.458845 4897 scope.go:117] "RemoveContainer" containerID="3eec40c4cd69e7bd4645f3609c97b34d827169ce5d38c73cc72ba33c8af50e99" Feb 28 14:42:51 crc kubenswrapper[4897]: E0228 14:42:51.459898 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:43:02 crc kubenswrapper[4897]: I0228 14:43:02.456859 4897 scope.go:117] "RemoveContainer" containerID="3eec40c4cd69e7bd4645f3609c97b34d827169ce5d38c73cc72ba33c8af50e99" Feb 28 14:43:02 crc kubenswrapper[4897]: E0228 14:43:02.457721 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:43:14 crc kubenswrapper[4897]: I0228 14:43:14.456700 4897 scope.go:117] "RemoveContainer" containerID="3eec40c4cd69e7bd4645f3609c97b34d827169ce5d38c73cc72ba33c8af50e99" Feb 28 14:43:14 crc kubenswrapper[4897]: E0228 14:43:14.457925 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:43:29 crc kubenswrapper[4897]: I0228 14:43:29.456818 4897 scope.go:117] "RemoveContainer" containerID="3eec40c4cd69e7bd4645f3609c97b34d827169ce5d38c73cc72ba33c8af50e99" Feb 28 14:43:29 crc kubenswrapper[4897]: E0228 14:43:29.457642 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:43:41 crc kubenswrapper[4897]: I0228 14:43:41.456420 4897 scope.go:117] "RemoveContainer" containerID="3eec40c4cd69e7bd4645f3609c97b34d827169ce5d38c73cc72ba33c8af50e99" Feb 28 14:43:41 crc kubenswrapper[4897]: E0228 14:43:41.457236 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:43:54 crc kubenswrapper[4897]: I0228 14:43:54.457272 4897 scope.go:117] "RemoveContainer" containerID="3eec40c4cd69e7bd4645f3609c97b34d827169ce5d38c73cc72ba33c8af50e99" Feb 28 14:43:54 crc kubenswrapper[4897]: E0228 14:43:54.458373 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:44:00 crc kubenswrapper[4897]: I0228 14:44:00.183492 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538164-dgr8s"] Feb 28 14:44:00 crc kubenswrapper[4897]: E0228 14:44:00.184635 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdaba5c6-34b9-4255-9787-ae9213f11063" containerName="extract-utilities" Feb 28 14:44:00 crc kubenswrapper[4897]: I0228 
14:44:00.184655 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdaba5c6-34b9-4255-9787-ae9213f11063" containerName="extract-utilities" Feb 28 14:44:00 crc kubenswrapper[4897]: E0228 14:44:00.184683 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdaba5c6-34b9-4255-9787-ae9213f11063" containerName="extract-content" Feb 28 14:44:00 crc kubenswrapper[4897]: I0228 14:44:00.184693 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdaba5c6-34b9-4255-9787-ae9213f11063" containerName="extract-content" Feb 28 14:44:00 crc kubenswrapper[4897]: E0228 14:44:00.184718 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdaba5c6-34b9-4255-9787-ae9213f11063" containerName="registry-server" Feb 28 14:44:00 crc kubenswrapper[4897]: I0228 14:44:00.184729 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdaba5c6-34b9-4255-9787-ae9213f11063" containerName="registry-server" Feb 28 14:44:00 crc kubenswrapper[4897]: E0228 14:44:00.184767 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18345f62-4936-476c-85bf-2634b07217b7" containerName="oc" Feb 28 14:44:00 crc kubenswrapper[4897]: I0228 14:44:00.184778 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="18345f62-4936-476c-85bf-2634b07217b7" containerName="oc" Feb 28 14:44:00 crc kubenswrapper[4897]: I0228 14:44:00.185084 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="18345f62-4936-476c-85bf-2634b07217b7" containerName="oc" Feb 28 14:44:00 crc kubenswrapper[4897]: I0228 14:44:00.185130 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdaba5c6-34b9-4255-9787-ae9213f11063" containerName="registry-server" Feb 28 14:44:00 crc kubenswrapper[4897]: I0228 14:44:00.186219 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538164-dgr8s" Feb 28 14:44:00 crc kubenswrapper[4897]: I0228 14:44:00.191733 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 14:44:00 crc kubenswrapper[4897]: I0228 14:44:00.192126 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 14:44:00 crc kubenswrapper[4897]: I0228 14:44:00.192396 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 14:44:00 crc kubenswrapper[4897]: I0228 14:44:00.203726 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538164-dgr8s"] Feb 28 14:44:00 crc kubenswrapper[4897]: I0228 14:44:00.349239 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr5ft\" (UniqueName: \"kubernetes.io/projected/2415e6de-f719-40c7-a79f-fb39ce0872a1-kube-api-access-mr5ft\") pod \"auto-csr-approver-29538164-dgr8s\" (UID: \"2415e6de-f719-40c7-a79f-fb39ce0872a1\") " pod="openshift-infra/auto-csr-approver-29538164-dgr8s" Feb 28 14:44:00 crc kubenswrapper[4897]: I0228 14:44:00.451383 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr5ft\" (UniqueName: \"kubernetes.io/projected/2415e6de-f719-40c7-a79f-fb39ce0872a1-kube-api-access-mr5ft\") pod \"auto-csr-approver-29538164-dgr8s\" (UID: \"2415e6de-f719-40c7-a79f-fb39ce0872a1\") " pod="openshift-infra/auto-csr-approver-29538164-dgr8s" Feb 28 14:44:00 crc kubenswrapper[4897]: I0228 14:44:00.496601 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr5ft\" (UniqueName: \"kubernetes.io/projected/2415e6de-f719-40c7-a79f-fb39ce0872a1-kube-api-access-mr5ft\") pod \"auto-csr-approver-29538164-dgr8s\" (UID: \"2415e6de-f719-40c7-a79f-fb39ce0872a1\") " 
pod="openshift-infra/auto-csr-approver-29538164-dgr8s" Feb 28 14:44:00 crc kubenswrapper[4897]: I0228 14:44:00.521850 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538164-dgr8s" Feb 28 14:44:01 crc kubenswrapper[4897]: I0228 14:44:01.037037 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538164-dgr8s"] Feb 28 14:44:01 crc kubenswrapper[4897]: I0228 14:44:01.043957 4897 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 14:44:02 crc kubenswrapper[4897]: I0228 14:44:02.011999 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538164-dgr8s" event={"ID":"2415e6de-f719-40c7-a79f-fb39ce0872a1","Type":"ContainerStarted","Data":"b3c06508f4ff247b2181f3b5bea851eec4299a8922358a1899d2c94b66187513"} Feb 28 14:44:03 crc kubenswrapper[4897]: I0228 14:44:03.024239 4897 generic.go:334] "Generic (PLEG): container finished" podID="2415e6de-f719-40c7-a79f-fb39ce0872a1" containerID="b56a7205790f26d1bbb93a619f0f64c94cfa81e7b6c4695c723e4ff275a866bb" exitCode=0 Feb 28 14:44:03 crc kubenswrapper[4897]: I0228 14:44:03.025957 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538164-dgr8s" event={"ID":"2415e6de-f719-40c7-a79f-fb39ce0872a1","Type":"ContainerDied","Data":"b56a7205790f26d1bbb93a619f0f64c94cfa81e7b6c4695c723e4ff275a866bb"} Feb 28 14:44:04 crc kubenswrapper[4897]: I0228 14:44:04.505585 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538164-dgr8s" Feb 28 14:44:04 crc kubenswrapper[4897]: I0228 14:44:04.644080 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mr5ft\" (UniqueName: \"kubernetes.io/projected/2415e6de-f719-40c7-a79f-fb39ce0872a1-kube-api-access-mr5ft\") pod \"2415e6de-f719-40c7-a79f-fb39ce0872a1\" (UID: \"2415e6de-f719-40c7-a79f-fb39ce0872a1\") " Feb 28 14:44:04 crc kubenswrapper[4897]: I0228 14:44:04.650080 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2415e6de-f719-40c7-a79f-fb39ce0872a1-kube-api-access-mr5ft" (OuterVolumeSpecName: "kube-api-access-mr5ft") pod "2415e6de-f719-40c7-a79f-fb39ce0872a1" (UID: "2415e6de-f719-40c7-a79f-fb39ce0872a1"). InnerVolumeSpecName "kube-api-access-mr5ft". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:44:04 crc kubenswrapper[4897]: I0228 14:44:04.746164 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mr5ft\" (UniqueName: \"kubernetes.io/projected/2415e6de-f719-40c7-a79f-fb39ce0872a1-kube-api-access-mr5ft\") on node \"crc\" DevicePath \"\"" Feb 28 14:44:05 crc kubenswrapper[4897]: I0228 14:44:05.054077 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538164-dgr8s" event={"ID":"2415e6de-f719-40c7-a79f-fb39ce0872a1","Type":"ContainerDied","Data":"b3c06508f4ff247b2181f3b5bea851eec4299a8922358a1899d2c94b66187513"} Feb 28 14:44:05 crc kubenswrapper[4897]: I0228 14:44:05.054144 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538164-dgr8s" Feb 28 14:44:05 crc kubenswrapper[4897]: I0228 14:44:05.054134 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3c06508f4ff247b2181f3b5bea851eec4299a8922358a1899d2c94b66187513" Feb 28 14:44:05 crc kubenswrapper[4897]: I0228 14:44:05.603609 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538158-vhhjs"] Feb 28 14:44:05 crc kubenswrapper[4897]: I0228 14:44:05.616260 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538158-vhhjs"] Feb 28 14:44:06 crc kubenswrapper[4897]: I0228 14:44:06.462462 4897 scope.go:117] "RemoveContainer" containerID="3eec40c4cd69e7bd4645f3609c97b34d827169ce5d38c73cc72ba33c8af50e99" Feb 28 14:44:06 crc kubenswrapper[4897]: E0228 14:44:06.462985 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:44:06 crc kubenswrapper[4897]: I0228 14:44:06.467337 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2b716ea-4f21-47ff-a1c8-7b7796197ed6" path="/var/lib/kubelet/pods/d2b716ea-4f21-47ff-a1c8-7b7796197ed6/volumes" Feb 28 14:44:15 crc kubenswrapper[4897]: I0228 14:44:15.568410 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-28dh9"] Feb 28 14:44:15 crc kubenswrapper[4897]: E0228 14:44:15.569543 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2415e6de-f719-40c7-a79f-fb39ce0872a1" containerName="oc" Feb 28 14:44:15 crc kubenswrapper[4897]: I0228 14:44:15.569559 4897 
state_mem.go:107] "Deleted CPUSet assignment" podUID="2415e6de-f719-40c7-a79f-fb39ce0872a1" containerName="oc" Feb 28 14:44:15 crc kubenswrapper[4897]: I0228 14:44:15.569850 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="2415e6de-f719-40c7-a79f-fb39ce0872a1" containerName="oc" Feb 28 14:44:15 crc kubenswrapper[4897]: I0228 14:44:15.571613 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-28dh9" Feb 28 14:44:15 crc kubenswrapper[4897]: I0228 14:44:15.591990 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-28dh9"] Feb 28 14:44:15 crc kubenswrapper[4897]: I0228 14:44:15.713505 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzvr4\" (UniqueName: \"kubernetes.io/projected/b01438e2-f554-46e2-b71e-2cbea05a0659-kube-api-access-gzvr4\") pod \"redhat-operators-28dh9\" (UID: \"b01438e2-f554-46e2-b71e-2cbea05a0659\") " pod="openshift-marketplace/redhat-operators-28dh9" Feb 28 14:44:15 crc kubenswrapper[4897]: I0228 14:44:15.713597 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b01438e2-f554-46e2-b71e-2cbea05a0659-utilities\") pod \"redhat-operators-28dh9\" (UID: \"b01438e2-f554-46e2-b71e-2cbea05a0659\") " pod="openshift-marketplace/redhat-operators-28dh9" Feb 28 14:44:15 crc kubenswrapper[4897]: I0228 14:44:15.713625 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b01438e2-f554-46e2-b71e-2cbea05a0659-catalog-content\") pod \"redhat-operators-28dh9\" (UID: \"b01438e2-f554-46e2-b71e-2cbea05a0659\") " pod="openshift-marketplace/redhat-operators-28dh9" Feb 28 14:44:15 crc kubenswrapper[4897]: I0228 14:44:15.815435 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gzvr4\" (UniqueName: \"kubernetes.io/projected/b01438e2-f554-46e2-b71e-2cbea05a0659-kube-api-access-gzvr4\") pod \"redhat-operators-28dh9\" (UID: \"b01438e2-f554-46e2-b71e-2cbea05a0659\") " pod="openshift-marketplace/redhat-operators-28dh9" Feb 28 14:44:15 crc kubenswrapper[4897]: I0228 14:44:15.815564 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b01438e2-f554-46e2-b71e-2cbea05a0659-utilities\") pod \"redhat-operators-28dh9\" (UID: \"b01438e2-f554-46e2-b71e-2cbea05a0659\") " pod="openshift-marketplace/redhat-operators-28dh9" Feb 28 14:44:15 crc kubenswrapper[4897]: I0228 14:44:15.815607 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b01438e2-f554-46e2-b71e-2cbea05a0659-catalog-content\") pod \"redhat-operators-28dh9\" (UID: \"b01438e2-f554-46e2-b71e-2cbea05a0659\") " pod="openshift-marketplace/redhat-operators-28dh9" Feb 28 14:44:15 crc kubenswrapper[4897]: I0228 14:44:15.816059 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b01438e2-f554-46e2-b71e-2cbea05a0659-utilities\") pod \"redhat-operators-28dh9\" (UID: \"b01438e2-f554-46e2-b71e-2cbea05a0659\") " pod="openshift-marketplace/redhat-operators-28dh9" Feb 28 14:44:15 crc kubenswrapper[4897]: I0228 14:44:15.816137 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b01438e2-f554-46e2-b71e-2cbea05a0659-catalog-content\") pod \"redhat-operators-28dh9\" (UID: \"b01438e2-f554-46e2-b71e-2cbea05a0659\") " pod="openshift-marketplace/redhat-operators-28dh9" Feb 28 14:44:15 crc kubenswrapper[4897]: I0228 14:44:15.833466 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzvr4\" 
(UniqueName: \"kubernetes.io/projected/b01438e2-f554-46e2-b71e-2cbea05a0659-kube-api-access-gzvr4\") pod \"redhat-operators-28dh9\" (UID: \"b01438e2-f554-46e2-b71e-2cbea05a0659\") " pod="openshift-marketplace/redhat-operators-28dh9" Feb 28 14:44:15 crc kubenswrapper[4897]: I0228 14:44:15.893501 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-28dh9" Feb 28 14:44:16 crc kubenswrapper[4897]: I0228 14:44:16.410398 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-28dh9"] Feb 28 14:44:17 crc kubenswrapper[4897]: I0228 14:44:17.182669 4897 generic.go:334] "Generic (PLEG): container finished" podID="b01438e2-f554-46e2-b71e-2cbea05a0659" containerID="f6ad2c9242f55da27585d7eccbb68d81ebc0a7da1ad6e67f4e5875719ad90e6a" exitCode=0 Feb 28 14:44:17 crc kubenswrapper[4897]: I0228 14:44:17.182839 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28dh9" event={"ID":"b01438e2-f554-46e2-b71e-2cbea05a0659","Type":"ContainerDied","Data":"f6ad2c9242f55da27585d7eccbb68d81ebc0a7da1ad6e67f4e5875719ad90e6a"} Feb 28 14:44:17 crc kubenswrapper[4897]: I0228 14:44:17.182931 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28dh9" event={"ID":"b01438e2-f554-46e2-b71e-2cbea05a0659","Type":"ContainerStarted","Data":"af2b3ab9562372fa954fb9d72b6ae6539c366ff651107e98b664c9279392f839"} Feb 28 14:44:17 crc kubenswrapper[4897]: E0228 14:44:17.846087 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 28 
14:44:17 crc kubenswrapper[4897]: E0228 14:44:17.846951 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gzvr4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-28dh9_openshift-marketplace(b01438e2-f554-46e2-b71e-2cbea05a0659): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 14:44:17 crc kubenswrapper[4897]: E0228 14:44:17.849177 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-operators-28dh9" podUID="b01438e2-f554-46e2-b71e-2cbea05a0659" Feb 28 14:44:18 crc kubenswrapper[4897]: E0228 14:44:18.193887 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-28dh9" podUID="b01438e2-f554-46e2-b71e-2cbea05a0659" Feb 28 14:44:20 crc kubenswrapper[4897]: I0228 14:44:20.456810 4897 scope.go:117] "RemoveContainer" containerID="3eec40c4cd69e7bd4645f3609c97b34d827169ce5d38c73cc72ba33c8af50e99" Feb 28 14:44:20 crc kubenswrapper[4897]: E0228 14:44:20.457792 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:44:33 crc kubenswrapper[4897]: I0228 14:44:33.353510 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-28dh9" event={"ID":"b01438e2-f554-46e2-b71e-2cbea05a0659","Type":"ContainerStarted","Data":"929ad6af1333d99885f8609d51f3288f09163c6d645888e960d091c9bf0bd85c"} Feb 28 14:44:34 crc kubenswrapper[4897]: I0228 14:44:34.458569 4897 scope.go:117] "RemoveContainer" containerID="3eec40c4cd69e7bd4645f3609c97b34d827169ce5d38c73cc72ba33c8af50e99" Feb 28 14:44:34 crc kubenswrapper[4897]: E0228 14:44:34.459588 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:44:37 crc kubenswrapper[4897]: I0228 14:44:37.442924 4897 generic.go:334] "Generic (PLEG): container finished" podID="b01438e2-f554-46e2-b71e-2cbea05a0659" containerID="929ad6af1333d99885f8609d51f3288f09163c6d645888e960d091c9bf0bd85c" exitCode=0 Feb 28 14:44:37 crc kubenswrapper[4897]: I0228 14:44:37.443000 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28dh9" event={"ID":"b01438e2-f554-46e2-b71e-2cbea05a0659","Type":"ContainerDied","Data":"929ad6af1333d99885f8609d51f3288f09163c6d645888e960d091c9bf0bd85c"} Feb 28 14:44:38 crc kubenswrapper[4897]: I0228 14:44:38.467335 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28dh9" event={"ID":"b01438e2-f554-46e2-b71e-2cbea05a0659","Type":"ContainerStarted","Data":"1e2d9fbf114fcd845507a7c0c8cc631703c65342def8c2ab5416d6501d2ee75e"} Feb 28 14:44:38 crc kubenswrapper[4897]: I0228 14:44:38.503070 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-28dh9" podStartSLOduration=2.831128286 
podStartE2EDuration="23.503044149s" podCreationTimestamp="2026-02-28 14:44:15 +0000 UTC" firstStartedPulling="2026-02-28 14:44:17.186348285 +0000 UTC m=+5271.428668942" lastFinishedPulling="2026-02-28 14:44:37.858264108 +0000 UTC m=+5292.100584805" observedRunningTime="2026-02-28 14:44:38.490991349 +0000 UTC m=+5292.733312016" watchObservedRunningTime="2026-02-28 14:44:38.503044149 +0000 UTC m=+5292.745364856" Feb 28 14:44:45 crc kubenswrapper[4897]: I0228 14:44:45.456515 4897 scope.go:117] "RemoveContainer" containerID="3eec40c4cd69e7bd4645f3609c97b34d827169ce5d38c73cc72ba33c8af50e99" Feb 28 14:44:45 crc kubenswrapper[4897]: E0228 14:44:45.457560 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:44:45 crc kubenswrapper[4897]: I0228 14:44:45.895069 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-28dh9" Feb 28 14:44:45 crc kubenswrapper[4897]: I0228 14:44:45.895280 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-28dh9" Feb 28 14:44:46 crc kubenswrapper[4897]: I0228 14:44:46.957353 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-28dh9" podUID="b01438e2-f554-46e2-b71e-2cbea05a0659" containerName="registry-server" probeResult="failure" output=< Feb 28 14:44:46 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 28 14:44:46 crc kubenswrapper[4897]: > Feb 28 14:44:50 crc kubenswrapper[4897]: I0228 14:44:50.833031 4897 scope.go:117] "RemoveContainer" 
containerID="058d9a3c0f755a325b34752328bda2512c8a9c83d89bf8140a075599f0df06f8" Feb 28 14:44:56 crc kubenswrapper[4897]: I0228 14:44:56.963167 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-28dh9" podUID="b01438e2-f554-46e2-b71e-2cbea05a0659" containerName="registry-server" probeResult="failure" output=< Feb 28 14:44:56 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 28 14:44:56 crc kubenswrapper[4897]: > Feb 28 14:44:57 crc kubenswrapper[4897]: I0228 14:44:57.456256 4897 scope.go:117] "RemoveContainer" containerID="3eec40c4cd69e7bd4645f3609c97b34d827169ce5d38c73cc72ba33c8af50e99" Feb 28 14:44:57 crc kubenswrapper[4897]: E0228 14:44:57.456561 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:45:00 crc kubenswrapper[4897]: I0228 14:45:00.167303 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29538165-29c76"] Feb 28 14:45:00 crc kubenswrapper[4897]: I0228 14:45:00.169162 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29538165-29c76" Feb 28 14:45:00 crc kubenswrapper[4897]: I0228 14:45:00.171499 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 28 14:45:00 crc kubenswrapper[4897]: I0228 14:45:00.174418 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 28 14:45:00 crc kubenswrapper[4897]: I0228 14:45:00.188407 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29538165-29c76"] Feb 28 14:45:00 crc kubenswrapper[4897]: I0228 14:45:00.350681 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdzpd\" (UniqueName: \"kubernetes.io/projected/bf14adaf-70ba-4f9f-bc50-443e1caae3be-kube-api-access-bdzpd\") pod \"collect-profiles-29538165-29c76\" (UID: \"bf14adaf-70ba-4f9f-bc50-443e1caae3be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538165-29c76" Feb 28 14:45:00 crc kubenswrapper[4897]: I0228 14:45:00.350832 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf14adaf-70ba-4f9f-bc50-443e1caae3be-secret-volume\") pod \"collect-profiles-29538165-29c76\" (UID: \"bf14adaf-70ba-4f9f-bc50-443e1caae3be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538165-29c76" Feb 28 14:45:00 crc kubenswrapper[4897]: I0228 14:45:00.350909 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf14adaf-70ba-4f9f-bc50-443e1caae3be-config-volume\") pod \"collect-profiles-29538165-29c76\" (UID: \"bf14adaf-70ba-4f9f-bc50-443e1caae3be\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29538165-29c76" Feb 28 14:45:00 crc kubenswrapper[4897]: I0228 14:45:00.452702 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf14adaf-70ba-4f9f-bc50-443e1caae3be-config-volume\") pod \"collect-profiles-29538165-29c76\" (UID: \"bf14adaf-70ba-4f9f-bc50-443e1caae3be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538165-29c76" Feb 28 14:45:00 crc kubenswrapper[4897]: I0228 14:45:00.452808 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdzpd\" (UniqueName: \"kubernetes.io/projected/bf14adaf-70ba-4f9f-bc50-443e1caae3be-kube-api-access-bdzpd\") pod \"collect-profiles-29538165-29c76\" (UID: \"bf14adaf-70ba-4f9f-bc50-443e1caae3be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538165-29c76" Feb 28 14:45:00 crc kubenswrapper[4897]: I0228 14:45:00.452917 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf14adaf-70ba-4f9f-bc50-443e1caae3be-secret-volume\") pod \"collect-profiles-29538165-29c76\" (UID: \"bf14adaf-70ba-4f9f-bc50-443e1caae3be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538165-29c76" Feb 28 14:45:00 crc kubenswrapper[4897]: I0228 14:45:00.454643 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf14adaf-70ba-4f9f-bc50-443e1caae3be-config-volume\") pod \"collect-profiles-29538165-29c76\" (UID: \"bf14adaf-70ba-4f9f-bc50-443e1caae3be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538165-29c76" Feb 28 14:45:00 crc kubenswrapper[4897]: I0228 14:45:00.474923 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/bf14adaf-70ba-4f9f-bc50-443e1caae3be-secret-volume\") pod \"collect-profiles-29538165-29c76\" (UID: \"bf14adaf-70ba-4f9f-bc50-443e1caae3be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538165-29c76" Feb 28 14:45:00 crc kubenswrapper[4897]: I0228 14:45:00.489453 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdzpd\" (UniqueName: \"kubernetes.io/projected/bf14adaf-70ba-4f9f-bc50-443e1caae3be-kube-api-access-bdzpd\") pod \"collect-profiles-29538165-29c76\" (UID: \"bf14adaf-70ba-4f9f-bc50-443e1caae3be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538165-29c76" Feb 28 14:45:00 crc kubenswrapper[4897]: I0228 14:45:00.494927 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29538165-29c76" Feb 28 14:45:00 crc kubenswrapper[4897]: I0228 14:45:00.989995 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29538165-29c76"] Feb 28 14:45:01 crc kubenswrapper[4897]: I0228 14:45:01.743936 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29538165-29c76" event={"ID":"bf14adaf-70ba-4f9f-bc50-443e1caae3be","Type":"ContainerStarted","Data":"a6f80e0f90260d02ece2035a118e5cc777c802f6434fe01e04cdbbf96ec10c06"} Feb 28 14:45:02 crc kubenswrapper[4897]: I0228 14:45:02.756845 4897 generic.go:334] "Generic (PLEG): container finished" podID="bf14adaf-70ba-4f9f-bc50-443e1caae3be" containerID="0d9fa385f39538abd8dcb3939e49a68441595688c22ebb840bb64d402dd1691d" exitCode=0 Feb 28 14:45:02 crc kubenswrapper[4897]: I0228 14:45:02.756924 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29538165-29c76" 
event={"ID":"bf14adaf-70ba-4f9f-bc50-443e1caae3be","Type":"ContainerDied","Data":"0d9fa385f39538abd8dcb3939e49a68441595688c22ebb840bb64d402dd1691d"} Feb 28 14:45:04 crc kubenswrapper[4897]: I0228 14:45:04.206873 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29538165-29c76" Feb 28 14:45:04 crc kubenswrapper[4897]: I0228 14:45:04.336794 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdzpd\" (UniqueName: \"kubernetes.io/projected/bf14adaf-70ba-4f9f-bc50-443e1caae3be-kube-api-access-bdzpd\") pod \"bf14adaf-70ba-4f9f-bc50-443e1caae3be\" (UID: \"bf14adaf-70ba-4f9f-bc50-443e1caae3be\") " Feb 28 14:45:04 crc kubenswrapper[4897]: I0228 14:45:04.336935 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf14adaf-70ba-4f9f-bc50-443e1caae3be-config-volume\") pod \"bf14adaf-70ba-4f9f-bc50-443e1caae3be\" (UID: \"bf14adaf-70ba-4f9f-bc50-443e1caae3be\") " Feb 28 14:45:04 crc kubenswrapper[4897]: I0228 14:45:04.337197 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf14adaf-70ba-4f9f-bc50-443e1caae3be-secret-volume\") pod \"bf14adaf-70ba-4f9f-bc50-443e1caae3be\" (UID: \"bf14adaf-70ba-4f9f-bc50-443e1caae3be\") " Feb 28 14:45:04 crc kubenswrapper[4897]: I0228 14:45:04.337480 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf14adaf-70ba-4f9f-bc50-443e1caae3be-config-volume" (OuterVolumeSpecName: "config-volume") pod "bf14adaf-70ba-4f9f-bc50-443e1caae3be" (UID: "bf14adaf-70ba-4f9f-bc50-443e1caae3be"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 14:45:04 crc kubenswrapper[4897]: I0228 14:45:04.337739 4897 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf14adaf-70ba-4f9f-bc50-443e1caae3be-config-volume\") on node \"crc\" DevicePath \"\"" Feb 28 14:45:04 crc kubenswrapper[4897]: I0228 14:45:04.349543 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf14adaf-70ba-4f9f-bc50-443e1caae3be-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bf14adaf-70ba-4f9f-bc50-443e1caae3be" (UID: "bf14adaf-70ba-4f9f-bc50-443e1caae3be"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 14:45:04 crc kubenswrapper[4897]: I0228 14:45:04.349780 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf14adaf-70ba-4f9f-bc50-443e1caae3be-kube-api-access-bdzpd" (OuterVolumeSpecName: "kube-api-access-bdzpd") pod "bf14adaf-70ba-4f9f-bc50-443e1caae3be" (UID: "bf14adaf-70ba-4f9f-bc50-443e1caae3be"). InnerVolumeSpecName "kube-api-access-bdzpd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:45:04 crc kubenswrapper[4897]: I0228 14:45:04.440288 4897 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf14adaf-70ba-4f9f-bc50-443e1caae3be-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 28 14:45:04 crc kubenswrapper[4897]: I0228 14:45:04.440333 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdzpd\" (UniqueName: \"kubernetes.io/projected/bf14adaf-70ba-4f9f-bc50-443e1caae3be-kube-api-access-bdzpd\") on node \"crc\" DevicePath \"\"" Feb 28 14:45:04 crc kubenswrapper[4897]: I0228 14:45:04.783137 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29538165-29c76" event={"ID":"bf14adaf-70ba-4f9f-bc50-443e1caae3be","Type":"ContainerDied","Data":"a6f80e0f90260d02ece2035a118e5cc777c802f6434fe01e04cdbbf96ec10c06"} Feb 28 14:45:04 crc kubenswrapper[4897]: I0228 14:45:04.783577 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6f80e0f90260d02ece2035a118e5cc777c802f6434fe01e04cdbbf96ec10c06" Feb 28 14:45:04 crc kubenswrapper[4897]: I0228 14:45:04.783225 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29538165-29c76" Feb 28 14:45:05 crc kubenswrapper[4897]: I0228 14:45:05.315108 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29538120-qt7b9"] Feb 28 14:45:05 crc kubenswrapper[4897]: I0228 14:45:05.332801 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29538120-qt7b9"] Feb 28 14:45:05 crc kubenswrapper[4897]: I0228 14:45:05.973709 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-28dh9" Feb 28 14:45:06 crc kubenswrapper[4897]: I0228 14:45:06.057984 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-28dh9" Feb 28 14:45:06 crc kubenswrapper[4897]: I0228 14:45:06.478192 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e" path="/var/lib/kubelet/pods/d1bf2f47-14db-45dc-ab7d-1ff5222f2e0e/volumes" Feb 28 14:45:07 crc kubenswrapper[4897]: I0228 14:45:07.153437 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-28dh9"] Feb 28 14:45:07 crc kubenswrapper[4897]: I0228 14:45:07.816375 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-28dh9" podUID="b01438e2-f554-46e2-b71e-2cbea05a0659" containerName="registry-server" containerID="cri-o://1e2d9fbf114fcd845507a7c0c8cc631703c65342def8c2ab5416d6501d2ee75e" gracePeriod=2 Feb 28 14:45:08 crc kubenswrapper[4897]: I0228 14:45:08.349626 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-28dh9" Feb 28 14:45:08 crc kubenswrapper[4897]: I0228 14:45:08.428620 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b01438e2-f554-46e2-b71e-2cbea05a0659-catalog-content\") pod \"b01438e2-f554-46e2-b71e-2cbea05a0659\" (UID: \"b01438e2-f554-46e2-b71e-2cbea05a0659\") " Feb 28 14:45:08 crc kubenswrapper[4897]: I0228 14:45:08.428748 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzvr4\" (UniqueName: \"kubernetes.io/projected/b01438e2-f554-46e2-b71e-2cbea05a0659-kube-api-access-gzvr4\") pod \"b01438e2-f554-46e2-b71e-2cbea05a0659\" (UID: \"b01438e2-f554-46e2-b71e-2cbea05a0659\") " Feb 28 14:45:08 crc kubenswrapper[4897]: I0228 14:45:08.428790 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b01438e2-f554-46e2-b71e-2cbea05a0659-utilities\") pod \"b01438e2-f554-46e2-b71e-2cbea05a0659\" (UID: \"b01438e2-f554-46e2-b71e-2cbea05a0659\") " Feb 28 14:45:08 crc kubenswrapper[4897]: I0228 14:45:08.433176 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b01438e2-f554-46e2-b71e-2cbea05a0659-utilities" (OuterVolumeSpecName: "utilities") pod "b01438e2-f554-46e2-b71e-2cbea05a0659" (UID: "b01438e2-f554-46e2-b71e-2cbea05a0659"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:45:08 crc kubenswrapper[4897]: I0228 14:45:08.445540 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b01438e2-f554-46e2-b71e-2cbea05a0659-kube-api-access-gzvr4" (OuterVolumeSpecName: "kube-api-access-gzvr4") pod "b01438e2-f554-46e2-b71e-2cbea05a0659" (UID: "b01438e2-f554-46e2-b71e-2cbea05a0659"). InnerVolumeSpecName "kube-api-access-gzvr4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:45:08 crc kubenswrapper[4897]: I0228 14:45:08.531167 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzvr4\" (UniqueName: \"kubernetes.io/projected/b01438e2-f554-46e2-b71e-2cbea05a0659-kube-api-access-gzvr4\") on node \"crc\" DevicePath \"\"" Feb 28 14:45:08 crc kubenswrapper[4897]: I0228 14:45:08.531199 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b01438e2-f554-46e2-b71e-2cbea05a0659-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 14:45:08 crc kubenswrapper[4897]: I0228 14:45:08.552220 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b01438e2-f554-46e2-b71e-2cbea05a0659-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b01438e2-f554-46e2-b71e-2cbea05a0659" (UID: "b01438e2-f554-46e2-b71e-2cbea05a0659"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:45:08 crc kubenswrapper[4897]: I0228 14:45:08.633424 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b01438e2-f554-46e2-b71e-2cbea05a0659-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 14:45:08 crc kubenswrapper[4897]: I0228 14:45:08.831303 4897 generic.go:334] "Generic (PLEG): container finished" podID="b01438e2-f554-46e2-b71e-2cbea05a0659" containerID="1e2d9fbf114fcd845507a7c0c8cc631703c65342def8c2ab5416d6501d2ee75e" exitCode=0 Feb 28 14:45:08 crc kubenswrapper[4897]: I0228 14:45:08.831419 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28dh9" event={"ID":"b01438e2-f554-46e2-b71e-2cbea05a0659","Type":"ContainerDied","Data":"1e2d9fbf114fcd845507a7c0c8cc631703c65342def8c2ab5416d6501d2ee75e"} Feb 28 14:45:08 crc kubenswrapper[4897]: I0228 14:45:08.831459 4897 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-28dh9" event={"ID":"b01438e2-f554-46e2-b71e-2cbea05a0659","Type":"ContainerDied","Data":"af2b3ab9562372fa954fb9d72b6ae6539c366ff651107e98b664c9279392f839"} Feb 28 14:45:08 crc kubenswrapper[4897]: I0228 14:45:08.831489 4897 scope.go:117] "RemoveContainer" containerID="1e2d9fbf114fcd845507a7c0c8cc631703c65342def8c2ab5416d6501d2ee75e" Feb 28 14:45:08 crc kubenswrapper[4897]: I0228 14:45:08.831685 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-28dh9" Feb 28 14:45:08 crc kubenswrapper[4897]: I0228 14:45:08.875072 4897 scope.go:117] "RemoveContainer" containerID="929ad6af1333d99885f8609d51f3288f09163c6d645888e960d091c9bf0bd85c" Feb 28 14:45:08 crc kubenswrapper[4897]: I0228 14:45:08.880878 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-28dh9"] Feb 28 14:45:08 crc kubenswrapper[4897]: I0228 14:45:08.892008 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-28dh9"] Feb 28 14:45:08 crc kubenswrapper[4897]: I0228 14:45:08.928957 4897 scope.go:117] "RemoveContainer" containerID="f6ad2c9242f55da27585d7eccbb68d81ebc0a7da1ad6e67f4e5875719ad90e6a" Feb 28 14:45:08 crc kubenswrapper[4897]: I0228 14:45:08.995711 4897 scope.go:117] "RemoveContainer" containerID="1e2d9fbf114fcd845507a7c0c8cc631703c65342def8c2ab5416d6501d2ee75e" Feb 28 14:45:08 crc kubenswrapper[4897]: E0228 14:45:08.996410 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e2d9fbf114fcd845507a7c0c8cc631703c65342def8c2ab5416d6501d2ee75e\": container with ID starting with 1e2d9fbf114fcd845507a7c0c8cc631703c65342def8c2ab5416d6501d2ee75e not found: ID does not exist" containerID="1e2d9fbf114fcd845507a7c0c8cc631703c65342def8c2ab5416d6501d2ee75e" Feb 28 14:45:08 crc kubenswrapper[4897]: I0228 14:45:08.996437 4897 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e2d9fbf114fcd845507a7c0c8cc631703c65342def8c2ab5416d6501d2ee75e"} err="failed to get container status \"1e2d9fbf114fcd845507a7c0c8cc631703c65342def8c2ab5416d6501d2ee75e\": rpc error: code = NotFound desc = could not find container \"1e2d9fbf114fcd845507a7c0c8cc631703c65342def8c2ab5416d6501d2ee75e\": container with ID starting with 1e2d9fbf114fcd845507a7c0c8cc631703c65342def8c2ab5416d6501d2ee75e not found: ID does not exist" Feb 28 14:45:08 crc kubenswrapper[4897]: I0228 14:45:08.996460 4897 scope.go:117] "RemoveContainer" containerID="929ad6af1333d99885f8609d51f3288f09163c6d645888e960d091c9bf0bd85c" Feb 28 14:45:08 crc kubenswrapper[4897]: E0228 14:45:08.996867 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"929ad6af1333d99885f8609d51f3288f09163c6d645888e960d091c9bf0bd85c\": container with ID starting with 929ad6af1333d99885f8609d51f3288f09163c6d645888e960d091c9bf0bd85c not found: ID does not exist" containerID="929ad6af1333d99885f8609d51f3288f09163c6d645888e960d091c9bf0bd85c" Feb 28 14:45:08 crc kubenswrapper[4897]: I0228 14:45:08.996882 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"929ad6af1333d99885f8609d51f3288f09163c6d645888e960d091c9bf0bd85c"} err="failed to get container status \"929ad6af1333d99885f8609d51f3288f09163c6d645888e960d091c9bf0bd85c\": rpc error: code = NotFound desc = could not find container \"929ad6af1333d99885f8609d51f3288f09163c6d645888e960d091c9bf0bd85c\": container with ID starting with 929ad6af1333d99885f8609d51f3288f09163c6d645888e960d091c9bf0bd85c not found: ID does not exist" Feb 28 14:45:08 crc kubenswrapper[4897]: I0228 14:45:08.996894 4897 scope.go:117] "RemoveContainer" containerID="f6ad2c9242f55da27585d7eccbb68d81ebc0a7da1ad6e67f4e5875719ad90e6a" Feb 28 14:45:08 crc kubenswrapper[4897]: E0228 
14:45:08.998061 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6ad2c9242f55da27585d7eccbb68d81ebc0a7da1ad6e67f4e5875719ad90e6a\": container with ID starting with f6ad2c9242f55da27585d7eccbb68d81ebc0a7da1ad6e67f4e5875719ad90e6a not found: ID does not exist" containerID="f6ad2c9242f55da27585d7eccbb68d81ebc0a7da1ad6e67f4e5875719ad90e6a" Feb 28 14:45:08 crc kubenswrapper[4897]: I0228 14:45:08.998085 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6ad2c9242f55da27585d7eccbb68d81ebc0a7da1ad6e67f4e5875719ad90e6a"} err="failed to get container status \"f6ad2c9242f55da27585d7eccbb68d81ebc0a7da1ad6e67f4e5875719ad90e6a\": rpc error: code = NotFound desc = could not find container \"f6ad2c9242f55da27585d7eccbb68d81ebc0a7da1ad6e67f4e5875719ad90e6a\": container with ID starting with f6ad2c9242f55da27585d7eccbb68d81ebc0a7da1ad6e67f4e5875719ad90e6a not found: ID does not exist" Feb 28 14:45:10 crc kubenswrapper[4897]: I0228 14:45:10.456452 4897 scope.go:117] "RemoveContainer" containerID="3eec40c4cd69e7bd4645f3609c97b34d827169ce5d38c73cc72ba33c8af50e99" Feb 28 14:45:10 crc kubenswrapper[4897]: E0228 14:45:10.457494 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:45:10 crc kubenswrapper[4897]: I0228 14:45:10.474611 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b01438e2-f554-46e2-b71e-2cbea05a0659" path="/var/lib/kubelet/pods/b01438e2-f554-46e2-b71e-2cbea05a0659/volumes" Feb 28 14:45:21 crc kubenswrapper[4897]: I0228 14:45:21.457162 
4897 scope.go:117] "RemoveContainer" containerID="3eec40c4cd69e7bd4645f3609c97b34d827169ce5d38c73cc72ba33c8af50e99" Feb 28 14:45:21 crc kubenswrapper[4897]: E0228 14:45:21.458098 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:45:28 crc kubenswrapper[4897]: E0228 14:45:28.392739 4897 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.164:37130->38.102.83.164:37321: write tcp 38.102.83.164:37130->38.102.83.164:37321: write: broken pipe Feb 28 14:45:32 crc kubenswrapper[4897]: I0228 14:45:32.458210 4897 scope.go:117] "RemoveContainer" containerID="3eec40c4cd69e7bd4645f3609c97b34d827169ce5d38c73cc72ba33c8af50e99" Feb 28 14:45:32 crc kubenswrapper[4897]: E0228 14:45:32.459367 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:45:43 crc kubenswrapper[4897]: I0228 14:45:43.456522 4897 scope.go:117] "RemoveContainer" containerID="3eec40c4cd69e7bd4645f3609c97b34d827169ce5d38c73cc72ba33c8af50e99" Feb 28 14:45:44 crc kubenswrapper[4897]: I0228 14:45:44.248446 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" 
event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerStarted","Data":"9d4bb4972da82c5eea4e11899b0e2591e599978f5150be5eb00bc3577100eafb"} Feb 28 14:45:51 crc kubenswrapper[4897]: I0228 14:45:51.297097 4897 scope.go:117] "RemoveContainer" containerID="8899aaa614bf9310b9997084d8aa7e53586a3192b878e9e0385128c2ef7976a4" Feb 28 14:46:00 crc kubenswrapper[4897]: I0228 14:46:00.153306 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538166-qrfjl"] Feb 28 14:46:00 crc kubenswrapper[4897]: E0228 14:46:00.154604 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf14adaf-70ba-4f9f-bc50-443e1caae3be" containerName="collect-profiles" Feb 28 14:46:00 crc kubenswrapper[4897]: I0228 14:46:00.154626 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf14adaf-70ba-4f9f-bc50-443e1caae3be" containerName="collect-profiles" Feb 28 14:46:00 crc kubenswrapper[4897]: E0228 14:46:00.154655 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b01438e2-f554-46e2-b71e-2cbea05a0659" containerName="extract-content" Feb 28 14:46:00 crc kubenswrapper[4897]: I0228 14:46:00.154669 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="b01438e2-f554-46e2-b71e-2cbea05a0659" containerName="extract-content" Feb 28 14:46:00 crc kubenswrapper[4897]: E0228 14:46:00.154708 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b01438e2-f554-46e2-b71e-2cbea05a0659" containerName="registry-server" Feb 28 14:46:00 crc kubenswrapper[4897]: I0228 14:46:00.154722 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="b01438e2-f554-46e2-b71e-2cbea05a0659" containerName="registry-server" Feb 28 14:46:00 crc kubenswrapper[4897]: E0228 14:46:00.154758 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b01438e2-f554-46e2-b71e-2cbea05a0659" containerName="extract-utilities" Feb 28 14:46:00 crc kubenswrapper[4897]: I0228 14:46:00.154771 4897 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="b01438e2-f554-46e2-b71e-2cbea05a0659" containerName="extract-utilities" Feb 28 14:46:00 crc kubenswrapper[4897]: I0228 14:46:00.155870 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="b01438e2-f554-46e2-b71e-2cbea05a0659" containerName="registry-server" Feb 28 14:46:00 crc kubenswrapper[4897]: I0228 14:46:00.156060 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf14adaf-70ba-4f9f-bc50-443e1caae3be" containerName="collect-profiles" Feb 28 14:46:00 crc kubenswrapper[4897]: I0228 14:46:00.158451 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538166-qrfjl" Feb 28 14:46:00 crc kubenswrapper[4897]: I0228 14:46:00.162703 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 14:46:00 crc kubenswrapper[4897]: I0228 14:46:00.162936 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 14:46:00 crc kubenswrapper[4897]: I0228 14:46:00.163105 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 14:46:00 crc kubenswrapper[4897]: I0228 14:46:00.168862 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538166-qrfjl"] Feb 28 14:46:00 crc kubenswrapper[4897]: I0228 14:46:00.318683 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8jd7\" (UniqueName: \"kubernetes.io/projected/397505ea-12f0-4055-8f18-72e80a5a6323-kube-api-access-k8jd7\") pod \"auto-csr-approver-29538166-qrfjl\" (UID: \"397505ea-12f0-4055-8f18-72e80a5a6323\") " pod="openshift-infra/auto-csr-approver-29538166-qrfjl" Feb 28 14:46:00 crc kubenswrapper[4897]: I0228 14:46:00.421262 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-k8jd7\" (UniqueName: \"kubernetes.io/projected/397505ea-12f0-4055-8f18-72e80a5a6323-kube-api-access-k8jd7\") pod \"auto-csr-approver-29538166-qrfjl\" (UID: \"397505ea-12f0-4055-8f18-72e80a5a6323\") " pod="openshift-infra/auto-csr-approver-29538166-qrfjl" Feb 28 14:46:00 crc kubenswrapper[4897]: I0228 14:46:00.900136 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8jd7\" (UniqueName: \"kubernetes.io/projected/397505ea-12f0-4055-8f18-72e80a5a6323-kube-api-access-k8jd7\") pod \"auto-csr-approver-29538166-qrfjl\" (UID: \"397505ea-12f0-4055-8f18-72e80a5a6323\") " pod="openshift-infra/auto-csr-approver-29538166-qrfjl" Feb 28 14:46:01 crc kubenswrapper[4897]: I0228 14:46:01.085606 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538166-qrfjl" Feb 28 14:46:01 crc kubenswrapper[4897]: I0228 14:46:01.483431 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538166-qrfjl"] Feb 28 14:46:02 crc kubenswrapper[4897]: I0228 14:46:02.496228 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538166-qrfjl" event={"ID":"397505ea-12f0-4055-8f18-72e80a5a6323","Type":"ContainerStarted","Data":"3eaa22868a28fd7ea2d27746d673d5a37758008cb8622a90d228a7b1a73eabf0"} Feb 28 14:46:03 crc kubenswrapper[4897]: I0228 14:46:03.512040 4897 generic.go:334] "Generic (PLEG): container finished" podID="397505ea-12f0-4055-8f18-72e80a5a6323" containerID="7947de86b70025cbb2904aced6be4d2a5bf0d10a0d9166b80556d51dd72736d8" exitCode=0 Feb 28 14:46:03 crc kubenswrapper[4897]: I0228 14:46:03.512142 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538166-qrfjl" event={"ID":"397505ea-12f0-4055-8f18-72e80a5a6323","Type":"ContainerDied","Data":"7947de86b70025cbb2904aced6be4d2a5bf0d10a0d9166b80556d51dd72736d8"} Feb 28 14:46:04 crc kubenswrapper[4897]: I0228 
14:46:04.956829 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538166-qrfjl" Feb 28 14:46:04 crc kubenswrapper[4897]: I0228 14:46:04.975995 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8jd7\" (UniqueName: \"kubernetes.io/projected/397505ea-12f0-4055-8f18-72e80a5a6323-kube-api-access-k8jd7\") pod \"397505ea-12f0-4055-8f18-72e80a5a6323\" (UID: \"397505ea-12f0-4055-8f18-72e80a5a6323\") " Feb 28 14:46:04 crc kubenswrapper[4897]: I0228 14:46:04.984549 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/397505ea-12f0-4055-8f18-72e80a5a6323-kube-api-access-k8jd7" (OuterVolumeSpecName: "kube-api-access-k8jd7") pod "397505ea-12f0-4055-8f18-72e80a5a6323" (UID: "397505ea-12f0-4055-8f18-72e80a5a6323"). InnerVolumeSpecName "kube-api-access-k8jd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:46:05 crc kubenswrapper[4897]: I0228 14:46:05.078683 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8jd7\" (UniqueName: \"kubernetes.io/projected/397505ea-12f0-4055-8f18-72e80a5a6323-kube-api-access-k8jd7\") on node \"crc\" DevicePath \"\"" Feb 28 14:46:05 crc kubenswrapper[4897]: I0228 14:46:05.539789 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538166-qrfjl" event={"ID":"397505ea-12f0-4055-8f18-72e80a5a6323","Type":"ContainerDied","Data":"3eaa22868a28fd7ea2d27746d673d5a37758008cb8622a90d228a7b1a73eabf0"} Feb 28 14:46:05 crc kubenswrapper[4897]: I0228 14:46:05.540285 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3eaa22868a28fd7ea2d27746d673d5a37758008cb8622a90d228a7b1a73eabf0" Feb 28 14:46:05 crc kubenswrapper[4897]: I0228 14:46:05.539871 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538166-qrfjl" Feb 28 14:46:06 crc kubenswrapper[4897]: I0228 14:46:06.059674 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538160-mjvdr"] Feb 28 14:46:06 crc kubenswrapper[4897]: I0228 14:46:06.071887 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538160-mjvdr"] Feb 28 14:46:06 crc kubenswrapper[4897]: I0228 14:46:06.477156 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56142008-17ca-4caf-90f8-6588f0d2cec2" path="/var/lib/kubelet/pods/56142008-17ca-4caf-90f8-6588f0d2cec2/volumes" Feb 28 14:46:25 crc kubenswrapper[4897]: I0228 14:46:25.660598 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s5xgx"] Feb 28 14:46:25 crc kubenswrapper[4897]: E0228 14:46:25.662041 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="397505ea-12f0-4055-8f18-72e80a5a6323" containerName="oc" Feb 28 14:46:25 crc kubenswrapper[4897]: I0228 14:46:25.662215 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="397505ea-12f0-4055-8f18-72e80a5a6323" containerName="oc" Feb 28 14:46:25 crc kubenswrapper[4897]: I0228 14:46:25.666884 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="397505ea-12f0-4055-8f18-72e80a5a6323" containerName="oc" Feb 28 14:46:25 crc kubenswrapper[4897]: I0228 14:46:25.674276 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s5xgx" Feb 28 14:46:25 crc kubenswrapper[4897]: I0228 14:46:25.691555 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s5xgx"] Feb 28 14:46:25 crc kubenswrapper[4897]: I0228 14:46:25.708930 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzrmd\" (UniqueName: \"kubernetes.io/projected/4b54fc93-7c2b-4537-86cd-ca83765a1d9b-kube-api-access-nzrmd\") pod \"community-operators-s5xgx\" (UID: \"4b54fc93-7c2b-4537-86cd-ca83765a1d9b\") " pod="openshift-marketplace/community-operators-s5xgx" Feb 28 14:46:25 crc kubenswrapper[4897]: I0228 14:46:25.709152 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b54fc93-7c2b-4537-86cd-ca83765a1d9b-catalog-content\") pod \"community-operators-s5xgx\" (UID: \"4b54fc93-7c2b-4537-86cd-ca83765a1d9b\") " pod="openshift-marketplace/community-operators-s5xgx" Feb 28 14:46:25 crc kubenswrapper[4897]: I0228 14:46:25.709495 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b54fc93-7c2b-4537-86cd-ca83765a1d9b-utilities\") pod \"community-operators-s5xgx\" (UID: \"4b54fc93-7c2b-4537-86cd-ca83765a1d9b\") " pod="openshift-marketplace/community-operators-s5xgx" Feb 28 14:46:25 crc kubenswrapper[4897]: I0228 14:46:25.811665 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzrmd\" (UniqueName: \"kubernetes.io/projected/4b54fc93-7c2b-4537-86cd-ca83765a1d9b-kube-api-access-nzrmd\") pod \"community-operators-s5xgx\" (UID: \"4b54fc93-7c2b-4537-86cd-ca83765a1d9b\") " pod="openshift-marketplace/community-operators-s5xgx" Feb 28 14:46:25 crc kubenswrapper[4897]: I0228 14:46:25.811832 4897 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b54fc93-7c2b-4537-86cd-ca83765a1d9b-catalog-content\") pod \"community-operators-s5xgx\" (UID: \"4b54fc93-7c2b-4537-86cd-ca83765a1d9b\") " pod="openshift-marketplace/community-operators-s5xgx" Feb 28 14:46:25 crc kubenswrapper[4897]: I0228 14:46:25.812037 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b54fc93-7c2b-4537-86cd-ca83765a1d9b-utilities\") pod \"community-operators-s5xgx\" (UID: \"4b54fc93-7c2b-4537-86cd-ca83765a1d9b\") " pod="openshift-marketplace/community-operators-s5xgx" Feb 28 14:46:25 crc kubenswrapper[4897]: I0228 14:46:25.812890 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b54fc93-7c2b-4537-86cd-ca83765a1d9b-utilities\") pod \"community-operators-s5xgx\" (UID: \"4b54fc93-7c2b-4537-86cd-ca83765a1d9b\") " pod="openshift-marketplace/community-operators-s5xgx" Feb 28 14:46:25 crc kubenswrapper[4897]: I0228 14:46:25.813597 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b54fc93-7c2b-4537-86cd-ca83765a1d9b-catalog-content\") pod \"community-operators-s5xgx\" (UID: \"4b54fc93-7c2b-4537-86cd-ca83765a1d9b\") " pod="openshift-marketplace/community-operators-s5xgx" Feb 28 14:46:25 crc kubenswrapper[4897]: I0228 14:46:25.845414 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzrmd\" (UniqueName: \"kubernetes.io/projected/4b54fc93-7c2b-4537-86cd-ca83765a1d9b-kube-api-access-nzrmd\") pod \"community-operators-s5xgx\" (UID: \"4b54fc93-7c2b-4537-86cd-ca83765a1d9b\") " pod="openshift-marketplace/community-operators-s5xgx" Feb 28 14:46:26 crc kubenswrapper[4897]: I0228 14:46:26.015451 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s5xgx" Feb 28 14:46:26 crc kubenswrapper[4897]: I0228 14:46:26.544961 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s5xgx"] Feb 28 14:46:26 crc kubenswrapper[4897]: I0228 14:46:26.810749 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5xgx" event={"ID":"4b54fc93-7c2b-4537-86cd-ca83765a1d9b","Type":"ContainerStarted","Data":"a217de198b7bc4539a7118fba0a2e0c8c26785e43c35eb382ba8070204b6d54a"} Feb 28 14:46:27 crc kubenswrapper[4897]: I0228 14:46:27.824888 4897 generic.go:334] "Generic (PLEG): container finished" podID="4b54fc93-7c2b-4537-86cd-ca83765a1d9b" containerID="2e876a78a2b736872a419cb3333c1d0970b237fd90c8e825888b9673b0b84e6a" exitCode=0 Feb 28 14:46:27 crc kubenswrapper[4897]: I0228 14:46:27.825003 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5xgx" event={"ID":"4b54fc93-7c2b-4537-86cd-ca83765a1d9b","Type":"ContainerDied","Data":"2e876a78a2b736872a419cb3333c1d0970b237fd90c8e825888b9673b0b84e6a"} Feb 28 14:46:28 crc kubenswrapper[4897]: E0228 14:46:28.322034 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 28 14:46:28 crc kubenswrapper[4897]: E0228 14:46:28.322425 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog 
--cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nzrmd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-s5xgx_openshift-marketplace(4b54fc93-7c2b-4537-86cd-ca83765a1d9b): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 14:46:28 crc kubenswrapper[4897]: E0228 14:46:28.323857 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest 
list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-s5xgx" podUID="4b54fc93-7c2b-4537-86cd-ca83765a1d9b" Feb 28 14:46:28 crc kubenswrapper[4897]: E0228 14:46:28.840404 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-s5xgx" podUID="4b54fc93-7c2b-4537-86cd-ca83765a1d9b" Feb 28 14:46:29 crc kubenswrapper[4897]: I0228 14:46:29.788565 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="db99e06f-c263-4aef-b5c2-330eaed29fd4" containerName="galera" probeResult="failure" output="command timed out" Feb 28 14:46:29 crc kubenswrapper[4897]: I0228 14:46:29.792334 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="db99e06f-c263-4aef-b5c2-330eaed29fd4" containerName="galera" probeResult="failure" output="command timed out" Feb 28 14:46:41 crc kubenswrapper[4897]: E0228 14:46:41.008245 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 28 14:46:41 crc kubenswrapper[4897]: E0228 14:46:41.009663 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nzrmd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-s5xgx_openshift-marketplace(4b54fc93-7c2b-4537-86cd-ca83765a1d9b): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 14:46:41 crc 
kubenswrapper[4897]: E0228 14:46:41.010918 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-s5xgx" podUID="4b54fc93-7c2b-4537-86cd-ca83765a1d9b" Feb 28 14:46:51 crc kubenswrapper[4897]: I0228 14:46:51.420750 4897 scope.go:117] "RemoveContainer" containerID="2c99ced080513ff6c4b03807890473e00c8be7a3315f7a11fecbe39c33520fb6" Feb 28 14:46:53 crc kubenswrapper[4897]: E0228 14:46:53.489202 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-s5xgx" podUID="4b54fc93-7c2b-4537-86cd-ca83765a1d9b" Feb 28 14:47:05 crc kubenswrapper[4897]: E0228 14:47:05.051445 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 28 14:47:05 crc kubenswrapper[4897]: E0228 14:47:05.052234 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nzrmd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-s5xgx_openshift-marketplace(4b54fc93-7c2b-4537-86cd-ca83765a1d9b): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 14:47:05 crc kubenswrapper[4897]: E0228 14:47:05.053430 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-s5xgx" podUID="4b54fc93-7c2b-4537-86cd-ca83765a1d9b" Feb 28 14:47:19 crc kubenswrapper[4897]: E0228 14:47:19.458780 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-s5xgx" podUID="4b54fc93-7c2b-4537-86cd-ca83765a1d9b" Feb 28 14:47:30 crc kubenswrapper[4897]: E0228 14:47:30.461639 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-s5xgx" podUID="4b54fc93-7c2b-4537-86cd-ca83765a1d9b" Feb 28 14:47:47 crc kubenswrapper[4897]: I0228 14:47:47.792437 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5xgx" event={"ID":"4b54fc93-7c2b-4537-86cd-ca83765a1d9b","Type":"ContainerStarted","Data":"81c1655ea3fbd6ef9443056e49d8da8190b1d967660c635f6b517a5a2dfb62d9"} Feb 28 14:47:48 crc kubenswrapper[4897]: I0228 14:47:48.806573 4897 generic.go:334] "Generic (PLEG): container finished" podID="4b54fc93-7c2b-4537-86cd-ca83765a1d9b" containerID="81c1655ea3fbd6ef9443056e49d8da8190b1d967660c635f6b517a5a2dfb62d9" exitCode=0 Feb 28 14:47:48 crc kubenswrapper[4897]: I0228 14:47:48.806714 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5xgx" 
event={"ID":"4b54fc93-7c2b-4537-86cd-ca83765a1d9b","Type":"ContainerDied","Data":"81c1655ea3fbd6ef9443056e49d8da8190b1d967660c635f6b517a5a2dfb62d9"} Feb 28 14:47:49 crc kubenswrapper[4897]: I0228 14:47:49.819596 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5xgx" event={"ID":"4b54fc93-7c2b-4537-86cd-ca83765a1d9b","Type":"ContainerStarted","Data":"9710def14961baa636de6fc3c0f973e69b6bfa8a8c5b9c5b350bb4c783c69d3b"} Feb 28 14:47:49 crc kubenswrapper[4897]: I0228 14:47:49.857354 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s5xgx" podStartSLOduration=3.476016944 podStartE2EDuration="1m24.857329153s" podCreationTimestamp="2026-02-28 14:46:25 +0000 UTC" firstStartedPulling="2026-02-28 14:46:27.827548257 +0000 UTC m=+5402.069868924" lastFinishedPulling="2026-02-28 14:47:49.208860476 +0000 UTC m=+5483.451181133" observedRunningTime="2026-02-28 14:47:49.840643071 +0000 UTC m=+5484.082963728" watchObservedRunningTime="2026-02-28 14:47:49.857329153 +0000 UTC m=+5484.099649850" Feb 28 14:47:56 crc kubenswrapper[4897]: I0228 14:47:56.016154 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s5xgx" Feb 28 14:47:56 crc kubenswrapper[4897]: I0228 14:47:56.018050 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s5xgx" Feb 28 14:47:56 crc kubenswrapper[4897]: I0228 14:47:56.085540 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s5xgx" Feb 28 14:47:56 crc kubenswrapper[4897]: I0228 14:47:56.170139 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s5xgx" Feb 28 14:47:56 crc kubenswrapper[4897]: I0228 14:47:56.901240 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-s5xgx"] Feb 28 14:47:58 crc kubenswrapper[4897]: I0228 14:47:58.081268 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s5xgx" podUID="4b54fc93-7c2b-4537-86cd-ca83765a1d9b" containerName="registry-server" containerID="cri-o://9710def14961baa636de6fc3c0f973e69b6bfa8a8c5b9c5b350bb4c783c69d3b" gracePeriod=2 Feb 28 14:47:59 crc kubenswrapper[4897]: I0228 14:47:59.097098 4897 generic.go:334] "Generic (PLEG): container finished" podID="4b54fc93-7c2b-4537-86cd-ca83765a1d9b" containerID="9710def14961baa636de6fc3c0f973e69b6bfa8a8c5b9c5b350bb4c783c69d3b" exitCode=0 Feb 28 14:47:59 crc kubenswrapper[4897]: I0228 14:47:59.097445 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5xgx" event={"ID":"4b54fc93-7c2b-4537-86cd-ca83765a1d9b","Type":"ContainerDied","Data":"9710def14961baa636de6fc3c0f973e69b6bfa8a8c5b9c5b350bb4c783c69d3b"} Feb 28 14:47:59 crc kubenswrapper[4897]: I0228 14:47:59.097537 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5xgx" event={"ID":"4b54fc93-7c2b-4537-86cd-ca83765a1d9b","Type":"ContainerDied","Data":"a217de198b7bc4539a7118fba0a2e0c8c26785e43c35eb382ba8070204b6d54a"} Feb 28 14:47:59 crc kubenswrapper[4897]: I0228 14:47:59.097560 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a217de198b7bc4539a7118fba0a2e0c8c26785e43c35eb382ba8070204b6d54a" Feb 28 14:47:59 crc kubenswrapper[4897]: I0228 14:47:59.183986 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s5xgx" Feb 28 14:47:59 crc kubenswrapper[4897]: I0228 14:47:59.281862 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b54fc93-7c2b-4537-86cd-ca83765a1d9b-catalog-content\") pod \"4b54fc93-7c2b-4537-86cd-ca83765a1d9b\" (UID: \"4b54fc93-7c2b-4537-86cd-ca83765a1d9b\") " Feb 28 14:47:59 crc kubenswrapper[4897]: I0228 14:47:59.281951 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzrmd\" (UniqueName: \"kubernetes.io/projected/4b54fc93-7c2b-4537-86cd-ca83765a1d9b-kube-api-access-nzrmd\") pod \"4b54fc93-7c2b-4537-86cd-ca83765a1d9b\" (UID: \"4b54fc93-7c2b-4537-86cd-ca83765a1d9b\") " Feb 28 14:47:59 crc kubenswrapper[4897]: I0228 14:47:59.282064 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b54fc93-7c2b-4537-86cd-ca83765a1d9b-utilities\") pod \"4b54fc93-7c2b-4537-86cd-ca83765a1d9b\" (UID: \"4b54fc93-7c2b-4537-86cd-ca83765a1d9b\") " Feb 28 14:47:59 crc kubenswrapper[4897]: I0228 14:47:59.283075 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b54fc93-7c2b-4537-86cd-ca83765a1d9b-utilities" (OuterVolumeSpecName: "utilities") pod "4b54fc93-7c2b-4537-86cd-ca83765a1d9b" (UID: "4b54fc93-7c2b-4537-86cd-ca83765a1d9b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:47:59 crc kubenswrapper[4897]: I0228 14:47:59.293583 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b54fc93-7c2b-4537-86cd-ca83765a1d9b-kube-api-access-nzrmd" (OuterVolumeSpecName: "kube-api-access-nzrmd") pod "4b54fc93-7c2b-4537-86cd-ca83765a1d9b" (UID: "4b54fc93-7c2b-4537-86cd-ca83765a1d9b"). InnerVolumeSpecName "kube-api-access-nzrmd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:47:59 crc kubenswrapper[4897]: I0228 14:47:59.349083 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b54fc93-7c2b-4537-86cd-ca83765a1d9b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4b54fc93-7c2b-4537-86cd-ca83765a1d9b" (UID: "4b54fc93-7c2b-4537-86cd-ca83765a1d9b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:47:59 crc kubenswrapper[4897]: I0228 14:47:59.384017 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b54fc93-7c2b-4537-86cd-ca83765a1d9b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 14:47:59 crc kubenswrapper[4897]: I0228 14:47:59.384056 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzrmd\" (UniqueName: \"kubernetes.io/projected/4b54fc93-7c2b-4537-86cd-ca83765a1d9b-kube-api-access-nzrmd\") on node \"crc\" DevicePath \"\"" Feb 28 14:47:59 crc kubenswrapper[4897]: I0228 14:47:59.384069 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b54fc93-7c2b-4537-86cd-ca83765a1d9b-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 14:48:00 crc kubenswrapper[4897]: I0228 14:48:00.116037 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s5xgx" Feb 28 14:48:00 crc kubenswrapper[4897]: I0228 14:48:00.187406 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538168-tdcg6"] Feb 28 14:48:00 crc kubenswrapper[4897]: E0228 14:48:00.188216 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b54fc93-7c2b-4537-86cd-ca83765a1d9b" containerName="extract-content" Feb 28 14:48:00 crc kubenswrapper[4897]: I0228 14:48:00.188247 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b54fc93-7c2b-4537-86cd-ca83765a1d9b" containerName="extract-content" Feb 28 14:48:00 crc kubenswrapper[4897]: E0228 14:48:00.188278 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b54fc93-7c2b-4537-86cd-ca83765a1d9b" containerName="registry-server" Feb 28 14:48:00 crc kubenswrapper[4897]: I0228 14:48:00.188296 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b54fc93-7c2b-4537-86cd-ca83765a1d9b" containerName="registry-server" Feb 28 14:48:00 crc kubenswrapper[4897]: E0228 14:48:00.188375 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b54fc93-7c2b-4537-86cd-ca83765a1d9b" containerName="extract-utilities" Feb 28 14:48:00 crc kubenswrapper[4897]: I0228 14:48:00.188395 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b54fc93-7c2b-4537-86cd-ca83765a1d9b" containerName="extract-utilities" Feb 28 14:48:00 crc kubenswrapper[4897]: I0228 14:48:00.188981 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b54fc93-7c2b-4537-86cd-ca83765a1d9b" containerName="registry-server" Feb 28 14:48:00 crc kubenswrapper[4897]: I0228 14:48:00.190219 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538168-tdcg6" Feb 28 14:48:00 crc kubenswrapper[4897]: I0228 14:48:00.192952 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 14:48:00 crc kubenswrapper[4897]: I0228 14:48:00.193215 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 14:48:00 crc kubenswrapper[4897]: I0228 14:48:00.193336 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 14:48:00 crc kubenswrapper[4897]: I0228 14:48:00.203085 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538168-tdcg6"] Feb 28 14:48:00 crc kubenswrapper[4897]: I0228 14:48:00.213495 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s5xgx"] Feb 28 14:48:00 crc kubenswrapper[4897]: I0228 14:48:00.223931 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s5xgx"] Feb 28 14:48:00 crc kubenswrapper[4897]: I0228 14:48:00.322883 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2zzb\" (UniqueName: \"kubernetes.io/projected/f7fe4fbb-5283-4853-9032-b4dccf807d43-kube-api-access-d2zzb\") pod \"auto-csr-approver-29538168-tdcg6\" (UID: \"f7fe4fbb-5283-4853-9032-b4dccf807d43\") " pod="openshift-infra/auto-csr-approver-29538168-tdcg6" Feb 28 14:48:00 crc kubenswrapper[4897]: I0228 14:48:00.425496 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2zzb\" (UniqueName: \"kubernetes.io/projected/f7fe4fbb-5283-4853-9032-b4dccf807d43-kube-api-access-d2zzb\") pod \"auto-csr-approver-29538168-tdcg6\" (UID: \"f7fe4fbb-5283-4853-9032-b4dccf807d43\") " pod="openshift-infra/auto-csr-approver-29538168-tdcg6" Feb 28 14:48:00 
crc kubenswrapper[4897]: I0228 14:48:00.474478 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b54fc93-7c2b-4537-86cd-ca83765a1d9b" path="/var/lib/kubelet/pods/4b54fc93-7c2b-4537-86cd-ca83765a1d9b/volumes" Feb 28 14:48:01 crc kubenswrapper[4897]: I0228 14:48:01.108717 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2zzb\" (UniqueName: \"kubernetes.io/projected/f7fe4fbb-5283-4853-9032-b4dccf807d43-kube-api-access-d2zzb\") pod \"auto-csr-approver-29538168-tdcg6\" (UID: \"f7fe4fbb-5283-4853-9032-b4dccf807d43\") " pod="openshift-infra/auto-csr-approver-29538168-tdcg6" Feb 28 14:48:01 crc kubenswrapper[4897]: I0228 14:48:01.133433 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538168-tdcg6" Feb 28 14:48:01 crc kubenswrapper[4897]: I0228 14:48:01.703630 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538168-tdcg6"] Feb 28 14:48:01 crc kubenswrapper[4897]: W0228 14:48:01.706694 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7fe4fbb_5283_4853_9032_b4dccf807d43.slice/crio-e6e1ff42d8a64bc90aa288a34a3fd0ad070ccda55769f6c0ca4619c4258af749 WatchSource:0}: Error finding container e6e1ff42d8a64bc90aa288a34a3fd0ad070ccda55769f6c0ca4619c4258af749: Status 404 returned error can't find the container with id e6e1ff42d8a64bc90aa288a34a3fd0ad070ccda55769f6c0ca4619c4258af749 Feb 28 14:48:02 crc kubenswrapper[4897]: I0228 14:48:02.156210 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538168-tdcg6" event={"ID":"f7fe4fbb-5283-4853-9032-b4dccf807d43","Type":"ContainerStarted","Data":"e6e1ff42d8a64bc90aa288a34a3fd0ad070ccda55769f6c0ca4619c4258af749"} Feb 28 14:48:03 crc kubenswrapper[4897]: I0228 14:48:03.170762 4897 generic.go:334] "Generic (PLEG): container finished" 
podID="f7fe4fbb-5283-4853-9032-b4dccf807d43" containerID="70b2540e803675fc256d13f9ab8c158bf72b62e8087461675d1cd40ba239c2ed" exitCode=0 Feb 28 14:48:03 crc kubenswrapper[4897]: I0228 14:48:03.170818 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538168-tdcg6" event={"ID":"f7fe4fbb-5283-4853-9032-b4dccf807d43","Type":"ContainerDied","Data":"70b2540e803675fc256d13f9ab8c158bf72b62e8087461675d1cd40ba239c2ed"} Feb 28 14:48:03 crc kubenswrapper[4897]: I0228 14:48:03.371080 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 14:48:03 crc kubenswrapper[4897]: I0228 14:48:03.371166 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 14:48:04 crc kubenswrapper[4897]: I0228 14:48:04.628268 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538168-tdcg6" Feb 28 14:48:04 crc kubenswrapper[4897]: I0228 14:48:04.740778 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2zzb\" (UniqueName: \"kubernetes.io/projected/f7fe4fbb-5283-4853-9032-b4dccf807d43-kube-api-access-d2zzb\") pod \"f7fe4fbb-5283-4853-9032-b4dccf807d43\" (UID: \"f7fe4fbb-5283-4853-9032-b4dccf807d43\") " Feb 28 14:48:04 crc kubenswrapper[4897]: I0228 14:48:04.748540 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7fe4fbb-5283-4853-9032-b4dccf807d43-kube-api-access-d2zzb" (OuterVolumeSpecName: "kube-api-access-d2zzb") pod "f7fe4fbb-5283-4853-9032-b4dccf807d43" (UID: "f7fe4fbb-5283-4853-9032-b4dccf807d43"). InnerVolumeSpecName "kube-api-access-d2zzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:48:04 crc kubenswrapper[4897]: I0228 14:48:04.843476 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2zzb\" (UniqueName: \"kubernetes.io/projected/f7fe4fbb-5283-4853-9032-b4dccf807d43-kube-api-access-d2zzb\") on node \"crc\" DevicePath \"\"" Feb 28 14:48:05 crc kubenswrapper[4897]: I0228 14:48:05.213814 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538168-tdcg6" event={"ID":"f7fe4fbb-5283-4853-9032-b4dccf807d43","Type":"ContainerDied","Data":"e6e1ff42d8a64bc90aa288a34a3fd0ad070ccda55769f6c0ca4619c4258af749"} Feb 28 14:48:05 crc kubenswrapper[4897]: I0228 14:48:05.213878 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6e1ff42d8a64bc90aa288a34a3fd0ad070ccda55769f6c0ca4619c4258af749" Feb 28 14:48:05 crc kubenswrapper[4897]: I0228 14:48:05.213958 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538168-tdcg6" Feb 28 14:48:05 crc kubenswrapper[4897]: I0228 14:48:05.723916 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538162-kmcqc"] Feb 28 14:48:05 crc kubenswrapper[4897]: I0228 14:48:05.736841 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538162-kmcqc"] Feb 28 14:48:06 crc kubenswrapper[4897]: I0228 14:48:06.476126 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18345f62-4936-476c-85bf-2634b07217b7" path="/var/lib/kubelet/pods/18345f62-4936-476c-85bf-2634b07217b7/volumes" Feb 28 14:48:33 crc kubenswrapper[4897]: I0228 14:48:33.370372 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 14:48:33 crc kubenswrapper[4897]: I0228 14:48:33.370823 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 14:48:41 crc kubenswrapper[4897]: I0228 14:48:41.189003 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nj9z8"] Feb 28 14:48:41 crc kubenswrapper[4897]: E0228 14:48:41.190759 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7fe4fbb-5283-4853-9032-b4dccf807d43" containerName="oc" Feb 28 14:48:41 crc kubenswrapper[4897]: I0228 14:48:41.190792 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7fe4fbb-5283-4853-9032-b4dccf807d43" containerName="oc" Feb 28 14:48:41 crc 
kubenswrapper[4897]: I0228 14:48:41.191350 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7fe4fbb-5283-4853-9032-b4dccf807d43" containerName="oc" Feb 28 14:48:41 crc kubenswrapper[4897]: I0228 14:48:41.199252 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nj9z8" Feb 28 14:48:41 crc kubenswrapper[4897]: I0228 14:48:41.202024 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nj9z8"] Feb 28 14:48:41 crc kubenswrapper[4897]: I0228 14:48:41.226591 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a425bda-3025-4151-8269-32f36f4ccf69-utilities\") pod \"certified-operators-nj9z8\" (UID: \"4a425bda-3025-4151-8269-32f36f4ccf69\") " pod="openshift-marketplace/certified-operators-nj9z8" Feb 28 14:48:41 crc kubenswrapper[4897]: I0228 14:48:41.227003 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a425bda-3025-4151-8269-32f36f4ccf69-catalog-content\") pod \"certified-operators-nj9z8\" (UID: \"4a425bda-3025-4151-8269-32f36f4ccf69\") " pod="openshift-marketplace/certified-operators-nj9z8" Feb 28 14:48:41 crc kubenswrapper[4897]: I0228 14:48:41.227547 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr4qw\" (UniqueName: \"kubernetes.io/projected/4a425bda-3025-4151-8269-32f36f4ccf69-kube-api-access-pr4qw\") pod \"certified-operators-nj9z8\" (UID: \"4a425bda-3025-4151-8269-32f36f4ccf69\") " pod="openshift-marketplace/certified-operators-nj9z8" Feb 28 14:48:41 crc kubenswrapper[4897]: I0228 14:48:41.329781 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr4qw\" (UniqueName: 
\"kubernetes.io/projected/4a425bda-3025-4151-8269-32f36f4ccf69-kube-api-access-pr4qw\") pod \"certified-operators-nj9z8\" (UID: \"4a425bda-3025-4151-8269-32f36f4ccf69\") " pod="openshift-marketplace/certified-operators-nj9z8" Feb 28 14:48:41 crc kubenswrapper[4897]: I0228 14:48:41.329906 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a425bda-3025-4151-8269-32f36f4ccf69-utilities\") pod \"certified-operators-nj9z8\" (UID: \"4a425bda-3025-4151-8269-32f36f4ccf69\") " pod="openshift-marketplace/certified-operators-nj9z8" Feb 28 14:48:41 crc kubenswrapper[4897]: I0228 14:48:41.330009 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a425bda-3025-4151-8269-32f36f4ccf69-catalog-content\") pod \"certified-operators-nj9z8\" (UID: \"4a425bda-3025-4151-8269-32f36f4ccf69\") " pod="openshift-marketplace/certified-operators-nj9z8" Feb 28 14:48:41 crc kubenswrapper[4897]: I0228 14:48:41.330740 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a425bda-3025-4151-8269-32f36f4ccf69-utilities\") pod \"certified-operators-nj9z8\" (UID: \"4a425bda-3025-4151-8269-32f36f4ccf69\") " pod="openshift-marketplace/certified-operators-nj9z8" Feb 28 14:48:41 crc kubenswrapper[4897]: I0228 14:48:41.330832 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a425bda-3025-4151-8269-32f36f4ccf69-catalog-content\") pod \"certified-operators-nj9z8\" (UID: \"4a425bda-3025-4151-8269-32f36f4ccf69\") " pod="openshift-marketplace/certified-operators-nj9z8" Feb 28 14:48:41 crc kubenswrapper[4897]: I0228 14:48:41.361556 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr4qw\" (UniqueName: 
\"kubernetes.io/projected/4a425bda-3025-4151-8269-32f36f4ccf69-kube-api-access-pr4qw\") pod \"certified-operators-nj9z8\" (UID: \"4a425bda-3025-4151-8269-32f36f4ccf69\") " pod="openshift-marketplace/certified-operators-nj9z8" Feb 28 14:48:41 crc kubenswrapper[4897]: I0228 14:48:41.538699 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nj9z8" Feb 28 14:48:42 crc kubenswrapper[4897]: I0228 14:48:42.055289 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nj9z8"] Feb 28 14:48:42 crc kubenswrapper[4897]: I0228 14:48:42.664861 4897 generic.go:334] "Generic (PLEG): container finished" podID="4a425bda-3025-4151-8269-32f36f4ccf69" containerID="55a509aec3a02431096963dff2f02895727fdbfed7fa84ff9b424ebb971a565e" exitCode=0 Feb 28 14:48:42 crc kubenswrapper[4897]: I0228 14:48:42.665071 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nj9z8" event={"ID":"4a425bda-3025-4151-8269-32f36f4ccf69","Type":"ContainerDied","Data":"55a509aec3a02431096963dff2f02895727fdbfed7fa84ff9b424ebb971a565e"} Feb 28 14:48:42 crc kubenswrapper[4897]: I0228 14:48:42.665193 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nj9z8" event={"ID":"4a425bda-3025-4151-8269-32f36f4ccf69","Type":"ContainerStarted","Data":"e0510dd897fffa1efbaf0632da5a35c3efadce6acd34ccfb5138c7f50ec53af1"} Feb 28 14:48:43 crc kubenswrapper[4897]: I0228 14:48:43.679002 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nj9z8" event={"ID":"4a425bda-3025-4151-8269-32f36f4ccf69","Type":"ContainerStarted","Data":"6367850af86e2513c2382cc5659ebfbde65e6a4123bd0e740e65986b805e9474"} Feb 28 14:48:45 crc kubenswrapper[4897]: I0228 14:48:45.704477 4897 generic.go:334] "Generic (PLEG): container finished" podID="4a425bda-3025-4151-8269-32f36f4ccf69" 
containerID="6367850af86e2513c2382cc5659ebfbde65e6a4123bd0e740e65986b805e9474" exitCode=0 Feb 28 14:48:45 crc kubenswrapper[4897]: I0228 14:48:45.704531 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nj9z8" event={"ID":"4a425bda-3025-4151-8269-32f36f4ccf69","Type":"ContainerDied","Data":"6367850af86e2513c2382cc5659ebfbde65e6a4123bd0e740e65986b805e9474"} Feb 28 14:48:46 crc kubenswrapper[4897]: I0228 14:48:46.713354 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nj9z8" event={"ID":"4a425bda-3025-4151-8269-32f36f4ccf69","Type":"ContainerStarted","Data":"81494f62e0c006b9b3e373c33a3091c7c99e2aedc887c8bf068934bd69521191"} Feb 28 14:48:46 crc kubenswrapper[4897]: I0228 14:48:46.740878 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nj9z8" podStartSLOduration=2.253804284 podStartE2EDuration="5.740864626s" podCreationTimestamp="2026-02-28 14:48:41 +0000 UTC" firstStartedPulling="2026-02-28 14:48:42.667178556 +0000 UTC m=+5536.909499223" lastFinishedPulling="2026-02-28 14:48:46.154238898 +0000 UTC m=+5540.396559565" observedRunningTime="2026-02-28 14:48:46.733332663 +0000 UTC m=+5540.975653320" watchObservedRunningTime="2026-02-28 14:48:46.740864626 +0000 UTC m=+5540.983185283" Feb 28 14:48:51 crc kubenswrapper[4897]: I0228 14:48:51.541663 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nj9z8" Feb 28 14:48:51 crc kubenswrapper[4897]: I0228 14:48:51.542473 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nj9z8" Feb 28 14:48:51 crc kubenswrapper[4897]: I0228 14:48:51.553416 4897 scope.go:117] "RemoveContainer" containerID="99c0995b3c8219487b31033a12014b5e280b463ccb575a67247d20561f43212e" Feb 28 14:48:51 crc kubenswrapper[4897]: I0228 14:48:51.722568 4897 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nj9z8" Feb 28 14:48:51 crc kubenswrapper[4897]: I0228 14:48:51.829341 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nj9z8" Feb 28 14:48:51 crc kubenswrapper[4897]: I0228 14:48:51.978965 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nj9z8"] Feb 28 14:48:53 crc kubenswrapper[4897]: I0228 14:48:53.789874 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nj9z8" podUID="4a425bda-3025-4151-8269-32f36f4ccf69" containerName="registry-server" containerID="cri-o://81494f62e0c006b9b3e373c33a3091c7c99e2aedc887c8bf068934bd69521191" gracePeriod=2 Feb 28 14:48:54 crc kubenswrapper[4897]: E0228 14:48:54.409720 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a425bda_3025_4151_8269_32f36f4ccf69.slice/crio-conmon-81494f62e0c006b9b3e373c33a3091c7c99e2aedc887c8bf068934bd69521191.scope\": RecentStats: unable to find data in memory cache]" Feb 28 14:48:54 crc kubenswrapper[4897]: I0228 14:48:54.626661 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nj9z8" Feb 28 14:48:54 crc kubenswrapper[4897]: I0228 14:48:54.782201 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pr4qw\" (UniqueName: \"kubernetes.io/projected/4a425bda-3025-4151-8269-32f36f4ccf69-kube-api-access-pr4qw\") pod \"4a425bda-3025-4151-8269-32f36f4ccf69\" (UID: \"4a425bda-3025-4151-8269-32f36f4ccf69\") " Feb 28 14:48:54 crc kubenswrapper[4897]: I0228 14:48:54.782338 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a425bda-3025-4151-8269-32f36f4ccf69-catalog-content\") pod \"4a425bda-3025-4151-8269-32f36f4ccf69\" (UID: \"4a425bda-3025-4151-8269-32f36f4ccf69\") " Feb 28 14:48:54 crc kubenswrapper[4897]: I0228 14:48:54.782588 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a425bda-3025-4151-8269-32f36f4ccf69-utilities\") pod \"4a425bda-3025-4151-8269-32f36f4ccf69\" (UID: \"4a425bda-3025-4151-8269-32f36f4ccf69\") " Feb 28 14:48:54 crc kubenswrapper[4897]: I0228 14:48:54.784215 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a425bda-3025-4151-8269-32f36f4ccf69-utilities" (OuterVolumeSpecName: "utilities") pod "4a425bda-3025-4151-8269-32f36f4ccf69" (UID: "4a425bda-3025-4151-8269-32f36f4ccf69"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:48:54 crc kubenswrapper[4897]: I0228 14:48:54.796924 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a425bda-3025-4151-8269-32f36f4ccf69-kube-api-access-pr4qw" (OuterVolumeSpecName: "kube-api-access-pr4qw") pod "4a425bda-3025-4151-8269-32f36f4ccf69" (UID: "4a425bda-3025-4151-8269-32f36f4ccf69"). InnerVolumeSpecName "kube-api-access-pr4qw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:48:54 crc kubenswrapper[4897]: I0228 14:48:54.805259 4897 generic.go:334] "Generic (PLEG): container finished" podID="4a425bda-3025-4151-8269-32f36f4ccf69" containerID="81494f62e0c006b9b3e373c33a3091c7c99e2aedc887c8bf068934bd69521191" exitCode=0 Feb 28 14:48:54 crc kubenswrapper[4897]: I0228 14:48:54.805339 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nj9z8" event={"ID":"4a425bda-3025-4151-8269-32f36f4ccf69","Type":"ContainerDied","Data":"81494f62e0c006b9b3e373c33a3091c7c99e2aedc887c8bf068934bd69521191"} Feb 28 14:48:54 crc kubenswrapper[4897]: I0228 14:48:54.805379 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nj9z8" event={"ID":"4a425bda-3025-4151-8269-32f36f4ccf69","Type":"ContainerDied","Data":"e0510dd897fffa1efbaf0632da5a35c3efadce6acd34ccfb5138c7f50ec53af1"} Feb 28 14:48:54 crc kubenswrapper[4897]: I0228 14:48:54.805406 4897 scope.go:117] "RemoveContainer" containerID="81494f62e0c006b9b3e373c33a3091c7c99e2aedc887c8bf068934bd69521191" Feb 28 14:48:54 crc kubenswrapper[4897]: I0228 14:48:54.805591 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nj9z8" Feb 28 14:48:54 crc kubenswrapper[4897]: I0228 14:48:54.883730 4897 scope.go:117] "RemoveContainer" containerID="6367850af86e2513c2382cc5659ebfbde65e6a4123bd0e740e65986b805e9474" Feb 28 14:48:54 crc kubenswrapper[4897]: I0228 14:48:54.888433 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a425bda-3025-4151-8269-32f36f4ccf69-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 14:48:54 crc kubenswrapper[4897]: I0228 14:48:54.888473 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pr4qw\" (UniqueName: \"kubernetes.io/projected/4a425bda-3025-4151-8269-32f36f4ccf69-kube-api-access-pr4qw\") on node \"crc\" DevicePath \"\"" Feb 28 14:48:54 crc kubenswrapper[4897]: I0228 14:48:54.912672 4897 scope.go:117] "RemoveContainer" containerID="55a509aec3a02431096963dff2f02895727fdbfed7fa84ff9b424ebb971a565e" Feb 28 14:48:54 crc kubenswrapper[4897]: I0228 14:48:54.999854 4897 scope.go:117] "RemoveContainer" containerID="81494f62e0c006b9b3e373c33a3091c7c99e2aedc887c8bf068934bd69521191" Feb 28 14:48:55 crc kubenswrapper[4897]: E0228 14:48:55.000886 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81494f62e0c006b9b3e373c33a3091c7c99e2aedc887c8bf068934bd69521191\": container with ID starting with 81494f62e0c006b9b3e373c33a3091c7c99e2aedc887c8bf068934bd69521191 not found: ID does not exist" containerID="81494f62e0c006b9b3e373c33a3091c7c99e2aedc887c8bf068934bd69521191" Feb 28 14:48:55 crc kubenswrapper[4897]: I0228 14:48:55.000957 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81494f62e0c006b9b3e373c33a3091c7c99e2aedc887c8bf068934bd69521191"} err="failed to get container status \"81494f62e0c006b9b3e373c33a3091c7c99e2aedc887c8bf068934bd69521191\": rpc error: code = NotFound desc = could 
not find container \"81494f62e0c006b9b3e373c33a3091c7c99e2aedc887c8bf068934bd69521191\": container with ID starting with 81494f62e0c006b9b3e373c33a3091c7c99e2aedc887c8bf068934bd69521191 not found: ID does not exist" Feb 28 14:48:55 crc kubenswrapper[4897]: I0228 14:48:55.000999 4897 scope.go:117] "RemoveContainer" containerID="6367850af86e2513c2382cc5659ebfbde65e6a4123bd0e740e65986b805e9474" Feb 28 14:48:55 crc kubenswrapper[4897]: E0228 14:48:55.002256 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6367850af86e2513c2382cc5659ebfbde65e6a4123bd0e740e65986b805e9474\": container with ID starting with 6367850af86e2513c2382cc5659ebfbde65e6a4123bd0e740e65986b805e9474 not found: ID does not exist" containerID="6367850af86e2513c2382cc5659ebfbde65e6a4123bd0e740e65986b805e9474" Feb 28 14:48:55 crc kubenswrapper[4897]: I0228 14:48:55.002338 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6367850af86e2513c2382cc5659ebfbde65e6a4123bd0e740e65986b805e9474"} err="failed to get container status \"6367850af86e2513c2382cc5659ebfbde65e6a4123bd0e740e65986b805e9474\": rpc error: code = NotFound desc = could not find container \"6367850af86e2513c2382cc5659ebfbde65e6a4123bd0e740e65986b805e9474\": container with ID starting with 6367850af86e2513c2382cc5659ebfbde65e6a4123bd0e740e65986b805e9474 not found: ID does not exist" Feb 28 14:48:55 crc kubenswrapper[4897]: I0228 14:48:55.002378 4897 scope.go:117] "RemoveContainer" containerID="55a509aec3a02431096963dff2f02895727fdbfed7fa84ff9b424ebb971a565e" Feb 28 14:48:55 crc kubenswrapper[4897]: E0228 14:48:55.003115 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55a509aec3a02431096963dff2f02895727fdbfed7fa84ff9b424ebb971a565e\": container with ID starting with 55a509aec3a02431096963dff2f02895727fdbfed7fa84ff9b424ebb971a565e not found: 
ID does not exist" containerID="55a509aec3a02431096963dff2f02895727fdbfed7fa84ff9b424ebb971a565e" Feb 28 14:48:55 crc kubenswrapper[4897]: I0228 14:48:55.003164 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55a509aec3a02431096963dff2f02895727fdbfed7fa84ff9b424ebb971a565e"} err="failed to get container status \"55a509aec3a02431096963dff2f02895727fdbfed7fa84ff9b424ebb971a565e\": rpc error: code = NotFound desc = could not find container \"55a509aec3a02431096963dff2f02895727fdbfed7fa84ff9b424ebb971a565e\": container with ID starting with 55a509aec3a02431096963dff2f02895727fdbfed7fa84ff9b424ebb971a565e not found: ID does not exist" Feb 28 14:48:55 crc kubenswrapper[4897]: I0228 14:48:55.057519 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a425bda-3025-4151-8269-32f36f4ccf69-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4a425bda-3025-4151-8269-32f36f4ccf69" (UID: "4a425bda-3025-4151-8269-32f36f4ccf69"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:48:55 crc kubenswrapper[4897]: I0228 14:48:55.092362 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a425bda-3025-4151-8269-32f36f4ccf69-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 14:48:55 crc kubenswrapper[4897]: I0228 14:48:55.167543 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nj9z8"] Feb 28 14:48:55 crc kubenswrapper[4897]: I0228 14:48:55.181018 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nj9z8"] Feb 28 14:48:56 crc kubenswrapper[4897]: I0228 14:48:56.473146 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a425bda-3025-4151-8269-32f36f4ccf69" path="/var/lib/kubelet/pods/4a425bda-3025-4151-8269-32f36f4ccf69/volumes" Feb 28 14:49:03 crc kubenswrapper[4897]: I0228 14:49:03.370888 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 14:49:03 crc kubenswrapper[4897]: I0228 14:49:03.371534 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 14:49:03 crc kubenswrapper[4897]: I0228 14:49:03.371582 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brq22" Feb 28 14:49:03 crc kubenswrapper[4897]: I0228 14:49:03.372355 4897 kuberuntime_manager.go:1027] "Message for Container of 
pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9d4bb4972da82c5eea4e11899b0e2591e599978f5150be5eb00bc3577100eafb"} pod="openshift-machine-config-operator/machine-config-daemon-brq22" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 14:49:03 crc kubenswrapper[4897]: I0228 14:49:03.372409 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" containerID="cri-o://9d4bb4972da82c5eea4e11899b0e2591e599978f5150be5eb00bc3577100eafb" gracePeriod=600 Feb 28 14:49:03 crc kubenswrapper[4897]: I0228 14:49:03.907552 4897 generic.go:334] "Generic (PLEG): container finished" podID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerID="9d4bb4972da82c5eea4e11899b0e2591e599978f5150be5eb00bc3577100eafb" exitCode=0 Feb 28 14:49:03 crc kubenswrapper[4897]: I0228 14:49:03.907644 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerDied","Data":"9d4bb4972da82c5eea4e11899b0e2591e599978f5150be5eb00bc3577100eafb"} Feb 28 14:49:03 crc kubenswrapper[4897]: I0228 14:49:03.907919 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerStarted","Data":"eeaafc379056961bcfeca41078c7ab438aa415eb103c6d894f500936af0c4d8e"} Feb 28 14:49:03 crc kubenswrapper[4897]: I0228 14:49:03.907943 4897 scope.go:117] "RemoveContainer" containerID="3eec40c4cd69e7bd4645f3609c97b34d827169ce5d38c73cc72ba33c8af50e99" Feb 28 14:50:00 crc kubenswrapper[4897]: I0228 14:50:00.185798 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538170-r4mjg"] Feb 28 14:50:00 
crc kubenswrapper[4897]: E0228 14:50:00.187723 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a425bda-3025-4151-8269-32f36f4ccf69" containerName="extract-content" Feb 28 14:50:00 crc kubenswrapper[4897]: I0228 14:50:00.187759 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a425bda-3025-4151-8269-32f36f4ccf69" containerName="extract-content" Feb 28 14:50:00 crc kubenswrapper[4897]: E0228 14:50:00.187829 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a425bda-3025-4151-8269-32f36f4ccf69" containerName="registry-server" Feb 28 14:50:00 crc kubenswrapper[4897]: I0228 14:50:00.187847 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a425bda-3025-4151-8269-32f36f4ccf69" containerName="registry-server" Feb 28 14:50:00 crc kubenswrapper[4897]: E0228 14:50:00.187926 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a425bda-3025-4151-8269-32f36f4ccf69" containerName="extract-utilities" Feb 28 14:50:00 crc kubenswrapper[4897]: I0228 14:50:00.187946 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a425bda-3025-4151-8269-32f36f4ccf69" containerName="extract-utilities" Feb 28 14:50:00 crc kubenswrapper[4897]: I0228 14:50:00.188441 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a425bda-3025-4151-8269-32f36f4ccf69" containerName="registry-server" Feb 28 14:50:00 crc kubenswrapper[4897]: I0228 14:50:00.191259 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538170-r4mjg" Feb 28 14:50:00 crc kubenswrapper[4897]: I0228 14:50:00.195651 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 14:50:00 crc kubenswrapper[4897]: I0228 14:50:00.196570 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 14:50:00 crc kubenswrapper[4897]: I0228 14:50:00.197475 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 14:50:00 crc kubenswrapper[4897]: I0228 14:50:00.202306 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538170-r4mjg"] Feb 28 14:50:00 crc kubenswrapper[4897]: I0228 14:50:00.244192 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4hdh\" (UniqueName: \"kubernetes.io/projected/191af18f-b904-42c5-b5c7-601a1cbdbebf-kube-api-access-k4hdh\") pod \"auto-csr-approver-29538170-r4mjg\" (UID: \"191af18f-b904-42c5-b5c7-601a1cbdbebf\") " pod="openshift-infra/auto-csr-approver-29538170-r4mjg" Feb 28 14:50:00 crc kubenswrapper[4897]: I0228 14:50:00.346537 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4hdh\" (UniqueName: \"kubernetes.io/projected/191af18f-b904-42c5-b5c7-601a1cbdbebf-kube-api-access-k4hdh\") pod \"auto-csr-approver-29538170-r4mjg\" (UID: \"191af18f-b904-42c5-b5c7-601a1cbdbebf\") " pod="openshift-infra/auto-csr-approver-29538170-r4mjg" Feb 28 14:50:00 crc kubenswrapper[4897]: I0228 14:50:00.363083 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4hdh\" (UniqueName: \"kubernetes.io/projected/191af18f-b904-42c5-b5c7-601a1cbdbebf-kube-api-access-k4hdh\") pod \"auto-csr-approver-29538170-r4mjg\" (UID: \"191af18f-b904-42c5-b5c7-601a1cbdbebf\") " 
pod="openshift-infra/auto-csr-approver-29538170-r4mjg" Feb 28 14:50:00 crc kubenswrapper[4897]: I0228 14:50:00.526004 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538170-r4mjg" Feb 28 14:50:01 crc kubenswrapper[4897]: I0228 14:50:01.005845 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538170-r4mjg"] Feb 28 14:50:01 crc kubenswrapper[4897]: I0228 14:50:01.012896 4897 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 14:50:01 crc kubenswrapper[4897]: I0228 14:50:01.647562 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538170-r4mjg" event={"ID":"191af18f-b904-42c5-b5c7-601a1cbdbebf","Type":"ContainerStarted","Data":"e7cb5ad13c65f7373b3bb5540d94dcbc550b569ecfa8d8a327faabc630f931f8"} Feb 28 14:50:02 crc kubenswrapper[4897]: I0228 14:50:02.658208 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538170-r4mjg" event={"ID":"191af18f-b904-42c5-b5c7-601a1cbdbebf","Type":"ContainerStarted","Data":"314e8ae253181750329ef70cccd577ea25baf610d6401ffe5a076fee22ea987f"} Feb 28 14:50:02 crc kubenswrapper[4897]: I0228 14:50:02.681974 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29538170-r4mjg" podStartSLOduration=1.571389531 podStartE2EDuration="2.681954704s" podCreationTimestamp="2026-02-28 14:50:00 +0000 UTC" firstStartedPulling="2026-02-28 14:50:01.012719613 +0000 UTC m=+5615.255040270" lastFinishedPulling="2026-02-28 14:50:02.123284756 +0000 UTC m=+5616.365605443" observedRunningTime="2026-02-28 14:50:02.674561965 +0000 UTC m=+5616.916882642" watchObservedRunningTime="2026-02-28 14:50:02.681954704 +0000 UTC m=+5616.924275371" Feb 28 14:50:03 crc kubenswrapper[4897]: I0228 14:50:03.671739 4897 generic.go:334] "Generic (PLEG): container finished" 
podID="191af18f-b904-42c5-b5c7-601a1cbdbebf" containerID="314e8ae253181750329ef70cccd577ea25baf610d6401ffe5a076fee22ea987f" exitCode=0 Feb 28 14:50:03 crc kubenswrapper[4897]: I0228 14:50:03.671787 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538170-r4mjg" event={"ID":"191af18f-b904-42c5-b5c7-601a1cbdbebf","Type":"ContainerDied","Data":"314e8ae253181750329ef70cccd577ea25baf610d6401ffe5a076fee22ea987f"} Feb 28 14:50:05 crc kubenswrapper[4897]: I0228 14:50:05.034638 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538170-r4mjg" Feb 28 14:50:05 crc kubenswrapper[4897]: I0228 14:50:05.082212 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4hdh\" (UniqueName: \"kubernetes.io/projected/191af18f-b904-42c5-b5c7-601a1cbdbebf-kube-api-access-k4hdh\") pod \"191af18f-b904-42c5-b5c7-601a1cbdbebf\" (UID: \"191af18f-b904-42c5-b5c7-601a1cbdbebf\") " Feb 28 14:50:05 crc kubenswrapper[4897]: I0228 14:50:05.091386 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/191af18f-b904-42c5-b5c7-601a1cbdbebf-kube-api-access-k4hdh" (OuterVolumeSpecName: "kube-api-access-k4hdh") pod "191af18f-b904-42c5-b5c7-601a1cbdbebf" (UID: "191af18f-b904-42c5-b5c7-601a1cbdbebf"). InnerVolumeSpecName "kube-api-access-k4hdh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:50:05 crc kubenswrapper[4897]: I0228 14:50:05.185594 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4hdh\" (UniqueName: \"kubernetes.io/projected/191af18f-b904-42c5-b5c7-601a1cbdbebf-kube-api-access-k4hdh\") on node \"crc\" DevicePath \"\"" Feb 28 14:50:05 crc kubenswrapper[4897]: I0228 14:50:05.694929 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538170-r4mjg" event={"ID":"191af18f-b904-42c5-b5c7-601a1cbdbebf","Type":"ContainerDied","Data":"e7cb5ad13c65f7373b3bb5540d94dcbc550b569ecfa8d8a327faabc630f931f8"} Feb 28 14:50:05 crc kubenswrapper[4897]: I0228 14:50:05.695355 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7cb5ad13c65f7373b3bb5540d94dcbc550b569ecfa8d8a327faabc630f931f8" Feb 28 14:50:05 crc kubenswrapper[4897]: I0228 14:50:05.695432 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538170-r4mjg" Feb 28 14:50:05 crc kubenswrapper[4897]: I0228 14:50:05.765258 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538164-dgr8s"] Feb 28 14:50:05 crc kubenswrapper[4897]: I0228 14:50:05.776140 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538164-dgr8s"] Feb 28 14:50:06 crc kubenswrapper[4897]: I0228 14:50:06.470782 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2415e6de-f719-40c7-a79f-fb39ce0872a1" path="/var/lib/kubelet/pods/2415e6de-f719-40c7-a79f-fb39ce0872a1/volumes" Feb 28 14:50:51 crc kubenswrapper[4897]: I0228 14:50:51.775476 4897 scope.go:117] "RemoveContainer" containerID="b56a7205790f26d1bbb93a619f0f64c94cfa81e7b6c4695c723e4ff275a866bb" Feb 28 14:51:03 crc kubenswrapper[4897]: I0228 14:51:03.370930 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 14:51:03 crc kubenswrapper[4897]: I0228 14:51:03.371384 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 14:51:33 crc kubenswrapper[4897]: I0228 14:51:33.370643 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 14:51:33 crc kubenswrapper[4897]: I0228 14:51:33.371377 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 14:51:51 crc kubenswrapper[4897]: I0228 14:51:51.425294 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c4tj5"] Feb 28 14:51:51 crc kubenswrapper[4897]: E0228 14:51:51.426241 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="191af18f-b904-42c5-b5c7-601a1cbdbebf" containerName="oc" Feb 28 14:51:51 crc kubenswrapper[4897]: I0228 14:51:51.426255 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="191af18f-b904-42c5-b5c7-601a1cbdbebf" containerName="oc" Feb 28 14:51:51 crc kubenswrapper[4897]: I0228 14:51:51.426496 4897 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="191af18f-b904-42c5-b5c7-601a1cbdbebf" containerName="oc" Feb 28 14:51:51 crc kubenswrapper[4897]: I0228 14:51:51.427897 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c4tj5" Feb 28 14:51:51 crc kubenswrapper[4897]: I0228 14:51:51.465455 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c4tj5"] Feb 28 14:51:51 crc kubenswrapper[4897]: I0228 14:51:51.616367 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25b84bd-9659-4862-9f7f-3b0d9b738afc-catalog-content\") pod \"redhat-marketplace-c4tj5\" (UID: \"a25b84bd-9659-4862-9f7f-3b0d9b738afc\") " pod="openshift-marketplace/redhat-marketplace-c4tj5" Feb 28 14:51:51 crc kubenswrapper[4897]: I0228 14:51:51.616561 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w5nn\" (UniqueName: \"kubernetes.io/projected/a25b84bd-9659-4862-9f7f-3b0d9b738afc-kube-api-access-5w5nn\") pod \"redhat-marketplace-c4tj5\" (UID: \"a25b84bd-9659-4862-9f7f-3b0d9b738afc\") " pod="openshift-marketplace/redhat-marketplace-c4tj5" Feb 28 14:51:51 crc kubenswrapper[4897]: I0228 14:51:51.616653 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25b84bd-9659-4862-9f7f-3b0d9b738afc-utilities\") pod \"redhat-marketplace-c4tj5\" (UID: \"a25b84bd-9659-4862-9f7f-3b0d9b738afc\") " pod="openshift-marketplace/redhat-marketplace-c4tj5" Feb 28 14:51:51 crc kubenswrapper[4897]: I0228 14:51:51.718624 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w5nn\" (UniqueName: \"kubernetes.io/projected/a25b84bd-9659-4862-9f7f-3b0d9b738afc-kube-api-access-5w5nn\") pod 
\"redhat-marketplace-c4tj5\" (UID: \"a25b84bd-9659-4862-9f7f-3b0d9b738afc\") " pod="openshift-marketplace/redhat-marketplace-c4tj5" Feb 28 14:51:51 crc kubenswrapper[4897]: I0228 14:51:51.718695 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25b84bd-9659-4862-9f7f-3b0d9b738afc-utilities\") pod \"redhat-marketplace-c4tj5\" (UID: \"a25b84bd-9659-4862-9f7f-3b0d9b738afc\") " pod="openshift-marketplace/redhat-marketplace-c4tj5" Feb 28 14:51:51 crc kubenswrapper[4897]: I0228 14:51:51.718790 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25b84bd-9659-4862-9f7f-3b0d9b738afc-catalog-content\") pod \"redhat-marketplace-c4tj5\" (UID: \"a25b84bd-9659-4862-9f7f-3b0d9b738afc\") " pod="openshift-marketplace/redhat-marketplace-c4tj5" Feb 28 14:51:51 crc kubenswrapper[4897]: I0228 14:51:51.719430 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25b84bd-9659-4862-9f7f-3b0d9b738afc-utilities\") pod \"redhat-marketplace-c4tj5\" (UID: \"a25b84bd-9659-4862-9f7f-3b0d9b738afc\") " pod="openshift-marketplace/redhat-marketplace-c4tj5" Feb 28 14:51:51 crc kubenswrapper[4897]: I0228 14:51:51.719467 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25b84bd-9659-4862-9f7f-3b0d9b738afc-catalog-content\") pod \"redhat-marketplace-c4tj5\" (UID: \"a25b84bd-9659-4862-9f7f-3b0d9b738afc\") " pod="openshift-marketplace/redhat-marketplace-c4tj5" Feb 28 14:51:51 crc kubenswrapper[4897]: I0228 14:51:51.742650 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w5nn\" (UniqueName: \"kubernetes.io/projected/a25b84bd-9659-4862-9f7f-3b0d9b738afc-kube-api-access-5w5nn\") pod \"redhat-marketplace-c4tj5\" (UID: 
\"a25b84bd-9659-4862-9f7f-3b0d9b738afc\") " pod="openshift-marketplace/redhat-marketplace-c4tj5" Feb 28 14:51:51 crc kubenswrapper[4897]: I0228 14:51:51.763318 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c4tj5" Feb 28 14:51:52 crc kubenswrapper[4897]: I0228 14:51:52.450382 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c4tj5"] Feb 28 14:51:52 crc kubenswrapper[4897]: I0228 14:51:52.933573 4897 generic.go:334] "Generic (PLEG): container finished" podID="a25b84bd-9659-4862-9f7f-3b0d9b738afc" containerID="e9bc5d6007b830f790c8478078f7dbe08f74a86b6ec71f1b2aad3de9b8b87076" exitCode=0 Feb 28 14:51:52 crc kubenswrapper[4897]: I0228 14:51:52.933642 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c4tj5" event={"ID":"a25b84bd-9659-4862-9f7f-3b0d9b738afc","Type":"ContainerDied","Data":"e9bc5d6007b830f790c8478078f7dbe08f74a86b6ec71f1b2aad3de9b8b87076"} Feb 28 14:51:52 crc kubenswrapper[4897]: I0228 14:51:52.934020 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c4tj5" event={"ID":"a25b84bd-9659-4862-9f7f-3b0d9b738afc","Type":"ContainerStarted","Data":"8654a30baedd4785018795d0022a4f6bbec169872feba8ce62932eaed12bf92b"} Feb 28 14:51:54 crc kubenswrapper[4897]: I0228 14:51:54.959869 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c4tj5" event={"ID":"a25b84bd-9659-4862-9f7f-3b0d9b738afc","Type":"ContainerStarted","Data":"61b37a160ffcd68e9a27687d40d09d27baeac2f6d574328fb70a9f74036329c6"} Feb 28 14:51:55 crc kubenswrapper[4897]: I0228 14:51:55.976575 4897 generic.go:334] "Generic (PLEG): container finished" podID="a25b84bd-9659-4862-9f7f-3b0d9b738afc" containerID="61b37a160ffcd68e9a27687d40d09d27baeac2f6d574328fb70a9f74036329c6" exitCode=0 Feb 28 14:51:55 crc kubenswrapper[4897]: I0228 
14:51:55.976637 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c4tj5" event={"ID":"a25b84bd-9659-4862-9f7f-3b0d9b738afc","Type":"ContainerDied","Data":"61b37a160ffcd68e9a27687d40d09d27baeac2f6d574328fb70a9f74036329c6"} Feb 28 14:51:56 crc kubenswrapper[4897]: I0228 14:51:56.992027 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c4tj5" event={"ID":"a25b84bd-9659-4862-9f7f-3b0d9b738afc","Type":"ContainerStarted","Data":"ae44d79dfd1d9cb8c3df7499907778d49ee8c04f348b5e5d9b5c2efcc639d49c"} Feb 28 14:51:57 crc kubenswrapper[4897]: I0228 14:51:57.035934 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c4tj5" podStartSLOduration=2.539786159 podStartE2EDuration="6.035899127s" podCreationTimestamp="2026-02-28 14:51:51 +0000 UTC" firstStartedPulling="2026-02-28 14:51:52.936449578 +0000 UTC m=+5727.178770255" lastFinishedPulling="2026-02-28 14:51:56.432562536 +0000 UTC m=+5730.674883223" observedRunningTime="2026-02-28 14:51:57.014228014 +0000 UTC m=+5731.256548721" watchObservedRunningTime="2026-02-28 14:51:57.035899127 +0000 UTC m=+5731.278219814" Feb 28 14:52:00 crc kubenswrapper[4897]: I0228 14:52:00.170907 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538172-vlj6b"] Feb 28 14:52:00 crc kubenswrapper[4897]: I0228 14:52:00.173296 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538172-vlj6b" Feb 28 14:52:00 crc kubenswrapper[4897]: I0228 14:52:00.175263 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 14:52:00 crc kubenswrapper[4897]: I0228 14:52:00.177524 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 14:52:00 crc kubenswrapper[4897]: I0228 14:52:00.177876 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 14:52:00 crc kubenswrapper[4897]: I0228 14:52:00.186780 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538172-vlj6b"] Feb 28 14:52:00 crc kubenswrapper[4897]: I0228 14:52:00.352032 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgmfj\" (UniqueName: \"kubernetes.io/projected/49c1940a-ce44-4706-bcff-213ac2986225-kube-api-access-rgmfj\") pod \"auto-csr-approver-29538172-vlj6b\" (UID: \"49c1940a-ce44-4706-bcff-213ac2986225\") " pod="openshift-infra/auto-csr-approver-29538172-vlj6b" Feb 28 14:52:00 crc kubenswrapper[4897]: I0228 14:52:00.454787 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgmfj\" (UniqueName: \"kubernetes.io/projected/49c1940a-ce44-4706-bcff-213ac2986225-kube-api-access-rgmfj\") pod \"auto-csr-approver-29538172-vlj6b\" (UID: \"49c1940a-ce44-4706-bcff-213ac2986225\") " pod="openshift-infra/auto-csr-approver-29538172-vlj6b" Feb 28 14:52:00 crc kubenswrapper[4897]: I0228 14:52:00.477288 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgmfj\" (UniqueName: \"kubernetes.io/projected/49c1940a-ce44-4706-bcff-213ac2986225-kube-api-access-rgmfj\") pod \"auto-csr-approver-29538172-vlj6b\" (UID: \"49c1940a-ce44-4706-bcff-213ac2986225\") " 
pod="openshift-infra/auto-csr-approver-29538172-vlj6b" Feb 28 14:52:00 crc kubenswrapper[4897]: I0228 14:52:00.494742 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538172-vlj6b" Feb 28 14:52:00 crc kubenswrapper[4897]: I0228 14:52:00.999390 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538172-vlj6b"] Feb 28 14:52:01 crc kubenswrapper[4897]: I0228 14:52:01.031256 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538172-vlj6b" event={"ID":"49c1940a-ce44-4706-bcff-213ac2986225","Type":"ContainerStarted","Data":"4b0fa721da2286dcc20fdea4eb9dd6f05b0198a1d269697cd818ee9d3c5912de"} Feb 28 14:52:01 crc kubenswrapper[4897]: I0228 14:52:01.763559 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-c4tj5" Feb 28 14:52:01 crc kubenswrapper[4897]: I0228 14:52:01.763874 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c4tj5" Feb 28 14:52:01 crc kubenswrapper[4897]: I0228 14:52:01.838945 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c4tj5" Feb 28 14:52:01 crc kubenswrapper[4897]: E0228 14:52:01.988331 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 28 14:52:01 crc kubenswrapper[4897]: E0228 14:52:01.988538 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 14:52:01 crc kubenswrapper[4897]: container 
&Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 28 14:52:01 crc kubenswrapper[4897]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rgmfj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29538172-vlj6b_openshift-infra(49c1940a-ce44-4706-bcff-213ac2986225): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 28 14:52:01 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 14:52:01 crc kubenswrapper[4897]: E0228 14:52:01.989803 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29538172-vlj6b" 
podUID="49c1940a-ce44-4706-bcff-213ac2986225" Feb 28 14:52:02 crc kubenswrapper[4897]: E0228 14:52:02.048010 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538172-vlj6b" podUID="49c1940a-ce44-4706-bcff-213ac2986225" Feb 28 14:52:02 crc kubenswrapper[4897]: I0228 14:52:02.140559 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c4tj5" Feb 28 14:52:03 crc kubenswrapper[4897]: I0228 14:52:03.370933 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 14:52:03 crc kubenswrapper[4897]: I0228 14:52:03.371317 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 14:52:03 crc kubenswrapper[4897]: I0228 14:52:03.371363 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brq22" Feb 28 14:52:03 crc kubenswrapper[4897]: I0228 14:52:03.372139 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eeaafc379056961bcfeca41078c7ab438aa415eb103c6d894f500936af0c4d8e"} pod="openshift-machine-config-operator/machine-config-daemon-brq22" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 
14:52:03 crc kubenswrapper[4897]: I0228 14:52:03.372199 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" containerID="cri-o://eeaafc379056961bcfeca41078c7ab438aa415eb103c6d894f500936af0c4d8e" gracePeriod=600 Feb 28 14:52:03 crc kubenswrapper[4897]: E0228 14:52:03.500777 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:52:04 crc kubenswrapper[4897]: I0228 14:52:04.074220 4897 generic.go:334] "Generic (PLEG): container finished" podID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerID="eeaafc379056961bcfeca41078c7ab438aa415eb103c6d894f500936af0c4d8e" exitCode=0 Feb 28 14:52:04 crc kubenswrapper[4897]: I0228 14:52:04.074284 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerDied","Data":"eeaafc379056961bcfeca41078c7ab438aa415eb103c6d894f500936af0c4d8e"} Feb 28 14:52:04 crc kubenswrapper[4897]: I0228 14:52:04.074382 4897 scope.go:117] "RemoveContainer" containerID="9d4bb4972da82c5eea4e11899b0e2591e599978f5150be5eb00bc3577100eafb" Feb 28 14:52:04 crc kubenswrapper[4897]: I0228 14:52:04.075276 4897 scope.go:117] "RemoveContainer" containerID="eeaafc379056961bcfeca41078c7ab438aa415eb103c6d894f500936af0c4d8e" Feb 28 14:52:04 crc kubenswrapper[4897]: E0228 14:52:04.075794 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:52:05 crc kubenswrapper[4897]: I0228 14:52:05.414084 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c4tj5"] Feb 28 14:52:05 crc kubenswrapper[4897]: I0228 14:52:05.414732 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-c4tj5" podUID="a25b84bd-9659-4862-9f7f-3b0d9b738afc" containerName="registry-server" containerID="cri-o://ae44d79dfd1d9cb8c3df7499907778d49ee8c04f348b5e5d9b5c2efcc639d49c" gracePeriod=2 Feb 28 14:52:05 crc kubenswrapper[4897]: I0228 14:52:05.945350 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c4tj5" Feb 28 14:52:06 crc kubenswrapper[4897]: I0228 14:52:06.093485 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25b84bd-9659-4862-9f7f-3b0d9b738afc-catalog-content\") pod \"a25b84bd-9659-4862-9f7f-3b0d9b738afc\" (UID: \"a25b84bd-9659-4862-9f7f-3b0d9b738afc\") " Feb 28 14:52:06 crc kubenswrapper[4897]: I0228 14:52:06.093724 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25b84bd-9659-4862-9f7f-3b0d9b738afc-utilities\") pod \"a25b84bd-9659-4862-9f7f-3b0d9b738afc\" (UID: \"a25b84bd-9659-4862-9f7f-3b0d9b738afc\") " Feb 28 14:52:06 crc kubenswrapper[4897]: I0228 14:52:06.093916 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5w5nn\" (UniqueName: 
\"kubernetes.io/projected/a25b84bd-9659-4862-9f7f-3b0d9b738afc-kube-api-access-5w5nn\") pod \"a25b84bd-9659-4862-9f7f-3b0d9b738afc\" (UID: \"a25b84bd-9659-4862-9f7f-3b0d9b738afc\") " Feb 28 14:52:06 crc kubenswrapper[4897]: I0228 14:52:06.095685 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a25b84bd-9659-4862-9f7f-3b0d9b738afc-utilities" (OuterVolumeSpecName: "utilities") pod "a25b84bd-9659-4862-9f7f-3b0d9b738afc" (UID: "a25b84bd-9659-4862-9f7f-3b0d9b738afc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:52:06 crc kubenswrapper[4897]: I0228 14:52:06.100304 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a25b84bd-9659-4862-9f7f-3b0d9b738afc-kube-api-access-5w5nn" (OuterVolumeSpecName: "kube-api-access-5w5nn") pod "a25b84bd-9659-4862-9f7f-3b0d9b738afc" (UID: "a25b84bd-9659-4862-9f7f-3b0d9b738afc"). InnerVolumeSpecName "kube-api-access-5w5nn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:52:06 crc kubenswrapper[4897]: I0228 14:52:06.106206 4897 generic.go:334] "Generic (PLEG): container finished" podID="a25b84bd-9659-4862-9f7f-3b0d9b738afc" containerID="ae44d79dfd1d9cb8c3df7499907778d49ee8c04f348b5e5d9b5c2efcc639d49c" exitCode=0 Feb 28 14:52:06 crc kubenswrapper[4897]: I0228 14:52:06.106244 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c4tj5" event={"ID":"a25b84bd-9659-4862-9f7f-3b0d9b738afc","Type":"ContainerDied","Data":"ae44d79dfd1d9cb8c3df7499907778d49ee8c04f348b5e5d9b5c2efcc639d49c"} Feb 28 14:52:06 crc kubenswrapper[4897]: I0228 14:52:06.106271 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c4tj5" event={"ID":"a25b84bd-9659-4862-9f7f-3b0d9b738afc","Type":"ContainerDied","Data":"8654a30baedd4785018795d0022a4f6bbec169872feba8ce62932eaed12bf92b"} Feb 28 14:52:06 crc kubenswrapper[4897]: I0228 14:52:06.106288 4897 scope.go:117] "RemoveContainer" containerID="ae44d79dfd1d9cb8c3df7499907778d49ee8c04f348b5e5d9b5c2efcc639d49c" Feb 28 14:52:06 crc kubenswrapper[4897]: I0228 14:52:06.106406 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c4tj5" Feb 28 14:52:06 crc kubenswrapper[4897]: I0228 14:52:06.136349 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a25b84bd-9659-4862-9f7f-3b0d9b738afc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a25b84bd-9659-4862-9f7f-3b0d9b738afc" (UID: "a25b84bd-9659-4862-9f7f-3b0d9b738afc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:52:06 crc kubenswrapper[4897]: I0228 14:52:06.163177 4897 scope.go:117] "RemoveContainer" containerID="61b37a160ffcd68e9a27687d40d09d27baeac2f6d574328fb70a9f74036329c6" Feb 28 14:52:06 crc kubenswrapper[4897]: I0228 14:52:06.196920 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5w5nn\" (UniqueName: \"kubernetes.io/projected/a25b84bd-9659-4862-9f7f-3b0d9b738afc-kube-api-access-5w5nn\") on node \"crc\" DevicePath \"\"" Feb 28 14:52:06 crc kubenswrapper[4897]: I0228 14:52:06.196958 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25b84bd-9659-4862-9f7f-3b0d9b738afc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 14:52:06 crc kubenswrapper[4897]: I0228 14:52:06.196971 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25b84bd-9659-4862-9f7f-3b0d9b738afc-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 14:52:06 crc kubenswrapper[4897]: I0228 14:52:06.206109 4897 scope.go:117] "RemoveContainer" containerID="e9bc5d6007b830f790c8478078f7dbe08f74a86b6ec71f1b2aad3de9b8b87076" Feb 28 14:52:06 crc kubenswrapper[4897]: I0228 14:52:06.236510 4897 scope.go:117] "RemoveContainer" containerID="ae44d79dfd1d9cb8c3df7499907778d49ee8c04f348b5e5d9b5c2efcc639d49c" Feb 28 14:52:06 crc kubenswrapper[4897]: E0228 14:52:06.238064 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae44d79dfd1d9cb8c3df7499907778d49ee8c04f348b5e5d9b5c2efcc639d49c\": container with ID starting with ae44d79dfd1d9cb8c3df7499907778d49ee8c04f348b5e5d9b5c2efcc639d49c not found: ID does not exist" containerID="ae44d79dfd1d9cb8c3df7499907778d49ee8c04f348b5e5d9b5c2efcc639d49c" Feb 28 14:52:06 crc kubenswrapper[4897]: I0228 14:52:06.238114 4897 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ae44d79dfd1d9cb8c3df7499907778d49ee8c04f348b5e5d9b5c2efcc639d49c"} err="failed to get container status \"ae44d79dfd1d9cb8c3df7499907778d49ee8c04f348b5e5d9b5c2efcc639d49c\": rpc error: code = NotFound desc = could not find container \"ae44d79dfd1d9cb8c3df7499907778d49ee8c04f348b5e5d9b5c2efcc639d49c\": container with ID starting with ae44d79dfd1d9cb8c3df7499907778d49ee8c04f348b5e5d9b5c2efcc639d49c not found: ID does not exist" Feb 28 14:52:06 crc kubenswrapper[4897]: I0228 14:52:06.239061 4897 scope.go:117] "RemoveContainer" containerID="61b37a160ffcd68e9a27687d40d09d27baeac2f6d574328fb70a9f74036329c6" Feb 28 14:52:06 crc kubenswrapper[4897]: E0228 14:52:06.239797 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61b37a160ffcd68e9a27687d40d09d27baeac2f6d574328fb70a9f74036329c6\": container with ID starting with 61b37a160ffcd68e9a27687d40d09d27baeac2f6d574328fb70a9f74036329c6 not found: ID does not exist" containerID="61b37a160ffcd68e9a27687d40d09d27baeac2f6d574328fb70a9f74036329c6" Feb 28 14:52:06 crc kubenswrapper[4897]: I0228 14:52:06.239871 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61b37a160ffcd68e9a27687d40d09d27baeac2f6d574328fb70a9f74036329c6"} err="failed to get container status \"61b37a160ffcd68e9a27687d40d09d27baeac2f6d574328fb70a9f74036329c6\": rpc error: code = NotFound desc = could not find container \"61b37a160ffcd68e9a27687d40d09d27baeac2f6d574328fb70a9f74036329c6\": container with ID starting with 61b37a160ffcd68e9a27687d40d09d27baeac2f6d574328fb70a9f74036329c6 not found: ID does not exist" Feb 28 14:52:06 crc kubenswrapper[4897]: I0228 14:52:06.239899 4897 scope.go:117] "RemoveContainer" containerID="e9bc5d6007b830f790c8478078f7dbe08f74a86b6ec71f1b2aad3de9b8b87076" Feb 28 14:52:06 crc kubenswrapper[4897]: E0228 14:52:06.240927 4897 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e9bc5d6007b830f790c8478078f7dbe08f74a86b6ec71f1b2aad3de9b8b87076\": container with ID starting with e9bc5d6007b830f790c8478078f7dbe08f74a86b6ec71f1b2aad3de9b8b87076 not found: ID does not exist" containerID="e9bc5d6007b830f790c8478078f7dbe08f74a86b6ec71f1b2aad3de9b8b87076" Feb 28 14:52:06 crc kubenswrapper[4897]: I0228 14:52:06.240959 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9bc5d6007b830f790c8478078f7dbe08f74a86b6ec71f1b2aad3de9b8b87076"} err="failed to get container status \"e9bc5d6007b830f790c8478078f7dbe08f74a86b6ec71f1b2aad3de9b8b87076\": rpc error: code = NotFound desc = could not find container \"e9bc5d6007b830f790c8478078f7dbe08f74a86b6ec71f1b2aad3de9b8b87076\": container with ID starting with e9bc5d6007b830f790c8478078f7dbe08f74a86b6ec71f1b2aad3de9b8b87076 not found: ID does not exist" Feb 28 14:52:06 crc kubenswrapper[4897]: I0228 14:52:06.446575 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c4tj5"] Feb 28 14:52:06 crc kubenswrapper[4897]: I0228 14:52:06.489599 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-c4tj5"] Feb 28 14:52:08 crc kubenswrapper[4897]: I0228 14:52:08.481185 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a25b84bd-9659-4862-9f7f-3b0d9b738afc" path="/var/lib/kubelet/pods/a25b84bd-9659-4862-9f7f-3b0d9b738afc/volumes" Feb 28 14:52:17 crc kubenswrapper[4897]: I0228 14:52:17.233210 4897 generic.go:334] "Generic (PLEG): container finished" podID="49c1940a-ce44-4706-bcff-213ac2986225" containerID="3b408f58627fc54ec009a9e35c89c2146358aee8d804310cab290d4183f91e93" exitCode=0 Feb 28 14:52:17 crc kubenswrapper[4897]: I0228 14:52:17.233327 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538172-vlj6b" 
event={"ID":"49c1940a-ce44-4706-bcff-213ac2986225","Type":"ContainerDied","Data":"3b408f58627fc54ec009a9e35c89c2146358aee8d804310cab290d4183f91e93"} Feb 28 14:52:17 crc kubenswrapper[4897]: I0228 14:52:17.457480 4897 scope.go:117] "RemoveContainer" containerID="eeaafc379056961bcfeca41078c7ab438aa415eb103c6d894f500936af0c4d8e" Feb 28 14:52:17 crc kubenswrapper[4897]: E0228 14:52:17.457747 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:52:18 crc kubenswrapper[4897]: I0228 14:52:18.674617 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538172-vlj6b" Feb 28 14:52:18 crc kubenswrapper[4897]: I0228 14:52:18.825391 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgmfj\" (UniqueName: \"kubernetes.io/projected/49c1940a-ce44-4706-bcff-213ac2986225-kube-api-access-rgmfj\") pod \"49c1940a-ce44-4706-bcff-213ac2986225\" (UID: \"49c1940a-ce44-4706-bcff-213ac2986225\") " Feb 28 14:52:18 crc kubenswrapper[4897]: I0228 14:52:18.835422 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c1940a-ce44-4706-bcff-213ac2986225-kube-api-access-rgmfj" (OuterVolumeSpecName: "kube-api-access-rgmfj") pod "49c1940a-ce44-4706-bcff-213ac2986225" (UID: "49c1940a-ce44-4706-bcff-213ac2986225"). InnerVolumeSpecName "kube-api-access-rgmfj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:52:18 crc kubenswrapper[4897]: I0228 14:52:18.927907 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgmfj\" (UniqueName: \"kubernetes.io/projected/49c1940a-ce44-4706-bcff-213ac2986225-kube-api-access-rgmfj\") on node \"crc\" DevicePath \"\"" Feb 28 14:52:19 crc kubenswrapper[4897]: I0228 14:52:19.253744 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538172-vlj6b" event={"ID":"49c1940a-ce44-4706-bcff-213ac2986225","Type":"ContainerDied","Data":"4b0fa721da2286dcc20fdea4eb9dd6f05b0198a1d269697cd818ee9d3c5912de"} Feb 28 14:52:19 crc kubenswrapper[4897]: I0228 14:52:19.254348 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b0fa721da2286dcc20fdea4eb9dd6f05b0198a1d269697cd818ee9d3c5912de" Feb 28 14:52:19 crc kubenswrapper[4897]: I0228 14:52:19.253802 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538172-vlj6b" Feb 28 14:52:19 crc kubenswrapper[4897]: I0228 14:52:19.742978 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538166-qrfjl"] Feb 28 14:52:19 crc kubenswrapper[4897]: I0228 14:52:19.751109 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538166-qrfjl"] Feb 28 14:52:20 crc kubenswrapper[4897]: I0228 14:52:20.495199 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="397505ea-12f0-4055-8f18-72e80a5a6323" path="/var/lib/kubelet/pods/397505ea-12f0-4055-8f18-72e80a5a6323/volumes" Feb 28 14:52:32 crc kubenswrapper[4897]: I0228 14:52:32.457433 4897 scope.go:117] "RemoveContainer" containerID="eeaafc379056961bcfeca41078c7ab438aa415eb103c6d894f500936af0c4d8e" Feb 28 14:52:32 crc kubenswrapper[4897]: E0228 14:52:32.458418 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:52:47 crc kubenswrapper[4897]: I0228 14:52:47.456912 4897 scope.go:117] "RemoveContainer" containerID="eeaafc379056961bcfeca41078c7ab438aa415eb103c6d894f500936af0c4d8e" Feb 28 14:52:47 crc kubenswrapper[4897]: E0228 14:52:47.458395 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:52:52 crc kubenswrapper[4897]: I0228 14:52:52.228051 4897 scope.go:117] "RemoveContainer" containerID="2e876a78a2b736872a419cb3333c1d0970b237fd90c8e825888b9673b0b84e6a" Feb 28 14:52:53 crc kubenswrapper[4897]: I0228 14:52:53.247205 4897 scope.go:117] "RemoveContainer" containerID="7947de86b70025cbb2904aced6be4d2a5bf0d10a0d9166b80556d51dd72736d8" Feb 28 14:53:01 crc kubenswrapper[4897]: I0228 14:53:01.456791 4897 scope.go:117] "RemoveContainer" containerID="eeaafc379056961bcfeca41078c7ab438aa415eb103c6d894f500936af0c4d8e" Feb 28 14:53:01 crc kubenswrapper[4897]: E0228 14:53:01.458001 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:53:16 crc kubenswrapper[4897]: I0228 14:53:16.470251 4897 scope.go:117] "RemoveContainer" containerID="eeaafc379056961bcfeca41078c7ab438aa415eb103c6d894f500936af0c4d8e" Feb 28 14:53:16 crc kubenswrapper[4897]: E0228 14:53:16.471499 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:53:30 crc kubenswrapper[4897]: I0228 14:53:30.456263 4897 scope.go:117] "RemoveContainer" containerID="eeaafc379056961bcfeca41078c7ab438aa415eb103c6d894f500936af0c4d8e" Feb 28 14:53:30 crc kubenswrapper[4897]: E0228 14:53:30.457089 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:53:30 crc kubenswrapper[4897]: I0228 14:53:30.483450 4897 generic.go:334] "Generic (PLEG): container finished" podID="49f3154b-02e1-4da4-a498-58e7280a8a64" containerID="4d12dbf6d72f4df26c5b26963b5aea69bfa544ba18bb59a6296a21341be84847" exitCode=0 Feb 28 14:53:30 crc kubenswrapper[4897]: I0228 14:53:30.483503 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" 
event={"ID":"49f3154b-02e1-4da4-a498-58e7280a8a64","Type":"ContainerDied","Data":"4d12dbf6d72f4df26c5b26963b5aea69bfa544ba18bb59a6296a21341be84847"} Feb 28 14:53:31 crc kubenswrapper[4897]: I0228 14:53:31.941958 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 28 14:53:32 crc kubenswrapper[4897]: I0228 14:53:32.140216 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t99ks\" (UniqueName: \"kubernetes.io/projected/49f3154b-02e1-4da4-a498-58e7280a8a64-kube-api-access-t99ks\") pod \"49f3154b-02e1-4da4-a498-58e7280a8a64\" (UID: \"49f3154b-02e1-4da4-a498-58e7280a8a64\") " Feb 28 14:53:32 crc kubenswrapper[4897]: I0228 14:53:32.140388 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/49f3154b-02e1-4da4-a498-58e7280a8a64-openstack-config-secret\") pod \"49f3154b-02e1-4da4-a498-58e7280a8a64\" (UID: \"49f3154b-02e1-4da4-a498-58e7280a8a64\") " Feb 28 14:53:32 crc kubenswrapper[4897]: I0228 14:53:32.140470 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/49f3154b-02e1-4da4-a498-58e7280a8a64-test-operator-ephemeral-workdir\") pod \"49f3154b-02e1-4da4-a498-58e7280a8a64\" (UID: \"49f3154b-02e1-4da4-a498-58e7280a8a64\") " Feb 28 14:53:32 crc kubenswrapper[4897]: I0228 14:53:32.140503 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"49f3154b-02e1-4da4-a498-58e7280a8a64\" (UID: \"49f3154b-02e1-4da4-a498-58e7280a8a64\") " Feb 28 14:53:32 crc kubenswrapper[4897]: I0228 14:53:32.140569 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/49f3154b-02e1-4da4-a498-58e7280a8a64-openstack-config\") pod \"49f3154b-02e1-4da4-a498-58e7280a8a64\" (UID: \"49f3154b-02e1-4da4-a498-58e7280a8a64\") " Feb 28 14:53:32 crc kubenswrapper[4897]: I0228 14:53:32.140605 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/49f3154b-02e1-4da4-a498-58e7280a8a64-test-operator-ephemeral-temporary\") pod \"49f3154b-02e1-4da4-a498-58e7280a8a64\" (UID: \"49f3154b-02e1-4da4-a498-58e7280a8a64\") " Feb 28 14:53:32 crc kubenswrapper[4897]: I0228 14:53:32.140649 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/49f3154b-02e1-4da4-a498-58e7280a8a64-ca-certs\") pod \"49f3154b-02e1-4da4-a498-58e7280a8a64\" (UID: \"49f3154b-02e1-4da4-a498-58e7280a8a64\") " Feb 28 14:53:32 crc kubenswrapper[4897]: I0228 14:53:32.140686 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/49f3154b-02e1-4da4-a498-58e7280a8a64-config-data\") pod \"49f3154b-02e1-4da4-a498-58e7280a8a64\" (UID: \"49f3154b-02e1-4da4-a498-58e7280a8a64\") " Feb 28 14:53:32 crc kubenswrapper[4897]: I0228 14:53:32.140845 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/49f3154b-02e1-4da4-a498-58e7280a8a64-ssh-key\") pod \"49f3154b-02e1-4da4-a498-58e7280a8a64\" (UID: \"49f3154b-02e1-4da4-a498-58e7280a8a64\") " Feb 28 14:53:32 crc kubenswrapper[4897]: I0228 14:53:32.141059 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49f3154b-02e1-4da4-a498-58e7280a8a64-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "49f3154b-02e1-4da4-a498-58e7280a8a64" (UID: "49f3154b-02e1-4da4-a498-58e7280a8a64"). 
InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:53:32 crc kubenswrapper[4897]: I0228 14:53:32.141524 4897 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/49f3154b-02e1-4da4-a498-58e7280a8a64-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 28 14:53:32 crc kubenswrapper[4897]: I0228 14:53:32.141722 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49f3154b-02e1-4da4-a498-58e7280a8a64-config-data" (OuterVolumeSpecName: "config-data") pod "49f3154b-02e1-4da4-a498-58e7280a8a64" (UID: "49f3154b-02e1-4da4-a498-58e7280a8a64"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 14:53:32 crc kubenswrapper[4897]: I0228 14:53:32.146614 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49f3154b-02e1-4da4-a498-58e7280a8a64-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "49f3154b-02e1-4da4-a498-58e7280a8a64" (UID: "49f3154b-02e1-4da4-a498-58e7280a8a64"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:53:32 crc kubenswrapper[4897]: I0228 14:53:32.155547 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "test-operator-logs") pod "49f3154b-02e1-4da4-a498-58e7280a8a64" (UID: "49f3154b-02e1-4da4-a498-58e7280a8a64"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 28 14:53:32 crc kubenswrapper[4897]: I0228 14:53:32.161249 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49f3154b-02e1-4da4-a498-58e7280a8a64-kube-api-access-t99ks" (OuterVolumeSpecName: "kube-api-access-t99ks") pod "49f3154b-02e1-4da4-a498-58e7280a8a64" (UID: "49f3154b-02e1-4da4-a498-58e7280a8a64"). InnerVolumeSpecName "kube-api-access-t99ks". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:53:32 crc kubenswrapper[4897]: I0228 14:53:32.174819 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49f3154b-02e1-4da4-a498-58e7280a8a64-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "49f3154b-02e1-4da4-a498-58e7280a8a64" (UID: "49f3154b-02e1-4da4-a498-58e7280a8a64"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 14:53:32 crc kubenswrapper[4897]: I0228 14:53:32.176609 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49f3154b-02e1-4da4-a498-58e7280a8a64-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "49f3154b-02e1-4da4-a498-58e7280a8a64" (UID: "49f3154b-02e1-4da4-a498-58e7280a8a64"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 14:53:32 crc kubenswrapper[4897]: I0228 14:53:32.191024 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49f3154b-02e1-4da4-a498-58e7280a8a64-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "49f3154b-02e1-4da4-a498-58e7280a8a64" (UID: "49f3154b-02e1-4da4-a498-58e7280a8a64"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 14:53:32 crc kubenswrapper[4897]: I0228 14:53:32.227444 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49f3154b-02e1-4da4-a498-58e7280a8a64-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "49f3154b-02e1-4da4-a498-58e7280a8a64" (UID: "49f3154b-02e1-4da4-a498-58e7280a8a64"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 14:53:32 crc kubenswrapper[4897]: I0228 14:53:32.244730 4897 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/49f3154b-02e1-4da4-a498-58e7280a8a64-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 28 14:53:32 crc kubenswrapper[4897]: I0228 14:53:32.244848 4897 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 28 14:53:32 crc kubenswrapper[4897]: I0228 14:53:32.244867 4897 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/49f3154b-02e1-4da4-a498-58e7280a8a64-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 28 14:53:32 crc kubenswrapper[4897]: I0228 14:53:32.244878 4897 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/49f3154b-02e1-4da4-a498-58e7280a8a64-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 28 14:53:32 crc kubenswrapper[4897]: I0228 14:53:32.244905 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/49f3154b-02e1-4da4-a498-58e7280a8a64-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 14:53:32 crc kubenswrapper[4897]: I0228 14:53:32.244916 4897 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/49f3154b-02e1-4da4-a498-58e7280a8a64-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 28 14:53:32 crc kubenswrapper[4897]: I0228 14:53:32.244924 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t99ks\" (UniqueName: \"kubernetes.io/projected/49f3154b-02e1-4da4-a498-58e7280a8a64-kube-api-access-t99ks\") on node \"crc\" DevicePath \"\"" Feb 28 14:53:32 crc kubenswrapper[4897]: I0228 14:53:32.244933 4897 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/49f3154b-02e1-4da4-a498-58e7280a8a64-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 28 14:53:32 crc kubenswrapper[4897]: I0228 14:53:32.270595 4897 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 28 14:53:32 crc kubenswrapper[4897]: I0228 14:53:32.346448 4897 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 28 14:53:32 crc kubenswrapper[4897]: I0228 14:53:32.512350 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"49f3154b-02e1-4da4-a498-58e7280a8a64","Type":"ContainerDied","Data":"9daf828feb63a9aebe4e3c35bb09466bbd1ecb566d0f634928759f33c6d872ed"} Feb 28 14:53:32 crc kubenswrapper[4897]: I0228 14:53:32.512397 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9daf828feb63a9aebe4e3c35bb09466bbd1ecb566d0f634928759f33c6d872ed" Feb 28 14:53:32 crc kubenswrapper[4897]: I0228 14:53:32.512477 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 28 14:53:40 crc kubenswrapper[4897]: I0228 14:53:40.237229 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 28 14:53:40 crc kubenswrapper[4897]: E0228 14:53:40.238409 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49c1940a-ce44-4706-bcff-213ac2986225" containerName="oc" Feb 28 14:53:40 crc kubenswrapper[4897]: I0228 14:53:40.238424 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="49c1940a-ce44-4706-bcff-213ac2986225" containerName="oc" Feb 28 14:53:40 crc kubenswrapper[4897]: E0228 14:53:40.238448 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25b84bd-9659-4862-9f7f-3b0d9b738afc" containerName="extract-content" Feb 28 14:53:40 crc kubenswrapper[4897]: I0228 14:53:40.238459 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25b84bd-9659-4862-9f7f-3b0d9b738afc" containerName="extract-content" Feb 28 14:53:40 crc kubenswrapper[4897]: E0228 14:53:40.238478 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25b84bd-9659-4862-9f7f-3b0d9b738afc" containerName="registry-server" Feb 28 14:53:40 crc kubenswrapper[4897]: I0228 14:53:40.238487 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25b84bd-9659-4862-9f7f-3b0d9b738afc" containerName="registry-server" Feb 28 14:53:40 crc kubenswrapper[4897]: E0228 14:53:40.238500 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25b84bd-9659-4862-9f7f-3b0d9b738afc" containerName="extract-utilities" Feb 28 14:53:40 crc kubenswrapper[4897]: I0228 14:53:40.238508 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25b84bd-9659-4862-9f7f-3b0d9b738afc" containerName="extract-utilities" Feb 28 14:53:40 crc kubenswrapper[4897]: E0228 14:53:40.238543 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49f3154b-02e1-4da4-a498-58e7280a8a64" 
containerName="tempest-tests-tempest-tests-runner" Feb 28 14:53:40 crc kubenswrapper[4897]: I0228 14:53:40.238550 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="49f3154b-02e1-4da4-a498-58e7280a8a64" containerName="tempest-tests-tempest-tests-runner" Feb 28 14:53:40 crc kubenswrapper[4897]: I0228 14:53:40.238816 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="49f3154b-02e1-4da4-a498-58e7280a8a64" containerName="tempest-tests-tempest-tests-runner" Feb 28 14:53:40 crc kubenswrapper[4897]: I0228 14:53:40.238836 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="49c1940a-ce44-4706-bcff-213ac2986225" containerName="oc" Feb 28 14:53:40 crc kubenswrapper[4897]: I0228 14:53:40.238848 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="a25b84bd-9659-4862-9f7f-3b0d9b738afc" containerName="registry-server" Feb 28 14:53:40 crc kubenswrapper[4897]: I0228 14:53:40.239700 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 28 14:53:40 crc kubenswrapper[4897]: I0228 14:53:40.246929 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-9dtkj" Feb 28 14:53:40 crc kubenswrapper[4897]: I0228 14:53:40.252889 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 28 14:53:40 crc kubenswrapper[4897]: I0228 14:53:40.436653 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9b32d426-3313-4f78-9baa-92b8717b8d8e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 28 14:53:40 crc kubenswrapper[4897]: I0228 14:53:40.436738 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlrxd\" (UniqueName: \"kubernetes.io/projected/9b32d426-3313-4f78-9baa-92b8717b8d8e-kube-api-access-hlrxd\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9b32d426-3313-4f78-9baa-92b8717b8d8e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 28 14:53:40 crc kubenswrapper[4897]: I0228 14:53:40.539120 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlrxd\" (UniqueName: \"kubernetes.io/projected/9b32d426-3313-4f78-9baa-92b8717b8d8e-kube-api-access-hlrxd\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9b32d426-3313-4f78-9baa-92b8717b8d8e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 28 14:53:40 crc kubenswrapper[4897]: I0228 14:53:40.539464 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9b32d426-3313-4f78-9baa-92b8717b8d8e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 28 14:53:40 crc kubenswrapper[4897]: I0228 14:53:40.540145 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9b32d426-3313-4f78-9baa-92b8717b8d8e\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 28 14:53:40 crc kubenswrapper[4897]: I0228 14:53:40.563345 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlrxd\" (UniqueName: \"kubernetes.io/projected/9b32d426-3313-4f78-9baa-92b8717b8d8e-kube-api-access-hlrxd\") pod 
\"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9b32d426-3313-4f78-9baa-92b8717b8d8e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 28 14:53:40 crc kubenswrapper[4897]: I0228 14:53:40.619716 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9b32d426-3313-4f78-9baa-92b8717b8d8e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 28 14:53:40 crc kubenswrapper[4897]: I0228 14:53:40.880258 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 28 14:53:41 crc kubenswrapper[4897]: I0228 14:53:41.398108 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 28 14:53:41 crc kubenswrapper[4897]: I0228 14:53:41.611258 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"9b32d426-3313-4f78-9baa-92b8717b8d8e","Type":"ContainerStarted","Data":"656db7af6300a207c39f31eee2f35c9f87d718c968d1371cb6c75b913d58314d"} Feb 28 14:53:42 crc kubenswrapper[4897]: I0228 14:53:42.624903 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"9b32d426-3313-4f78-9baa-92b8717b8d8e","Type":"ContainerStarted","Data":"9abc4789c6aa8b4170d1c2fd9b4c994115968a56fa6591817c2926c9d472fdb6"} Feb 28 14:53:42 crc kubenswrapper[4897]: I0228 14:53:42.644986 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.84034115 podStartE2EDuration="2.644964232s" podCreationTimestamp="2026-02-28 14:53:40 +0000 UTC" 
firstStartedPulling="2026-02-28 14:53:41.409000054 +0000 UTC m=+5835.651320751" lastFinishedPulling="2026-02-28 14:53:42.213623166 +0000 UTC m=+5836.455943833" observedRunningTime="2026-02-28 14:53:42.640758353 +0000 UTC m=+5836.883079010" watchObservedRunningTime="2026-02-28 14:53:42.644964232 +0000 UTC m=+5836.887284919" Feb 28 14:53:45 crc kubenswrapper[4897]: I0228 14:53:45.457065 4897 scope.go:117] "RemoveContainer" containerID="eeaafc379056961bcfeca41078c7ab438aa415eb103c6d894f500936af0c4d8e" Feb 28 14:53:45 crc kubenswrapper[4897]: E0228 14:53:45.458147 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:53:53 crc kubenswrapper[4897]: I0228 14:53:53.375523 4897 scope.go:117] "RemoveContainer" containerID="9710def14961baa636de6fc3c0f973e69b6bfa8a8c5b9c5b350bb4c783c69d3b" Feb 28 14:53:53 crc kubenswrapper[4897]: I0228 14:53:53.412603 4897 scope.go:117] "RemoveContainer" containerID="81c1655ea3fbd6ef9443056e49d8da8190b1d967660c635f6b517a5a2dfb62d9" Feb 28 14:53:57 crc kubenswrapper[4897]: I0228 14:53:57.457664 4897 scope.go:117] "RemoveContainer" containerID="eeaafc379056961bcfeca41078c7ab438aa415eb103c6d894f500936af0c4d8e" Feb 28 14:53:57 crc kubenswrapper[4897]: E0228 14:53:57.458855 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" 
podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:54:00 crc kubenswrapper[4897]: I0228 14:54:00.173736 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538174-4vxkv"] Feb 28 14:54:00 crc kubenswrapper[4897]: I0228 14:54:00.176881 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538174-4vxkv" Feb 28 14:54:00 crc kubenswrapper[4897]: I0228 14:54:00.180614 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 14:54:00 crc kubenswrapper[4897]: I0228 14:54:00.180716 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 14:54:00 crc kubenswrapper[4897]: I0228 14:54:00.191854 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 14:54:00 crc kubenswrapper[4897]: I0228 14:54:00.201210 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538174-4vxkv"] Feb 28 14:54:00 crc kubenswrapper[4897]: I0228 14:54:00.317476 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbgnf\" (UniqueName: \"kubernetes.io/projected/bdecfdb7-616d-4c3a-8758-4ef539cb2db5-kube-api-access-wbgnf\") pod \"auto-csr-approver-29538174-4vxkv\" (UID: \"bdecfdb7-616d-4c3a-8758-4ef539cb2db5\") " pod="openshift-infra/auto-csr-approver-29538174-4vxkv" Feb 28 14:54:00 crc kubenswrapper[4897]: I0228 14:54:00.419561 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbgnf\" (UniqueName: \"kubernetes.io/projected/bdecfdb7-616d-4c3a-8758-4ef539cb2db5-kube-api-access-wbgnf\") pod \"auto-csr-approver-29538174-4vxkv\" (UID: \"bdecfdb7-616d-4c3a-8758-4ef539cb2db5\") " pod="openshift-infra/auto-csr-approver-29538174-4vxkv" Feb 28 14:54:00 crc 
kubenswrapper[4897]: I0228 14:54:00.441597 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbgnf\" (UniqueName: \"kubernetes.io/projected/bdecfdb7-616d-4c3a-8758-4ef539cb2db5-kube-api-access-wbgnf\") pod \"auto-csr-approver-29538174-4vxkv\" (UID: \"bdecfdb7-616d-4c3a-8758-4ef539cb2db5\") " pod="openshift-infra/auto-csr-approver-29538174-4vxkv" Feb 28 14:54:00 crc kubenswrapper[4897]: I0228 14:54:00.516449 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538174-4vxkv" Feb 28 14:54:01 crc kubenswrapper[4897]: I0228 14:54:01.032615 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538174-4vxkv"] Feb 28 14:54:01 crc kubenswrapper[4897]: W0228 14:54:01.038623 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdecfdb7_616d_4c3a_8758_4ef539cb2db5.slice/crio-235ed014de09760e526c03715968b6e53c773b40c874b65a410d388d4ed18801 WatchSource:0}: Error finding container 235ed014de09760e526c03715968b6e53c773b40c874b65a410d388d4ed18801: Status 404 returned error can't find the container with id 235ed014de09760e526c03715968b6e53c773b40c874b65a410d388d4ed18801 Feb 28 14:54:01 crc kubenswrapper[4897]: I0228 14:54:01.871095 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538174-4vxkv" event={"ID":"bdecfdb7-616d-4c3a-8758-4ef539cb2db5","Type":"ContainerStarted","Data":"235ed014de09760e526c03715968b6e53c773b40c874b65a410d388d4ed18801"} Feb 28 14:54:02 crc kubenswrapper[4897]: I0228 14:54:02.882462 4897 generic.go:334] "Generic (PLEG): container finished" podID="bdecfdb7-616d-4c3a-8758-4ef539cb2db5" containerID="5f8d52450f822770b15df8239e1fc2f1f0969ce877a18c31cbd32d12a368ea09" exitCode=0 Feb 28 14:54:02 crc kubenswrapper[4897]: I0228 14:54:02.882782 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29538174-4vxkv" event={"ID":"bdecfdb7-616d-4c3a-8758-4ef539cb2db5","Type":"ContainerDied","Data":"5f8d52450f822770b15df8239e1fc2f1f0969ce877a18c31cbd32d12a368ea09"} Feb 28 14:54:04 crc kubenswrapper[4897]: I0228 14:54:04.391413 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538174-4vxkv" Feb 28 14:54:04 crc kubenswrapper[4897]: I0228 14:54:04.518476 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbgnf\" (UniqueName: \"kubernetes.io/projected/bdecfdb7-616d-4c3a-8758-4ef539cb2db5-kube-api-access-wbgnf\") pod \"bdecfdb7-616d-4c3a-8758-4ef539cb2db5\" (UID: \"bdecfdb7-616d-4c3a-8758-4ef539cb2db5\") " Feb 28 14:54:04 crc kubenswrapper[4897]: I0228 14:54:04.526097 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdecfdb7-616d-4c3a-8758-4ef539cb2db5-kube-api-access-wbgnf" (OuterVolumeSpecName: "kube-api-access-wbgnf") pod "bdecfdb7-616d-4c3a-8758-4ef539cb2db5" (UID: "bdecfdb7-616d-4c3a-8758-4ef539cb2db5"). InnerVolumeSpecName "kube-api-access-wbgnf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:54:04 crc kubenswrapper[4897]: I0228 14:54:04.621063 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbgnf\" (UniqueName: \"kubernetes.io/projected/bdecfdb7-616d-4c3a-8758-4ef539cb2db5-kube-api-access-wbgnf\") on node \"crc\" DevicePath \"\"" Feb 28 14:54:04 crc kubenswrapper[4897]: I0228 14:54:04.910145 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538174-4vxkv" event={"ID":"bdecfdb7-616d-4c3a-8758-4ef539cb2db5","Type":"ContainerDied","Data":"235ed014de09760e526c03715968b6e53c773b40c874b65a410d388d4ed18801"} Feb 28 14:54:04 crc kubenswrapper[4897]: I0228 14:54:04.910211 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="235ed014de09760e526c03715968b6e53c773b40c874b65a410d388d4ed18801" Feb 28 14:54:04 crc kubenswrapper[4897]: I0228 14:54:04.910249 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538174-4vxkv" Feb 28 14:54:05 crc kubenswrapper[4897]: I0228 14:54:05.491873 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538168-tdcg6"] Feb 28 14:54:05 crc kubenswrapper[4897]: I0228 14:54:05.497635 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538168-tdcg6"] Feb 28 14:54:06 crc kubenswrapper[4897]: I0228 14:54:06.481601 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7fe4fbb-5283-4853-9032-b4dccf807d43" path="/var/lib/kubelet/pods/f7fe4fbb-5283-4853-9032-b4dccf807d43/volumes" Feb 28 14:54:07 crc kubenswrapper[4897]: I0228 14:54:07.571886 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-p6l9t/must-gather-gh5xr"] Feb 28 14:54:07 crc kubenswrapper[4897]: E0228 14:54:07.572553 4897 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="bdecfdb7-616d-4c3a-8758-4ef539cb2db5" containerName="oc" Feb 28 14:54:07 crc kubenswrapper[4897]: I0228 14:54:07.572568 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdecfdb7-616d-4c3a-8758-4ef539cb2db5" containerName="oc" Feb 28 14:54:07 crc kubenswrapper[4897]: I0228 14:54:07.572792 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdecfdb7-616d-4c3a-8758-4ef539cb2db5" containerName="oc" Feb 28 14:54:07 crc kubenswrapper[4897]: I0228 14:54:07.573839 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p6l9t/must-gather-gh5xr" Feb 28 14:54:07 crc kubenswrapper[4897]: I0228 14:54:07.578624 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-p6l9t"/"default-dockercfg-gppdb" Feb 28 14:54:07 crc kubenswrapper[4897]: I0228 14:54:07.578737 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-p6l9t"/"kube-root-ca.crt" Feb 28 14:54:07 crc kubenswrapper[4897]: I0228 14:54:07.578800 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-p6l9t"/"openshift-service-ca.crt" Feb 28 14:54:07 crc kubenswrapper[4897]: I0228 14:54:07.591337 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-p6l9t/must-gather-gh5xr"] Feb 28 14:54:07 crc kubenswrapper[4897]: I0228 14:54:07.699820 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677-must-gather-output\") pod \"must-gather-gh5xr\" (UID: \"2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677\") " pod="openshift-must-gather-p6l9t/must-gather-gh5xr" Feb 28 14:54:07 crc kubenswrapper[4897]: I0228 14:54:07.699869 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsl6f\" (UniqueName: 
\"kubernetes.io/projected/2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677-kube-api-access-vsl6f\") pod \"must-gather-gh5xr\" (UID: \"2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677\") " pod="openshift-must-gather-p6l9t/must-gather-gh5xr" Feb 28 14:54:07 crc kubenswrapper[4897]: I0228 14:54:07.801386 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677-must-gather-output\") pod \"must-gather-gh5xr\" (UID: \"2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677\") " pod="openshift-must-gather-p6l9t/must-gather-gh5xr" Feb 28 14:54:07 crc kubenswrapper[4897]: I0228 14:54:07.801655 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsl6f\" (UniqueName: \"kubernetes.io/projected/2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677-kube-api-access-vsl6f\") pod \"must-gather-gh5xr\" (UID: \"2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677\") " pod="openshift-must-gather-p6l9t/must-gather-gh5xr" Feb 28 14:54:07 crc kubenswrapper[4897]: I0228 14:54:07.801847 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677-must-gather-output\") pod \"must-gather-gh5xr\" (UID: \"2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677\") " pod="openshift-must-gather-p6l9t/must-gather-gh5xr" Feb 28 14:54:07 crc kubenswrapper[4897]: I0228 14:54:07.823934 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsl6f\" (UniqueName: \"kubernetes.io/projected/2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677-kube-api-access-vsl6f\") pod \"must-gather-gh5xr\" (UID: \"2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677\") " pod="openshift-must-gather-p6l9t/must-gather-gh5xr" Feb 28 14:54:07 crc kubenswrapper[4897]: I0228 14:54:07.895535 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-p6l9t/must-gather-gh5xr" Feb 28 14:54:08 crc kubenswrapper[4897]: I0228 14:54:08.394508 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-p6l9t/must-gather-gh5xr"] Feb 28 14:54:08 crc kubenswrapper[4897]: I0228 14:54:08.966371 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p6l9t/must-gather-gh5xr" event={"ID":"2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677","Type":"ContainerStarted","Data":"2745e6dfc3ab29e5da467b9230aef4c00c88d3d879bf3d13adc58f3585a06fb7"} Feb 28 14:54:10 crc kubenswrapper[4897]: I0228 14:54:10.458197 4897 scope.go:117] "RemoveContainer" containerID="eeaafc379056961bcfeca41078c7ab438aa415eb103c6d894f500936af0c4d8e" Feb 28 14:54:10 crc kubenswrapper[4897]: E0228 14:54:10.458707 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:54:16 crc kubenswrapper[4897]: I0228 14:54:16.046206 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p6l9t/must-gather-gh5xr" event={"ID":"2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677","Type":"ContainerStarted","Data":"3ff63bfe4db0f14cc4370ac8a7aa82162a3bfca638ea2f26cce5547ec01d1a59"} Feb 28 14:54:16 crc kubenswrapper[4897]: I0228 14:54:16.046822 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p6l9t/must-gather-gh5xr" event={"ID":"2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677","Type":"ContainerStarted","Data":"747af93356d4737fbde78d20ea726e0bc5e1960bc3e1ec3996ff9ed3d14d14a5"} Feb 28 14:54:16 crc kubenswrapper[4897]: I0228 14:54:16.071148 4897 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-must-gather-p6l9t/must-gather-gh5xr" podStartSLOduration=2.5919248919999998 podStartE2EDuration="9.071123782s" podCreationTimestamp="2026-02-28 14:54:07 +0000 UTC" firstStartedPulling="2026-02-28 14:54:08.397564119 +0000 UTC m=+5862.639884786" lastFinishedPulling="2026-02-28 14:54:14.876763019 +0000 UTC m=+5869.119083676" observedRunningTime="2026-02-28 14:54:16.066798999 +0000 UTC m=+5870.309119696" watchObservedRunningTime="2026-02-28 14:54:16.071123782 +0000 UTC m=+5870.313444469" Feb 28 14:54:19 crc kubenswrapper[4897]: I0228 14:54:19.767514 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-p6l9t/crc-debug-xhs7w"] Feb 28 14:54:19 crc kubenswrapper[4897]: I0228 14:54:19.769698 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p6l9t/crc-debug-xhs7w" Feb 28 14:54:19 crc kubenswrapper[4897]: I0228 14:54:19.884873 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/993c91d9-2a64-404f-99a2-cb1385b37924-host\") pod \"crc-debug-xhs7w\" (UID: \"993c91d9-2a64-404f-99a2-cb1385b37924\") " pod="openshift-must-gather-p6l9t/crc-debug-xhs7w" Feb 28 14:54:19 crc kubenswrapper[4897]: I0228 14:54:19.885000 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfq8w\" (UniqueName: \"kubernetes.io/projected/993c91d9-2a64-404f-99a2-cb1385b37924-kube-api-access-qfq8w\") pod \"crc-debug-xhs7w\" (UID: \"993c91d9-2a64-404f-99a2-cb1385b37924\") " pod="openshift-must-gather-p6l9t/crc-debug-xhs7w" Feb 28 14:54:19 crc kubenswrapper[4897]: I0228 14:54:19.986942 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfq8w\" (UniqueName: \"kubernetes.io/projected/993c91d9-2a64-404f-99a2-cb1385b37924-kube-api-access-qfq8w\") pod \"crc-debug-xhs7w\" (UID: 
\"993c91d9-2a64-404f-99a2-cb1385b37924\") " pod="openshift-must-gather-p6l9t/crc-debug-xhs7w" Feb 28 14:54:19 crc kubenswrapper[4897]: I0228 14:54:19.987101 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/993c91d9-2a64-404f-99a2-cb1385b37924-host\") pod \"crc-debug-xhs7w\" (UID: \"993c91d9-2a64-404f-99a2-cb1385b37924\") " pod="openshift-must-gather-p6l9t/crc-debug-xhs7w" Feb 28 14:54:19 crc kubenswrapper[4897]: I0228 14:54:19.987264 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/993c91d9-2a64-404f-99a2-cb1385b37924-host\") pod \"crc-debug-xhs7w\" (UID: \"993c91d9-2a64-404f-99a2-cb1385b37924\") " pod="openshift-must-gather-p6l9t/crc-debug-xhs7w" Feb 28 14:54:20 crc kubenswrapper[4897]: I0228 14:54:20.018562 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfq8w\" (UniqueName: \"kubernetes.io/projected/993c91d9-2a64-404f-99a2-cb1385b37924-kube-api-access-qfq8w\") pod \"crc-debug-xhs7w\" (UID: \"993c91d9-2a64-404f-99a2-cb1385b37924\") " pod="openshift-must-gather-p6l9t/crc-debug-xhs7w" Feb 28 14:54:20 crc kubenswrapper[4897]: I0228 14:54:20.107300 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-p6l9t/crc-debug-xhs7w" Feb 28 14:54:21 crc kubenswrapper[4897]: I0228 14:54:21.100938 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p6l9t/crc-debug-xhs7w" event={"ID":"993c91d9-2a64-404f-99a2-cb1385b37924","Type":"ContainerStarted","Data":"624a8f9dc7211d05ca384f5fb0d535c7f8ff70662802996e01d60dad3479bb76"} Feb 28 14:54:21 crc kubenswrapper[4897]: I0228 14:54:21.456390 4897 scope.go:117] "RemoveContainer" containerID="eeaafc379056961bcfeca41078c7ab438aa415eb103c6d894f500936af0c4d8e" Feb 28 14:54:21 crc kubenswrapper[4897]: E0228 14:54:21.456714 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:54:30 crc kubenswrapper[4897]: I0228 14:54:30.784182 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hmdww"] Feb 28 14:54:30 crc kubenswrapper[4897]: I0228 14:54:30.827919 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hmdww"] Feb 28 14:54:30 crc kubenswrapper[4897]: I0228 14:54:30.835701 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hmdww" Feb 28 14:54:30 crc kubenswrapper[4897]: I0228 14:54:30.869609 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0f23b12-b669-4c2c-beb1-68f4c2aae0f6-catalog-content\") pod \"redhat-operators-hmdww\" (UID: \"c0f23b12-b669-4c2c-beb1-68f4c2aae0f6\") " pod="openshift-marketplace/redhat-operators-hmdww" Feb 28 14:54:30 crc kubenswrapper[4897]: I0228 14:54:30.870303 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8c2b\" (UniqueName: \"kubernetes.io/projected/c0f23b12-b669-4c2c-beb1-68f4c2aae0f6-kube-api-access-j8c2b\") pod \"redhat-operators-hmdww\" (UID: \"c0f23b12-b669-4c2c-beb1-68f4c2aae0f6\") " pod="openshift-marketplace/redhat-operators-hmdww" Feb 28 14:54:30 crc kubenswrapper[4897]: I0228 14:54:30.870402 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0f23b12-b669-4c2c-beb1-68f4c2aae0f6-utilities\") pod \"redhat-operators-hmdww\" (UID: \"c0f23b12-b669-4c2c-beb1-68f4c2aae0f6\") " pod="openshift-marketplace/redhat-operators-hmdww" Feb 28 14:54:30 crc kubenswrapper[4897]: I0228 14:54:30.972145 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0f23b12-b669-4c2c-beb1-68f4c2aae0f6-catalog-content\") pod \"redhat-operators-hmdww\" (UID: \"c0f23b12-b669-4c2c-beb1-68f4c2aae0f6\") " pod="openshift-marketplace/redhat-operators-hmdww" Feb 28 14:54:30 crc kubenswrapper[4897]: I0228 14:54:30.972345 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8c2b\" (UniqueName: \"kubernetes.io/projected/c0f23b12-b669-4c2c-beb1-68f4c2aae0f6-kube-api-access-j8c2b\") pod \"redhat-operators-hmdww\" 
(UID: \"c0f23b12-b669-4c2c-beb1-68f4c2aae0f6\") " pod="openshift-marketplace/redhat-operators-hmdww" Feb 28 14:54:30 crc kubenswrapper[4897]: I0228 14:54:30.972392 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0f23b12-b669-4c2c-beb1-68f4c2aae0f6-utilities\") pod \"redhat-operators-hmdww\" (UID: \"c0f23b12-b669-4c2c-beb1-68f4c2aae0f6\") " pod="openshift-marketplace/redhat-operators-hmdww" Feb 28 14:54:30 crc kubenswrapper[4897]: I0228 14:54:30.972626 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0f23b12-b669-4c2c-beb1-68f4c2aae0f6-catalog-content\") pod \"redhat-operators-hmdww\" (UID: \"c0f23b12-b669-4c2c-beb1-68f4c2aae0f6\") " pod="openshift-marketplace/redhat-operators-hmdww" Feb 28 14:54:30 crc kubenswrapper[4897]: I0228 14:54:30.972752 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0f23b12-b669-4c2c-beb1-68f4c2aae0f6-utilities\") pod \"redhat-operators-hmdww\" (UID: \"c0f23b12-b669-4c2c-beb1-68f4c2aae0f6\") " pod="openshift-marketplace/redhat-operators-hmdww" Feb 28 14:54:31 crc kubenswrapper[4897]: I0228 14:54:31.008821 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8c2b\" (UniqueName: \"kubernetes.io/projected/c0f23b12-b669-4c2c-beb1-68f4c2aae0f6-kube-api-access-j8c2b\") pod \"redhat-operators-hmdww\" (UID: \"c0f23b12-b669-4c2c-beb1-68f4c2aae0f6\") " pod="openshift-marketplace/redhat-operators-hmdww" Feb 28 14:54:31 crc kubenswrapper[4897]: I0228 14:54:31.162728 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hmdww" Feb 28 14:54:31 crc kubenswrapper[4897]: I0228 14:54:31.194580 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p6l9t/crc-debug-xhs7w" event={"ID":"993c91d9-2a64-404f-99a2-cb1385b37924","Type":"ContainerStarted","Data":"720c886bba92d52c9aebfc4b8bb8323f4d21cb2d52c156b2d2551f20a85f6253"} Feb 28 14:54:31 crc kubenswrapper[4897]: I0228 14:54:31.239213 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-p6l9t/crc-debug-xhs7w" podStartSLOduration=1.740017573 podStartE2EDuration="12.239194195s" podCreationTimestamp="2026-02-28 14:54:19 +0000 UTC" firstStartedPulling="2026-02-28 14:54:20.198739237 +0000 UTC m=+5874.441059894" lastFinishedPulling="2026-02-28 14:54:30.697915859 +0000 UTC m=+5884.940236516" observedRunningTime="2026-02-28 14:54:31.222151513 +0000 UTC m=+5885.464472170" watchObservedRunningTime="2026-02-28 14:54:31.239194195 +0000 UTC m=+5885.481514852" Feb 28 14:54:31 crc kubenswrapper[4897]: I0228 14:54:31.670016 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hmdww"] Feb 28 14:54:31 crc kubenswrapper[4897]: W0228 14:54:31.670021 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0f23b12_b669_4c2c_beb1_68f4c2aae0f6.slice/crio-6308f77e0785619468c2476d1989aa78daab3fc909efe3fc92d036e59c6d5144 WatchSource:0}: Error finding container 6308f77e0785619468c2476d1989aa78daab3fc909efe3fc92d036e59c6d5144: Status 404 returned error can't find the container with id 6308f77e0785619468c2476d1989aa78daab3fc909efe3fc92d036e59c6d5144 Feb 28 14:54:32 crc kubenswrapper[4897]: I0228 14:54:32.206542 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmdww" 
event={"ID":"c0f23b12-b669-4c2c-beb1-68f4c2aae0f6","Type":"ContainerDied","Data":"e243b6e6aabb653e642eff63cb75e95d0de4fe0b394c24a4a575c847d956c3eb"} Feb 28 14:54:32 crc kubenswrapper[4897]: I0228 14:54:32.206498 4897 generic.go:334] "Generic (PLEG): container finished" podID="c0f23b12-b669-4c2c-beb1-68f4c2aae0f6" containerID="e243b6e6aabb653e642eff63cb75e95d0de4fe0b394c24a4a575c847d956c3eb" exitCode=0 Feb 28 14:54:32 crc kubenswrapper[4897]: I0228 14:54:32.206991 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmdww" event={"ID":"c0f23b12-b669-4c2c-beb1-68f4c2aae0f6","Type":"ContainerStarted","Data":"6308f77e0785619468c2476d1989aa78daab3fc909efe3fc92d036e59c6d5144"} Feb 28 14:54:33 crc kubenswrapper[4897]: I0228 14:54:33.225562 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmdww" event={"ID":"c0f23b12-b669-4c2c-beb1-68f4c2aae0f6","Type":"ContainerStarted","Data":"b9c94e646dd09ff815a301ad72ad6cfaaaa17fbbcad986eb84b04acf6e96665e"} Feb 28 14:54:36 crc kubenswrapper[4897]: I0228 14:54:36.463346 4897 scope.go:117] "RemoveContainer" containerID="eeaafc379056961bcfeca41078c7ab438aa415eb103c6d894f500936af0c4d8e" Feb 28 14:54:36 crc kubenswrapper[4897]: E0228 14:54:36.464029 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:54:39 crc kubenswrapper[4897]: I0228 14:54:39.303580 4897 generic.go:334] "Generic (PLEG): container finished" podID="c0f23b12-b669-4c2c-beb1-68f4c2aae0f6" containerID="b9c94e646dd09ff815a301ad72ad6cfaaaa17fbbcad986eb84b04acf6e96665e" exitCode=0 Feb 28 14:54:39 crc 
kubenswrapper[4897]: I0228 14:54:39.303635 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmdww" event={"ID":"c0f23b12-b669-4c2c-beb1-68f4c2aae0f6","Type":"ContainerDied","Data":"b9c94e646dd09ff815a301ad72ad6cfaaaa17fbbcad986eb84b04acf6e96665e"} Feb 28 14:54:41 crc kubenswrapper[4897]: I0228 14:54:41.323542 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmdww" event={"ID":"c0f23b12-b669-4c2c-beb1-68f4c2aae0f6","Type":"ContainerStarted","Data":"8b38bee7d8136b459becd4e81e987de0ccf42187479e2ef03343d3ef8d46e250"} Feb 28 14:54:49 crc kubenswrapper[4897]: I0228 14:54:49.456213 4897 scope.go:117] "RemoveContainer" containerID="eeaafc379056961bcfeca41078c7ab438aa415eb103c6d894f500936af0c4d8e" Feb 28 14:54:49 crc kubenswrapper[4897]: E0228 14:54:49.457083 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:54:51 crc kubenswrapper[4897]: I0228 14:54:51.163898 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hmdww" Feb 28 14:54:51 crc kubenswrapper[4897]: I0228 14:54:51.164384 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hmdww" Feb 28 14:54:51 crc kubenswrapper[4897]: I0228 14:54:51.216256 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hmdww" Feb 28 14:54:51 crc kubenswrapper[4897]: I0228 14:54:51.248188 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-hmdww" podStartSLOduration=13.740132519 podStartE2EDuration="21.24817093s" podCreationTimestamp="2026-02-28 14:54:30 +0000 UTC" firstStartedPulling="2026-02-28 14:54:32.208902545 +0000 UTC m=+5886.451223202" lastFinishedPulling="2026-02-28 14:54:39.716940956 +0000 UTC m=+5893.959261613" observedRunningTime="2026-02-28 14:54:41.347698408 +0000 UTC m=+5895.590019065" watchObservedRunningTime="2026-02-28 14:54:51.24817093 +0000 UTC m=+5905.490491587" Feb 28 14:54:51 crc kubenswrapper[4897]: I0228 14:54:51.504724 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hmdww" Feb 28 14:54:51 crc kubenswrapper[4897]: I0228 14:54:51.572666 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hmdww"] Feb 28 14:54:53 crc kubenswrapper[4897]: I0228 14:54:53.510560 4897 scope.go:117] "RemoveContainer" containerID="70b2540e803675fc256d13f9ab8c158bf72b62e8087461675d1cd40ba239c2ed" Feb 28 14:54:53 crc kubenswrapper[4897]: I0228 14:54:53.682074 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hmdww" podUID="c0f23b12-b669-4c2c-beb1-68f4c2aae0f6" containerName="registry-server" containerID="cri-o://8b38bee7d8136b459becd4e81e987de0ccf42187479e2ef03343d3ef8d46e250" gracePeriod=2 Feb 28 14:54:54 crc kubenswrapper[4897]: I0228 14:54:54.712639 4897 generic.go:334] "Generic (PLEG): container finished" podID="c0f23b12-b669-4c2c-beb1-68f4c2aae0f6" containerID="8b38bee7d8136b459becd4e81e987de0ccf42187479e2ef03343d3ef8d46e250" exitCode=0 Feb 28 14:54:54 crc kubenswrapper[4897]: I0228 14:54:54.713073 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmdww" event={"ID":"c0f23b12-b669-4c2c-beb1-68f4c2aae0f6","Type":"ContainerDied","Data":"8b38bee7d8136b459becd4e81e987de0ccf42187479e2ef03343d3ef8d46e250"} Feb 28 14:54:54 crc 
kubenswrapper[4897]: I0228 14:54:54.848629 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hmdww" Feb 28 14:54:54 crc kubenswrapper[4897]: I0228 14:54:54.971338 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0f23b12-b669-4c2c-beb1-68f4c2aae0f6-catalog-content\") pod \"c0f23b12-b669-4c2c-beb1-68f4c2aae0f6\" (UID: \"c0f23b12-b669-4c2c-beb1-68f4c2aae0f6\") " Feb 28 14:54:54 crc kubenswrapper[4897]: I0228 14:54:54.971654 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0f23b12-b669-4c2c-beb1-68f4c2aae0f6-utilities\") pod \"c0f23b12-b669-4c2c-beb1-68f4c2aae0f6\" (UID: \"c0f23b12-b669-4c2c-beb1-68f4c2aae0f6\") " Feb 28 14:54:54 crc kubenswrapper[4897]: I0228 14:54:54.971809 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8c2b\" (UniqueName: \"kubernetes.io/projected/c0f23b12-b669-4c2c-beb1-68f4c2aae0f6-kube-api-access-j8c2b\") pod \"c0f23b12-b669-4c2c-beb1-68f4c2aae0f6\" (UID: \"c0f23b12-b669-4c2c-beb1-68f4c2aae0f6\") " Feb 28 14:54:54 crc kubenswrapper[4897]: I0228 14:54:54.972420 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0f23b12-b669-4c2c-beb1-68f4c2aae0f6-utilities" (OuterVolumeSpecName: "utilities") pod "c0f23b12-b669-4c2c-beb1-68f4c2aae0f6" (UID: "c0f23b12-b669-4c2c-beb1-68f4c2aae0f6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:54:54 crc kubenswrapper[4897]: I0228 14:54:54.984963 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0f23b12-b669-4c2c-beb1-68f4c2aae0f6-kube-api-access-j8c2b" (OuterVolumeSpecName: "kube-api-access-j8c2b") pod "c0f23b12-b669-4c2c-beb1-68f4c2aae0f6" (UID: "c0f23b12-b669-4c2c-beb1-68f4c2aae0f6"). InnerVolumeSpecName "kube-api-access-j8c2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:54:55 crc kubenswrapper[4897]: I0228 14:54:55.074142 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0f23b12-b669-4c2c-beb1-68f4c2aae0f6-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 14:54:55 crc kubenswrapper[4897]: I0228 14:54:55.074180 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8c2b\" (UniqueName: \"kubernetes.io/projected/c0f23b12-b669-4c2c-beb1-68f4c2aae0f6-kube-api-access-j8c2b\") on node \"crc\" DevicePath \"\"" Feb 28 14:54:55 crc kubenswrapper[4897]: I0228 14:54:55.116387 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0f23b12-b669-4c2c-beb1-68f4c2aae0f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c0f23b12-b669-4c2c-beb1-68f4c2aae0f6" (UID: "c0f23b12-b669-4c2c-beb1-68f4c2aae0f6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:54:55 crc kubenswrapper[4897]: I0228 14:54:55.175868 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0f23b12-b669-4c2c-beb1-68f4c2aae0f6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 14:54:55 crc kubenswrapper[4897]: I0228 14:54:55.724293 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmdww" event={"ID":"c0f23b12-b669-4c2c-beb1-68f4c2aae0f6","Type":"ContainerDied","Data":"6308f77e0785619468c2476d1989aa78daab3fc909efe3fc92d036e59c6d5144"} Feb 28 14:54:55 crc kubenswrapper[4897]: I0228 14:54:55.724815 4897 scope.go:117] "RemoveContainer" containerID="8b38bee7d8136b459becd4e81e987de0ccf42187479e2ef03343d3ef8d46e250" Feb 28 14:54:55 crc kubenswrapper[4897]: I0228 14:54:55.724687 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hmdww" Feb 28 14:54:55 crc kubenswrapper[4897]: I0228 14:54:55.767973 4897 scope.go:117] "RemoveContainer" containerID="b9c94e646dd09ff815a301ad72ad6cfaaaa17fbbcad986eb84b04acf6e96665e" Feb 28 14:54:55 crc kubenswrapper[4897]: I0228 14:54:55.774229 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hmdww"] Feb 28 14:54:55 crc kubenswrapper[4897]: I0228 14:54:55.782570 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hmdww"] Feb 28 14:54:55 crc kubenswrapper[4897]: I0228 14:54:55.811096 4897 scope.go:117] "RemoveContainer" containerID="e243b6e6aabb653e642eff63cb75e95d0de4fe0b394c24a4a575c847d956c3eb" Feb 28 14:54:56 crc kubenswrapper[4897]: I0228 14:54:56.480607 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0f23b12-b669-4c2c-beb1-68f4c2aae0f6" path="/var/lib/kubelet/pods/c0f23b12-b669-4c2c-beb1-68f4c2aae0f6/volumes" Feb 28 14:55:02 crc 
kubenswrapper[4897]: I0228 14:55:02.456829 4897 scope.go:117] "RemoveContainer" containerID="eeaafc379056961bcfeca41078c7ab438aa415eb103c6d894f500936af0c4d8e" Feb 28 14:55:02 crc kubenswrapper[4897]: E0228 14:55:02.458806 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:55:14 crc kubenswrapper[4897]: I0228 14:55:14.458522 4897 scope.go:117] "RemoveContainer" containerID="eeaafc379056961bcfeca41078c7ab438aa415eb103c6d894f500936af0c4d8e" Feb 28 14:55:14 crc kubenswrapper[4897]: E0228 14:55:14.460122 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:55:23 crc kubenswrapper[4897]: I0228 14:55:23.053511 4897 generic.go:334] "Generic (PLEG): container finished" podID="993c91d9-2a64-404f-99a2-cb1385b37924" containerID="720c886bba92d52c9aebfc4b8bb8323f4d21cb2d52c156b2d2551f20a85f6253" exitCode=0 Feb 28 14:55:23 crc kubenswrapper[4897]: I0228 14:55:23.053633 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p6l9t/crc-debug-xhs7w" event={"ID":"993c91d9-2a64-404f-99a2-cb1385b37924","Type":"ContainerDied","Data":"720c886bba92d52c9aebfc4b8bb8323f4d21cb2d52c156b2d2551f20a85f6253"} Feb 28 14:55:24 crc kubenswrapper[4897]: I0228 14:55:24.186285 4897 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-must-gather-p6l9t/crc-debug-xhs7w" Feb 28 14:55:24 crc kubenswrapper[4897]: I0228 14:55:24.220159 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-p6l9t/crc-debug-xhs7w"] Feb 28 14:55:24 crc kubenswrapper[4897]: I0228 14:55:24.227915 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-p6l9t/crc-debug-xhs7w"] Feb 28 14:55:24 crc kubenswrapper[4897]: I0228 14:55:24.246617 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/993c91d9-2a64-404f-99a2-cb1385b37924-host\") pod \"993c91d9-2a64-404f-99a2-cb1385b37924\" (UID: \"993c91d9-2a64-404f-99a2-cb1385b37924\") " Feb 28 14:55:24 crc kubenswrapper[4897]: I0228 14:55:24.246737 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/993c91d9-2a64-404f-99a2-cb1385b37924-host" (OuterVolumeSpecName: "host") pod "993c91d9-2a64-404f-99a2-cb1385b37924" (UID: "993c91d9-2a64-404f-99a2-cb1385b37924"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 14:55:24 crc kubenswrapper[4897]: I0228 14:55:24.246847 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfq8w\" (UniqueName: \"kubernetes.io/projected/993c91d9-2a64-404f-99a2-cb1385b37924-kube-api-access-qfq8w\") pod \"993c91d9-2a64-404f-99a2-cb1385b37924\" (UID: \"993c91d9-2a64-404f-99a2-cb1385b37924\") " Feb 28 14:55:24 crc kubenswrapper[4897]: I0228 14:55:24.247340 4897 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/993c91d9-2a64-404f-99a2-cb1385b37924-host\") on node \"crc\" DevicePath \"\"" Feb 28 14:55:24 crc kubenswrapper[4897]: I0228 14:55:24.253637 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/993c91d9-2a64-404f-99a2-cb1385b37924-kube-api-access-qfq8w" (OuterVolumeSpecName: "kube-api-access-qfq8w") pod "993c91d9-2a64-404f-99a2-cb1385b37924" (UID: "993c91d9-2a64-404f-99a2-cb1385b37924"). InnerVolumeSpecName "kube-api-access-qfq8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:55:24 crc kubenswrapper[4897]: I0228 14:55:24.348525 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfq8w\" (UniqueName: \"kubernetes.io/projected/993c91d9-2a64-404f-99a2-cb1385b37924-kube-api-access-qfq8w\") on node \"crc\" DevicePath \"\"" Feb 28 14:55:24 crc kubenswrapper[4897]: I0228 14:55:24.469683 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="993c91d9-2a64-404f-99a2-cb1385b37924" path="/var/lib/kubelet/pods/993c91d9-2a64-404f-99a2-cb1385b37924/volumes" Feb 28 14:55:25 crc kubenswrapper[4897]: I0228 14:55:25.074360 4897 scope.go:117] "RemoveContainer" containerID="720c886bba92d52c9aebfc4b8bb8323f4d21cb2d52c156b2d2551f20a85f6253" Feb 28 14:55:25 crc kubenswrapper[4897]: I0228 14:55:25.074389 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-p6l9t/crc-debug-xhs7w" Feb 28 14:55:25 crc kubenswrapper[4897]: I0228 14:55:25.451714 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-p6l9t/crc-debug-rr7zv"] Feb 28 14:55:25 crc kubenswrapper[4897]: E0228 14:55:25.452783 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0f23b12-b669-4c2c-beb1-68f4c2aae0f6" containerName="extract-utilities" Feb 28 14:55:25 crc kubenswrapper[4897]: I0228 14:55:25.452809 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0f23b12-b669-4c2c-beb1-68f4c2aae0f6" containerName="extract-utilities" Feb 28 14:55:25 crc kubenswrapper[4897]: E0228 14:55:25.452855 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0f23b12-b669-4c2c-beb1-68f4c2aae0f6" containerName="registry-server" Feb 28 14:55:25 crc kubenswrapper[4897]: I0228 14:55:25.452867 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0f23b12-b669-4c2c-beb1-68f4c2aae0f6" containerName="registry-server" Feb 28 14:55:25 crc kubenswrapper[4897]: E0228 14:55:25.452911 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="993c91d9-2a64-404f-99a2-cb1385b37924" containerName="container-00" Feb 28 14:55:25 crc kubenswrapper[4897]: I0228 14:55:25.452924 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="993c91d9-2a64-404f-99a2-cb1385b37924" containerName="container-00" Feb 28 14:55:25 crc kubenswrapper[4897]: E0228 14:55:25.452949 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0f23b12-b669-4c2c-beb1-68f4c2aae0f6" containerName="extract-content" Feb 28 14:55:25 crc kubenswrapper[4897]: I0228 14:55:25.452961 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0f23b12-b669-4c2c-beb1-68f4c2aae0f6" containerName="extract-content" Feb 28 14:55:25 crc kubenswrapper[4897]: I0228 14:55:25.453288 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="993c91d9-2a64-404f-99a2-cb1385b37924" 
containerName="container-00" Feb 28 14:55:25 crc kubenswrapper[4897]: I0228 14:55:25.453333 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0f23b12-b669-4c2c-beb1-68f4c2aae0f6" containerName="registry-server" Feb 28 14:55:25 crc kubenswrapper[4897]: I0228 14:55:25.454425 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p6l9t/crc-debug-rr7zv" Feb 28 14:55:25 crc kubenswrapper[4897]: I0228 14:55:25.470804 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x58k\" (UniqueName: \"kubernetes.io/projected/a15f00a3-5168-4a9f-9327-187778e9faae-kube-api-access-8x58k\") pod \"crc-debug-rr7zv\" (UID: \"a15f00a3-5168-4a9f-9327-187778e9faae\") " pod="openshift-must-gather-p6l9t/crc-debug-rr7zv" Feb 28 14:55:25 crc kubenswrapper[4897]: I0228 14:55:25.470853 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a15f00a3-5168-4a9f-9327-187778e9faae-host\") pod \"crc-debug-rr7zv\" (UID: \"a15f00a3-5168-4a9f-9327-187778e9faae\") " pod="openshift-must-gather-p6l9t/crc-debug-rr7zv" Feb 28 14:55:25 crc kubenswrapper[4897]: I0228 14:55:25.573294 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x58k\" (UniqueName: \"kubernetes.io/projected/a15f00a3-5168-4a9f-9327-187778e9faae-kube-api-access-8x58k\") pod \"crc-debug-rr7zv\" (UID: \"a15f00a3-5168-4a9f-9327-187778e9faae\") " pod="openshift-must-gather-p6l9t/crc-debug-rr7zv" Feb 28 14:55:25 crc kubenswrapper[4897]: I0228 14:55:25.573379 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a15f00a3-5168-4a9f-9327-187778e9faae-host\") pod \"crc-debug-rr7zv\" (UID: \"a15f00a3-5168-4a9f-9327-187778e9faae\") " pod="openshift-must-gather-p6l9t/crc-debug-rr7zv" Feb 28 14:55:25 crc 
kubenswrapper[4897]: I0228 14:55:25.573516 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a15f00a3-5168-4a9f-9327-187778e9faae-host\") pod \"crc-debug-rr7zv\" (UID: \"a15f00a3-5168-4a9f-9327-187778e9faae\") " pod="openshift-must-gather-p6l9t/crc-debug-rr7zv" Feb 28 14:55:25 crc kubenswrapper[4897]: I0228 14:55:25.600243 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x58k\" (UniqueName: \"kubernetes.io/projected/a15f00a3-5168-4a9f-9327-187778e9faae-kube-api-access-8x58k\") pod \"crc-debug-rr7zv\" (UID: \"a15f00a3-5168-4a9f-9327-187778e9faae\") " pod="openshift-must-gather-p6l9t/crc-debug-rr7zv" Feb 28 14:55:25 crc kubenswrapper[4897]: I0228 14:55:25.789951 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p6l9t/crc-debug-rr7zv" Feb 28 14:55:25 crc kubenswrapper[4897]: W0228 14:55:25.844151 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda15f00a3_5168_4a9f_9327_187778e9faae.slice/crio-9309d7684dce283bf33fd5595b0bf64eb2627fc199f8904cf5f8c37848bc4035 WatchSource:0}: Error finding container 9309d7684dce283bf33fd5595b0bf64eb2627fc199f8904cf5f8c37848bc4035: Status 404 returned error can't find the container with id 9309d7684dce283bf33fd5595b0bf64eb2627fc199f8904cf5f8c37848bc4035 Feb 28 14:55:26 crc kubenswrapper[4897]: I0228 14:55:26.098271 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p6l9t/crc-debug-rr7zv" event={"ID":"a15f00a3-5168-4a9f-9327-187778e9faae","Type":"ContainerStarted","Data":"9309d7684dce283bf33fd5595b0bf64eb2627fc199f8904cf5f8c37848bc4035"} Feb 28 14:55:27 crc kubenswrapper[4897]: I0228 14:55:27.123116 4897 generic.go:334] "Generic (PLEG): container finished" podID="a15f00a3-5168-4a9f-9327-187778e9faae" 
containerID="1e5d6bfe0fa72c614c7807d97e46018ec634424a23b377e055d4210cfd59e814" exitCode=0 Feb 28 14:55:27 crc kubenswrapper[4897]: I0228 14:55:27.123389 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p6l9t/crc-debug-rr7zv" event={"ID":"a15f00a3-5168-4a9f-9327-187778e9faae","Type":"ContainerDied","Data":"1e5d6bfe0fa72c614c7807d97e46018ec634424a23b377e055d4210cfd59e814"} Feb 28 14:55:28 crc kubenswrapper[4897]: I0228 14:55:28.456358 4897 scope.go:117] "RemoveContainer" containerID="eeaafc379056961bcfeca41078c7ab438aa415eb103c6d894f500936af0c4d8e" Feb 28 14:55:28 crc kubenswrapper[4897]: E0228 14:55:28.456985 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:55:28 crc kubenswrapper[4897]: I0228 14:55:28.590916 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-p6l9t/crc-debug-rr7zv" Feb 28 14:55:28 crc kubenswrapper[4897]: I0228 14:55:28.748332 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8x58k\" (UniqueName: \"kubernetes.io/projected/a15f00a3-5168-4a9f-9327-187778e9faae-kube-api-access-8x58k\") pod \"a15f00a3-5168-4a9f-9327-187778e9faae\" (UID: \"a15f00a3-5168-4a9f-9327-187778e9faae\") " Feb 28 14:55:28 crc kubenswrapper[4897]: I0228 14:55:28.748412 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a15f00a3-5168-4a9f-9327-187778e9faae-host\") pod \"a15f00a3-5168-4a9f-9327-187778e9faae\" (UID: \"a15f00a3-5168-4a9f-9327-187778e9faae\") " Feb 28 14:55:28 crc kubenswrapper[4897]: I0228 14:55:28.748535 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a15f00a3-5168-4a9f-9327-187778e9faae-host" (OuterVolumeSpecName: "host") pod "a15f00a3-5168-4a9f-9327-187778e9faae" (UID: "a15f00a3-5168-4a9f-9327-187778e9faae"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 14:55:28 crc kubenswrapper[4897]: I0228 14:55:28.749098 4897 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a15f00a3-5168-4a9f-9327-187778e9faae-host\") on node \"crc\" DevicePath \"\"" Feb 28 14:55:28 crc kubenswrapper[4897]: I0228 14:55:28.767002 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a15f00a3-5168-4a9f-9327-187778e9faae-kube-api-access-8x58k" (OuterVolumeSpecName: "kube-api-access-8x58k") pod "a15f00a3-5168-4a9f-9327-187778e9faae" (UID: "a15f00a3-5168-4a9f-9327-187778e9faae"). InnerVolumeSpecName "kube-api-access-8x58k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:55:28 crc kubenswrapper[4897]: I0228 14:55:28.850590 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8x58k\" (UniqueName: \"kubernetes.io/projected/a15f00a3-5168-4a9f-9327-187778e9faae-kube-api-access-8x58k\") on node \"crc\" DevicePath \"\"" Feb 28 14:55:29 crc kubenswrapper[4897]: I0228 14:55:29.140410 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p6l9t/crc-debug-rr7zv" event={"ID":"a15f00a3-5168-4a9f-9327-187778e9faae","Type":"ContainerDied","Data":"9309d7684dce283bf33fd5595b0bf64eb2627fc199f8904cf5f8c37848bc4035"} Feb 28 14:55:29 crc kubenswrapper[4897]: I0228 14:55:29.140452 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9309d7684dce283bf33fd5595b0bf64eb2627fc199f8904cf5f8c37848bc4035" Feb 28 14:55:29 crc kubenswrapper[4897]: I0228 14:55:29.140473 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p6l9t/crc-debug-rr7zv" Feb 28 14:55:29 crc kubenswrapper[4897]: I0228 14:55:29.653289 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-p6l9t/crc-debug-rr7zv"] Feb 28 14:55:29 crc kubenswrapper[4897]: I0228 14:55:29.661478 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-p6l9t/crc-debug-rr7zv"] Feb 28 14:55:30 crc kubenswrapper[4897]: I0228 14:55:30.477948 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a15f00a3-5168-4a9f-9327-187778e9faae" path="/var/lib/kubelet/pods/a15f00a3-5168-4a9f-9327-187778e9faae/volumes" Feb 28 14:55:30 crc kubenswrapper[4897]: I0228 14:55:30.901384 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-p6l9t/crc-debug-jgfrb"] Feb 28 14:55:30 crc kubenswrapper[4897]: E0228 14:55:30.901877 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a15f00a3-5168-4a9f-9327-187778e9faae" 
containerName="container-00" Feb 28 14:55:30 crc kubenswrapper[4897]: I0228 14:55:30.901893 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="a15f00a3-5168-4a9f-9327-187778e9faae" containerName="container-00" Feb 28 14:55:30 crc kubenswrapper[4897]: I0228 14:55:30.902138 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="a15f00a3-5168-4a9f-9327-187778e9faae" containerName="container-00" Feb 28 14:55:30 crc kubenswrapper[4897]: I0228 14:55:30.902953 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p6l9t/crc-debug-jgfrb" Feb 28 14:55:31 crc kubenswrapper[4897]: I0228 14:55:31.000981 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxx42\" (UniqueName: \"kubernetes.io/projected/c7f65802-6607-424b-bad9-3177579c9162-kube-api-access-zxx42\") pod \"crc-debug-jgfrb\" (UID: \"c7f65802-6607-424b-bad9-3177579c9162\") " pod="openshift-must-gather-p6l9t/crc-debug-jgfrb" Feb 28 14:55:31 crc kubenswrapper[4897]: I0228 14:55:31.001192 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c7f65802-6607-424b-bad9-3177579c9162-host\") pod \"crc-debug-jgfrb\" (UID: \"c7f65802-6607-424b-bad9-3177579c9162\") " pod="openshift-must-gather-p6l9t/crc-debug-jgfrb" Feb 28 14:55:31 crc kubenswrapper[4897]: I0228 14:55:31.102715 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxx42\" (UniqueName: \"kubernetes.io/projected/c7f65802-6607-424b-bad9-3177579c9162-kube-api-access-zxx42\") pod \"crc-debug-jgfrb\" (UID: \"c7f65802-6607-424b-bad9-3177579c9162\") " pod="openshift-must-gather-p6l9t/crc-debug-jgfrb" Feb 28 14:55:31 crc kubenswrapper[4897]: I0228 14:55:31.102890 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/c7f65802-6607-424b-bad9-3177579c9162-host\") pod \"crc-debug-jgfrb\" (UID: \"c7f65802-6607-424b-bad9-3177579c9162\") " pod="openshift-must-gather-p6l9t/crc-debug-jgfrb" Feb 28 14:55:31 crc kubenswrapper[4897]: I0228 14:55:31.103012 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c7f65802-6607-424b-bad9-3177579c9162-host\") pod \"crc-debug-jgfrb\" (UID: \"c7f65802-6607-424b-bad9-3177579c9162\") " pod="openshift-must-gather-p6l9t/crc-debug-jgfrb" Feb 28 14:55:31 crc kubenswrapper[4897]: I0228 14:55:31.132535 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxx42\" (UniqueName: \"kubernetes.io/projected/c7f65802-6607-424b-bad9-3177579c9162-kube-api-access-zxx42\") pod \"crc-debug-jgfrb\" (UID: \"c7f65802-6607-424b-bad9-3177579c9162\") " pod="openshift-must-gather-p6l9t/crc-debug-jgfrb" Feb 28 14:55:31 crc kubenswrapper[4897]: I0228 14:55:31.233694 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-p6l9t/crc-debug-jgfrb" Feb 28 14:55:31 crc kubenswrapper[4897]: W0228 14:55:31.266580 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7f65802_6607_424b_bad9_3177579c9162.slice/crio-709ac886748f09defab460e2061bcf8115d2b008947464b9b174d47b6de4bf23 WatchSource:0}: Error finding container 709ac886748f09defab460e2061bcf8115d2b008947464b9b174d47b6de4bf23: Status 404 returned error can't find the container with id 709ac886748f09defab460e2061bcf8115d2b008947464b9b174d47b6de4bf23 Feb 28 14:55:32 crc kubenswrapper[4897]: I0228 14:55:32.182030 4897 generic.go:334] "Generic (PLEG): container finished" podID="c7f65802-6607-424b-bad9-3177579c9162" containerID="d24475b94bb5809d7e2826944a5641e9dd9eeacd87771aaa11e714722cd9b45b" exitCode=0 Feb 28 14:55:32 crc kubenswrapper[4897]: I0228 14:55:32.182129 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p6l9t/crc-debug-jgfrb" event={"ID":"c7f65802-6607-424b-bad9-3177579c9162","Type":"ContainerDied","Data":"d24475b94bb5809d7e2826944a5641e9dd9eeacd87771aaa11e714722cd9b45b"} Feb 28 14:55:32 crc kubenswrapper[4897]: I0228 14:55:32.182471 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p6l9t/crc-debug-jgfrb" event={"ID":"c7f65802-6607-424b-bad9-3177579c9162","Type":"ContainerStarted","Data":"709ac886748f09defab460e2061bcf8115d2b008947464b9b174d47b6de4bf23"} Feb 28 14:55:32 crc kubenswrapper[4897]: I0228 14:55:32.237778 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-p6l9t/crc-debug-jgfrb"] Feb 28 14:55:32 crc kubenswrapper[4897]: I0228 14:55:32.248824 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-p6l9t/crc-debug-jgfrb"] Feb 28 14:55:33 crc kubenswrapper[4897]: I0228 14:55:33.319795 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-p6l9t/crc-debug-jgfrb" Feb 28 14:55:33 crc kubenswrapper[4897]: I0228 14:55:33.456161 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxx42\" (UniqueName: \"kubernetes.io/projected/c7f65802-6607-424b-bad9-3177579c9162-kube-api-access-zxx42\") pod \"c7f65802-6607-424b-bad9-3177579c9162\" (UID: \"c7f65802-6607-424b-bad9-3177579c9162\") " Feb 28 14:55:33 crc kubenswrapper[4897]: I0228 14:55:33.456395 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c7f65802-6607-424b-bad9-3177579c9162-host\") pod \"c7f65802-6607-424b-bad9-3177579c9162\" (UID: \"c7f65802-6607-424b-bad9-3177579c9162\") " Feb 28 14:55:33 crc kubenswrapper[4897]: I0228 14:55:33.456504 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7f65802-6607-424b-bad9-3177579c9162-host" (OuterVolumeSpecName: "host") pod "c7f65802-6607-424b-bad9-3177579c9162" (UID: "c7f65802-6607-424b-bad9-3177579c9162"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 14:55:33 crc kubenswrapper[4897]: I0228 14:55:33.456984 4897 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c7f65802-6607-424b-bad9-3177579c9162-host\") on node \"crc\" DevicePath \"\"" Feb 28 14:55:33 crc kubenswrapper[4897]: I0228 14:55:33.478332 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7f65802-6607-424b-bad9-3177579c9162-kube-api-access-zxx42" (OuterVolumeSpecName: "kube-api-access-zxx42") pod "c7f65802-6607-424b-bad9-3177579c9162" (UID: "c7f65802-6607-424b-bad9-3177579c9162"). InnerVolumeSpecName "kube-api-access-zxx42". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:55:33 crc kubenswrapper[4897]: I0228 14:55:33.558568 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxx42\" (UniqueName: \"kubernetes.io/projected/c7f65802-6607-424b-bad9-3177579c9162-kube-api-access-zxx42\") on node \"crc\" DevicePath \"\"" Feb 28 14:55:34 crc kubenswrapper[4897]: I0228 14:55:34.207382 4897 scope.go:117] "RemoveContainer" containerID="d24475b94bb5809d7e2826944a5641e9dd9eeacd87771aaa11e714722cd9b45b" Feb 28 14:55:34 crc kubenswrapper[4897]: I0228 14:55:34.207494 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p6l9t/crc-debug-jgfrb" Feb 28 14:55:34 crc kubenswrapper[4897]: I0228 14:55:34.473339 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7f65802-6607-424b-bad9-3177579c9162" path="/var/lib/kubelet/pods/c7f65802-6607-424b-bad9-3177579c9162/volumes" Feb 28 14:55:42 crc kubenswrapper[4897]: I0228 14:55:42.456948 4897 scope.go:117] "RemoveContainer" containerID="eeaafc379056961bcfeca41078c7ab438aa415eb103c6d894f500936af0c4d8e" Feb 28 14:55:42 crc kubenswrapper[4897]: E0228 14:55:42.457536 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:55:55 crc kubenswrapper[4897]: I0228 14:55:55.456085 4897 scope.go:117] "RemoveContainer" containerID="eeaafc379056961bcfeca41078c7ab438aa415eb103c6d894f500936af0c4d8e" Feb 28 14:55:55 crc kubenswrapper[4897]: E0228 14:55:55.457638 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:56:00 crc kubenswrapper[4897]: I0228 14:56:00.158224 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538176-c6b7s"] Feb 28 14:56:00 crc kubenswrapper[4897]: E0228 14:56:00.159743 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7f65802-6607-424b-bad9-3177579c9162" containerName="container-00" Feb 28 14:56:00 crc kubenswrapper[4897]: I0228 14:56:00.159761 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7f65802-6607-424b-bad9-3177579c9162" containerName="container-00" Feb 28 14:56:00 crc kubenswrapper[4897]: I0228 14:56:00.160030 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7f65802-6607-424b-bad9-3177579c9162" containerName="container-00" Feb 28 14:56:00 crc kubenswrapper[4897]: I0228 14:56:00.161252 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538176-c6b7s" Feb 28 14:56:00 crc kubenswrapper[4897]: I0228 14:56:00.165297 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 14:56:00 crc kubenswrapper[4897]: I0228 14:56:00.170385 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 14:56:00 crc kubenswrapper[4897]: I0228 14:56:00.170423 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 14:56:00 crc kubenswrapper[4897]: I0228 14:56:00.198483 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsw7j\" (UniqueName: \"kubernetes.io/projected/614e76f9-7f4a-4075-bb88-e9b651ef700f-kube-api-access-xsw7j\") pod \"auto-csr-approver-29538176-c6b7s\" (UID: \"614e76f9-7f4a-4075-bb88-e9b651ef700f\") " pod="openshift-infra/auto-csr-approver-29538176-c6b7s" Feb 28 14:56:00 crc kubenswrapper[4897]: I0228 14:56:00.200558 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538176-c6b7s"] Feb 28 14:56:00 crc kubenswrapper[4897]: I0228 14:56:00.301301 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsw7j\" (UniqueName: \"kubernetes.io/projected/614e76f9-7f4a-4075-bb88-e9b651ef700f-kube-api-access-xsw7j\") pod \"auto-csr-approver-29538176-c6b7s\" (UID: \"614e76f9-7f4a-4075-bb88-e9b651ef700f\") " pod="openshift-infra/auto-csr-approver-29538176-c6b7s" Feb 28 14:56:00 crc kubenswrapper[4897]: I0228 14:56:00.327498 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsw7j\" (UniqueName: \"kubernetes.io/projected/614e76f9-7f4a-4075-bb88-e9b651ef700f-kube-api-access-xsw7j\") pod \"auto-csr-approver-29538176-c6b7s\" (UID: \"614e76f9-7f4a-4075-bb88-e9b651ef700f\") " 
pod="openshift-infra/auto-csr-approver-29538176-c6b7s" Feb 28 14:56:00 crc kubenswrapper[4897]: I0228 14:56:00.505343 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538176-c6b7s" Feb 28 14:56:00 crc kubenswrapper[4897]: I0228 14:56:00.986518 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538176-c6b7s"] Feb 28 14:56:00 crc kubenswrapper[4897]: W0228 14:56:00.993995 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod614e76f9_7f4a_4075_bb88_e9b651ef700f.slice/crio-1aae3964c3efee95e8ea0c331532336f9ac2f6b8791f7c90fae8129d0f52ebdc WatchSource:0}: Error finding container 1aae3964c3efee95e8ea0c331532336f9ac2f6b8791f7c90fae8129d0f52ebdc: Status 404 returned error can't find the container with id 1aae3964c3efee95e8ea0c331532336f9ac2f6b8791f7c90fae8129d0f52ebdc Feb 28 14:56:00 crc kubenswrapper[4897]: I0228 14:56:00.997318 4897 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 14:56:01 crc kubenswrapper[4897]: I0228 14:56:01.554240 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538176-c6b7s" event={"ID":"614e76f9-7f4a-4075-bb88-e9b651ef700f","Type":"ContainerStarted","Data":"1aae3964c3efee95e8ea0c331532336f9ac2f6b8791f7c90fae8129d0f52ebdc"} Feb 28 14:56:02 crc kubenswrapper[4897]: I0228 14:56:02.596065 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538176-c6b7s" event={"ID":"614e76f9-7f4a-4075-bb88-e9b651ef700f","Type":"ContainerStarted","Data":"6b472618cc7b552285c9370fd80e91df8f9719ae11f28b38f514db4b79b625ee"} Feb 28 14:56:02 crc kubenswrapper[4897]: I0228 14:56:02.611006 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29538176-c6b7s" 
podStartSLOduration=1.700045798 podStartE2EDuration="2.610988557s" podCreationTimestamp="2026-02-28 14:56:00 +0000 UTC" firstStartedPulling="2026-02-28 14:56:00.997094622 +0000 UTC m=+5975.239415279" lastFinishedPulling="2026-02-28 14:56:01.908037361 +0000 UTC m=+5976.150358038" observedRunningTime="2026-02-28 14:56:02.610150263 +0000 UTC m=+5976.852470930" watchObservedRunningTime="2026-02-28 14:56:02.610988557 +0000 UTC m=+5976.853309224" Feb 28 14:56:03 crc kubenswrapper[4897]: I0228 14:56:03.626640 4897 generic.go:334] "Generic (PLEG): container finished" podID="614e76f9-7f4a-4075-bb88-e9b651ef700f" containerID="6b472618cc7b552285c9370fd80e91df8f9719ae11f28b38f514db4b79b625ee" exitCode=0 Feb 28 14:56:03 crc kubenswrapper[4897]: I0228 14:56:03.626761 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538176-c6b7s" event={"ID":"614e76f9-7f4a-4075-bb88-e9b651ef700f","Type":"ContainerDied","Data":"6b472618cc7b552285c9370fd80e91df8f9719ae11f28b38f514db4b79b625ee"} Feb 28 14:56:05 crc kubenswrapper[4897]: I0228 14:56:05.038570 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538176-c6b7s" Feb 28 14:56:05 crc kubenswrapper[4897]: I0228 14:56:05.208930 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsw7j\" (UniqueName: \"kubernetes.io/projected/614e76f9-7f4a-4075-bb88-e9b651ef700f-kube-api-access-xsw7j\") pod \"614e76f9-7f4a-4075-bb88-e9b651ef700f\" (UID: \"614e76f9-7f4a-4075-bb88-e9b651ef700f\") " Feb 28 14:56:05 crc kubenswrapper[4897]: I0228 14:56:05.221224 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/614e76f9-7f4a-4075-bb88-e9b651ef700f-kube-api-access-xsw7j" (OuterVolumeSpecName: "kube-api-access-xsw7j") pod "614e76f9-7f4a-4075-bb88-e9b651ef700f" (UID: "614e76f9-7f4a-4075-bb88-e9b651ef700f"). InnerVolumeSpecName "kube-api-access-xsw7j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:56:05 crc kubenswrapper[4897]: I0228 14:56:05.311613 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsw7j\" (UniqueName: \"kubernetes.io/projected/614e76f9-7f4a-4075-bb88-e9b651ef700f-kube-api-access-xsw7j\") on node \"crc\" DevicePath \"\"" Feb 28 14:56:05 crc kubenswrapper[4897]: I0228 14:56:05.660626 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538176-c6b7s" event={"ID":"614e76f9-7f4a-4075-bb88-e9b651ef700f","Type":"ContainerDied","Data":"1aae3964c3efee95e8ea0c331532336f9ac2f6b8791f7c90fae8129d0f52ebdc"} Feb 28 14:56:05 crc kubenswrapper[4897]: I0228 14:56:05.660998 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1aae3964c3efee95e8ea0c331532336f9ac2f6b8791f7c90fae8129d0f52ebdc" Feb 28 14:56:05 crc kubenswrapper[4897]: I0228 14:56:05.660687 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538176-c6b7s" Feb 28 14:56:05 crc kubenswrapper[4897]: I0228 14:56:05.701420 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538170-r4mjg"] Feb 28 14:56:05 crc kubenswrapper[4897]: I0228 14:56:05.711664 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538170-r4mjg"] Feb 28 14:56:06 crc kubenswrapper[4897]: I0228 14:56:06.492283 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="191af18f-b904-42c5-b5c7-601a1cbdbebf" path="/var/lib/kubelet/pods/191af18f-b904-42c5-b5c7-601a1cbdbebf/volumes" Feb 28 14:56:07 crc kubenswrapper[4897]: I0228 14:56:07.205617 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6cc5d7cb8-nws5v_d2375f60-8d95-4855-ace5-ecbfadb87114/barbican-api/0.log" Feb 28 14:56:07 crc kubenswrapper[4897]: I0228 14:56:07.235104 4897 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_barbican-api-6cc5d7cb8-nws5v_d2375f60-8d95-4855-ace5-ecbfadb87114/barbican-api-log/0.log" Feb 28 14:56:07 crc kubenswrapper[4897]: I0228 14:56:07.395592 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7566789bf4-gcgqv_ae3d152c-8c19-456d-82a4-184138ae3541/barbican-keystone-listener/0.log" Feb 28 14:56:07 crc kubenswrapper[4897]: I0228 14:56:07.455909 4897 scope.go:117] "RemoveContainer" containerID="eeaafc379056961bcfeca41078c7ab438aa415eb103c6d894f500936af0c4d8e" Feb 28 14:56:07 crc kubenswrapper[4897]: E0228 14:56:07.456165 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:56:07 crc kubenswrapper[4897]: I0228 14:56:07.489150 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-755f78ff99-pb5jr_8315bc28-3362-4d67-9561-f2b8fa3e69b7/barbican-worker/0.log" Feb 28 14:56:07 crc kubenswrapper[4897]: I0228 14:56:07.492130 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7566789bf4-gcgqv_ae3d152c-8c19-456d-82a4-184138ae3541/barbican-keystone-listener-log/0.log" Feb 28 14:56:07 crc kubenswrapper[4897]: I0228 14:56:07.593329 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-755f78ff99-pb5jr_8315bc28-3362-4d67-9561-f2b8fa3e69b7/barbican-worker-log/0.log" Feb 28 14:56:07 crc kubenswrapper[4897]: I0228 14:56:07.713370 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-62dsb_efd25e11-574a-4504-94fc-509e4f367939/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 14:56:07 crc kubenswrapper[4897]: I0228 14:56:07.938504 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_49ad0c65-4304-477c-8cfa-c344fcf2ab9b/sg-core/0.log" Feb 28 14:56:07 crc kubenswrapper[4897]: I0228 14:56:07.939362 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_49ad0c65-4304-477c-8cfa-c344fcf2ab9b/ceilometer-notification-agent/0.log" Feb 28 14:56:07 crc kubenswrapper[4897]: I0228 14:56:07.952708 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_49ad0c65-4304-477c-8cfa-c344fcf2ab9b/proxy-httpd/0.log" Feb 28 14:56:07 crc kubenswrapper[4897]: I0228 14:56:07.962488 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_49ad0c65-4304-477c-8cfa-c344fcf2ab9b/ceilometer-central-agent/0.log" Feb 28 14:56:08 crc kubenswrapper[4897]: I0228 14:56:08.180671 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_500bdde3-9ae3-4829-8cee-5e85a7c218a9/cinder-api-log/0.log" Feb 28 14:56:08 crc kubenswrapper[4897]: I0228 14:56:08.486852 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_35d2e345-c465-43d1-a9e2-0592960bc377/probe/0.log" Feb 28 14:56:08 crc kubenswrapper[4897]: I0228 14:56:08.644884 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_500bdde3-9ae3-4829-8cee-5e85a7c218a9/cinder-api/0.log" Feb 28 14:56:08 crc kubenswrapper[4897]: I0228 14:56:08.665752 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_b0bef6c5-aed5-464c-8518-9be02ba3cb86/cinder-scheduler/0.log" Feb 28 14:56:08 crc kubenswrapper[4897]: I0228 14:56:08.673919 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-backup-0_35d2e345-c465-43d1-a9e2-0592960bc377/cinder-backup/0.log" Feb 28 14:56:08 crc kubenswrapper[4897]: I0228 14:56:08.727667 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_b0bef6c5-aed5-464c-8518-9be02ba3cb86/probe/0.log" Feb 28 14:56:08 crc kubenswrapper[4897]: I0228 14:56:08.902790 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_622d265c-1cb2-47ac-b31e-5d226545d4de/probe/0.log" Feb 28 14:56:08 crc kubenswrapper[4897]: I0228 14:56:08.952414 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_622d265c-1cb2-47ac-b31e-5d226545d4de/cinder-volume/0.log" Feb 28 14:56:09 crc kubenswrapper[4897]: I0228 14:56:09.133786 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_6236a51d-66cb-4285-bc2b-767cf39c989a/probe/0.log" Feb 28 14:56:09 crc kubenswrapper[4897]: I0228 14:56:09.199749 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_6236a51d-66cb-4285-bc2b-767cf39c989a/cinder-volume/0.log" Feb 28 14:56:09 crc kubenswrapper[4897]: I0228 14:56:09.245872 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-9245w_474a32f3-7317-40c6-80cb-6e36415a2d5d/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 14:56:09 crc kubenswrapper[4897]: I0228 14:56:09.379699 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-xss7j_f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 14:56:09 crc kubenswrapper[4897]: I0228 14:56:09.451478 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-557fbb6cc7-qchzg_9045e426-bdc0-4327-8c53-1f3e64d1e3a2/init/0.log" Feb 28 14:56:09 crc kubenswrapper[4897]: I0228 
14:56:09.668770 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-557fbb6cc7-qchzg_9045e426-bdc0-4327-8c53-1f3e64d1e3a2/init/0.log" Feb 28 14:56:09 crc kubenswrapper[4897]: I0228 14:56:09.700121 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-x2blf_9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 14:56:09 crc kubenswrapper[4897]: I0228 14:56:09.865731 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-557fbb6cc7-qchzg_9045e426-bdc0-4327-8c53-1f3e64d1e3a2/dnsmasq-dns/0.log" Feb 28 14:56:09 crc kubenswrapper[4897]: I0228 14:56:09.912844 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_891bad69-3c9e-4c8a-b5fb-526b4ce79ec5/glance-httpd/0.log" Feb 28 14:56:09 crc kubenswrapper[4897]: I0228 14:56:09.943008 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_891bad69-3c9e-4c8a-b5fb-526b4ce79ec5/glance-log/0.log" Feb 28 14:56:10 crc kubenswrapper[4897]: I0228 14:56:10.127157 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_5c9c2403-d54a-4278-b29c-e0533e360579/glance-httpd/0.log" Feb 28 14:56:10 crc kubenswrapper[4897]: I0228 14:56:10.153598 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_5c9c2403-d54a-4278-b29c-e0533e360579/glance-log/0.log" Feb 28 14:56:10 crc kubenswrapper[4897]: I0228 14:56:10.368903 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7df779db98-ljwk8_e0db6a4f-19e4-488c-bc45-9619565bdf57/horizon/0.log" Feb 28 14:56:10 crc kubenswrapper[4897]: I0228 14:56:10.398566 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz_fdc8cc43-763f-4d3e-8630-a811a93a4157/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 14:56:10 crc kubenswrapper[4897]: I0228 14:56:10.619864 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29538121-psnhm_24ea6562-040d-4eb4-865b-692acf8b2a46/keystone-cron/0.log" Feb 28 14:56:10 crc kubenswrapper[4897]: I0228 14:56:10.642971 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-d4ng9_8651da53-e976-4395-964b-a5c077d64a26/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 14:56:10 crc kubenswrapper[4897]: I0228 14:56:10.834305 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_d9c5123c-5d3c-47f6-b0d5-20e731e7ebaf/kube-state-metrics/0.log" Feb 28 14:56:11 crc kubenswrapper[4897]: I0228 14:56:11.004002 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7df779db98-ljwk8_e0db6a4f-19e4-488c-bc45-9619565bdf57/horizon-log/0.log" Feb 28 14:56:11 crc kubenswrapper[4897]: I0228 14:56:11.200146 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-d5c8f94c5-9sc2w_b7c377e3-d32d-49da-801c-155853ae1d70/keystone-api/0.log" Feb 28 14:56:11 crc kubenswrapper[4897]: I0228 14:56:11.245181 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-497mc_ff698979-3e20-4b13-9cae-2b0d353cae40/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 14:56:11 crc kubenswrapper[4897]: I0228 14:56:11.587514 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm_e41a407d-96e5-4c5d-8890-fe4cb2f59a0f/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 14:56:11 crc kubenswrapper[4897]: I0228 14:56:11.642533 4897 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_neutron-59b7cd74f9-xphhh_cfe88e43-2315-4773-85fa-459dab7fb23d/neutron-httpd/0.log" Feb 28 14:56:11 crc kubenswrapper[4897]: I0228 14:56:11.674760 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-59b7cd74f9-xphhh_cfe88e43-2315-4773-85fa-459dab7fb23d/neutron-api/0.log" Feb 28 14:56:11 crc kubenswrapper[4897]: I0228 14:56:11.792376 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_48885530-3df1-42cf-9c7f-2f86a21026a9/setup-container/0.log" Feb 28 14:56:12 crc kubenswrapper[4897]: I0228 14:56:12.032933 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_48885530-3df1-42cf-9c7f-2f86a21026a9/rabbitmq/0.log" Feb 28 14:56:12 crc kubenswrapper[4897]: I0228 14:56:12.092518 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_48885530-3df1-42cf-9c7f-2f86a21026a9/setup-container/0.log" Feb 28 14:56:12 crc kubenswrapper[4897]: I0228 14:56:12.610491 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_bf3d7f16-bcfc-4fa4-92d4-9b03f42375de/nova-cell0-conductor-conductor/0.log" Feb 28 14:56:12 crc kubenswrapper[4897]: I0228 14:56:12.961016 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_6f3fc432-044c-4be6-b1b3-049e2d2842d5/nova-cell1-conductor-conductor/0.log" Feb 28 14:56:13 crc kubenswrapper[4897]: I0228 14:56:13.282839 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_b200a830-20fd-475c-bf9f-7c17ae963355/nova-cell1-novncproxy-novncproxy/0.log" Feb 28 14:56:13 crc kubenswrapper[4897]: I0228 14:56:13.477423 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-ls724_1fc98763-e64a-41e1-a4ff-0c72faa961fe/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 14:56:13 crc kubenswrapper[4897]: I0228 14:56:13.526944 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_fede0b0b-b487-4e63-9622-4863d3575d89/nova-api-log/0.log" Feb 28 14:56:13 crc kubenswrapper[4897]: I0228 14:56:13.784087 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_151132bb-bcf9-4d40-a72b-5f6b80c23fb1/nova-metadata-log/0.log" Feb 28 14:56:13 crc kubenswrapper[4897]: I0228 14:56:13.970640 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_fede0b0b-b487-4e63-9622-4863d3575d89/nova-api-api/0.log" Feb 28 14:56:14 crc kubenswrapper[4897]: I0228 14:56:14.203138 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d7f297ea-652d-47ae-9831-fad10c6127ad/mysql-bootstrap/0.log" Feb 28 14:56:14 crc kubenswrapper[4897]: I0228 14:56:14.226007 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_f5c10f14-b08c-4267-8436-22d028c4db66/nova-scheduler-scheduler/0.log" Feb 28 14:56:14 crc kubenswrapper[4897]: I0228 14:56:14.395091 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d7f297ea-652d-47ae-9831-fad10c6127ad/galera/0.log" Feb 28 14:56:14 crc kubenswrapper[4897]: I0228 14:56:14.401813 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d7f297ea-652d-47ae-9831-fad10c6127ad/mysql-bootstrap/0.log" Feb 28 14:56:14 crc kubenswrapper[4897]: I0228 14:56:14.565968 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_db99e06f-c263-4aef-b5c2-330eaed29fd4/mysql-bootstrap/0.log" Feb 28 14:56:14 crc kubenswrapper[4897]: I0228 14:56:14.830837 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_db99e06f-c263-4aef-b5c2-330eaed29fd4/mysql-bootstrap/0.log" Feb 28 14:56:14 crc kubenswrapper[4897]: I0228 14:56:14.838718 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_db99e06f-c263-4aef-b5c2-330eaed29fd4/galera/0.log" Feb 28 14:56:15 crc kubenswrapper[4897]: I0228 14:56:15.003104 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_768007b3-82d1-4b63-b96f-4d8797b46acc/openstackclient/0.log" Feb 28 14:56:15 crc kubenswrapper[4897]: I0228 14:56:15.029453 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-jsdwb_cd2fa5a5-caab-4d3d-8324-f6107d50f59f/ovn-controller/0.log" Feb 28 14:56:15 crc kubenswrapper[4897]: I0228 14:56:15.208840 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-598bf_5ab588f4-9fad-44d6-a7e2-2e99b19ef285/openstack-network-exporter/0.log" Feb 28 14:56:15 crc kubenswrapper[4897]: I0228 14:56:15.411887 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ch9bl_995bc563-52dc-4755-b43f-96a2746d8bce/ovsdb-server-init/0.log" Feb 28 14:56:15 crc kubenswrapper[4897]: I0228 14:56:15.614440 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ch9bl_995bc563-52dc-4755-b43f-96a2746d8bce/ovsdb-server-init/0.log" Feb 28 14:56:15 crc kubenswrapper[4897]: I0228 14:56:15.637341 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ch9bl_995bc563-52dc-4755-b43f-96a2746d8bce/ovsdb-server/0.log" Feb 28 14:56:15 crc kubenswrapper[4897]: I0228 14:56:15.948179 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-vskr8_ccec52af-4ae3-42de-bead-6b28a6e8c739/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 14:56:15 crc kubenswrapper[4897]: I0228 14:56:15.985014 4897 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_151132bb-bcf9-4d40-a72b-5f6b80c23fb1/nova-metadata-metadata/0.log" Feb 28 14:56:16 crc kubenswrapper[4897]: I0228 14:56:16.037548 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ch9bl_995bc563-52dc-4755-b43f-96a2746d8bce/ovs-vswitchd/0.log" Feb 28 14:56:16 crc kubenswrapper[4897]: I0228 14:56:16.144896 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f3afe36e-988c-4fca-8ca8-c24353046ea7/openstack-network-exporter/0.log" Feb 28 14:56:16 crc kubenswrapper[4897]: I0228 14:56:16.253506 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_03ffdd06-e63d-4a43-96f0-92e2d0e3a89d/openstack-network-exporter/0.log" Feb 28 14:56:16 crc kubenswrapper[4897]: I0228 14:56:16.279985 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f3afe36e-988c-4fca-8ca8-c24353046ea7/ovn-northd/0.log" Feb 28 14:56:16 crc kubenswrapper[4897]: I0228 14:56:16.378695 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_03ffdd06-e63d-4a43-96f0-92e2d0e3a89d/ovsdbserver-nb/0.log" Feb 28 14:56:16 crc kubenswrapper[4897]: I0228 14:56:16.448009 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_48d78132-b30d-4c29-8137-7af1597f8cc6/openstack-network-exporter/0.log" Feb 28 14:56:16 crc kubenswrapper[4897]: I0228 14:56:16.562351 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_48d78132-b30d-4c29-8137-7af1597f8cc6/ovsdbserver-sb/0.log" Feb 28 14:56:16 crc kubenswrapper[4897]: I0228 14:56:16.894620 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-778b749bdb-bmqwf_a2f1a9fc-a42b-488a-a7a6-207157fd1205/placement-api/0.log" Feb 28 14:56:16 crc kubenswrapper[4897]: I0228 14:56:16.910709 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_placement-778b749bdb-bmqwf_a2f1a9fc-a42b-488a-a7a6-207157fd1205/placement-log/0.log" Feb 28 14:56:16 crc kubenswrapper[4897]: I0228 14:56:16.914186 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6b56bf6f-f92e-4b96-a449-597cee08338d/init-config-reloader/0.log" Feb 28 14:56:17 crc kubenswrapper[4897]: I0228 14:56:17.026323 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6b56bf6f-f92e-4b96-a449-597cee08338d/init-config-reloader/0.log" Feb 28 14:56:17 crc kubenswrapper[4897]: I0228 14:56:17.150451 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6b56bf6f-f92e-4b96-a449-597cee08338d/config-reloader/0.log" Feb 28 14:56:17 crc kubenswrapper[4897]: I0228 14:56:17.159869 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6b56bf6f-f92e-4b96-a449-597cee08338d/prometheus/0.log" Feb 28 14:56:17 crc kubenswrapper[4897]: I0228 14:56:17.160015 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6b56bf6f-f92e-4b96-a449-597cee08338d/thanos-sidecar/0.log" Feb 28 14:56:17 crc kubenswrapper[4897]: I0228 14:56:17.397130 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_59883b9c-0fbf-4d9e-84ee-f9456a6f13aa/setup-container/0.log" Feb 28 14:56:17 crc kubenswrapper[4897]: I0228 14:56:17.513642 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_59883b9c-0fbf-4d9e-84ee-f9456a6f13aa/rabbitmq/0.log" Feb 28 14:56:17 crc kubenswrapper[4897]: I0228 14:56:17.576755 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_59883b9c-0fbf-4d9e-84ee-f9456a6f13aa/setup-container/0.log" Feb 28 14:56:17 crc kubenswrapper[4897]: I0228 14:56:17.623129 4897 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_rabbitmq-server-0_0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd/setup-container/0.log" Feb 28 14:56:17 crc kubenswrapper[4897]: I0228 14:56:17.846815 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd/rabbitmq/0.log" Feb 28 14:56:17 crc kubenswrapper[4897]: I0228 14:56:17.862663 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd/setup-container/0.log" Feb 28 14:56:17 crc kubenswrapper[4897]: I0228 14:56:17.908741 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-vxmr9_0b6d041b-3a22-45fa-bd9e-33dea9dc98aa/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 14:56:18 crc kubenswrapper[4897]: I0228 14:56:18.170703 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-t9clw_368bd0f8-b828-44ed-a605-3aabab81c9c1/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 14:56:18 crc kubenswrapper[4897]: I0228 14:56:18.814888 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-lr95h_3ec9b581-f18e-4ae6-b520-c19ecfc75ab3/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 14:56:18 crc kubenswrapper[4897]: I0228 14:56:18.885190 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-ccmbs_bfdbf8bc-0180-406e-884b-cfd88b6ae1a3/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 14:56:19 crc kubenswrapper[4897]: I0228 14:56:19.017833 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-vrbrt_0a198568-b27e-4e65-bc3f-6b70f3184b6b/ssh-known-hosts-edpm-deployment/0.log" Feb 28 14:56:19 crc kubenswrapper[4897]: I0228 14:56:19.128032 4897 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_swift-proxy-7765f74f9-bjr4m_2ea92bb0-3068-4ffe-b85c-ce041cc1911e/proxy-server/0.log" Feb 28 14:56:19 crc kubenswrapper[4897]: I0228 14:56:19.327035 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-gpcgs_41910cc3-f0b4-4e6d-9c2e-562794444c84/swift-ring-rebalance/0.log" Feb 28 14:56:19 crc kubenswrapper[4897]: I0228 14:56:19.347084 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7765f74f9-bjr4m_2ea92bb0-3068-4ffe-b85c-ce041cc1911e/proxy-httpd/0.log" Feb 28 14:56:19 crc kubenswrapper[4897]: I0228 14:56:19.546535 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e07793a7-3e98-4a8d-bfb6-3c630f07d391/account-auditor/0.log" Feb 28 14:56:19 crc kubenswrapper[4897]: I0228 14:56:19.558979 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e07793a7-3e98-4a8d-bfb6-3c630f07d391/account-reaper/0.log" Feb 28 14:56:19 crc kubenswrapper[4897]: I0228 14:56:19.635299 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e07793a7-3e98-4a8d-bfb6-3c630f07d391/account-replicator/0.log" Feb 28 14:56:19 crc kubenswrapper[4897]: I0228 14:56:19.698189 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e07793a7-3e98-4a8d-bfb6-3c630f07d391/account-server/0.log" Feb 28 14:56:19 crc kubenswrapper[4897]: I0228 14:56:19.763042 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e07793a7-3e98-4a8d-bfb6-3c630f07d391/container-auditor/0.log" Feb 28 14:56:19 crc kubenswrapper[4897]: I0228 14:56:19.830246 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e07793a7-3e98-4a8d-bfb6-3c630f07d391/container-replicator/0.log" Feb 28 14:56:19 crc kubenswrapper[4897]: I0228 14:56:19.860853 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_e07793a7-3e98-4a8d-bfb6-3c630f07d391/container-server/0.log" Feb 28 14:56:20 crc kubenswrapper[4897]: I0228 14:56:20.404832 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e07793a7-3e98-4a8d-bfb6-3c630f07d391/object-expirer/0.log" Feb 28 14:56:20 crc kubenswrapper[4897]: I0228 14:56:20.441418 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e07793a7-3e98-4a8d-bfb6-3c630f07d391/container-updater/0.log" Feb 28 14:56:20 crc kubenswrapper[4897]: I0228 14:56:20.465130 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e07793a7-3e98-4a8d-bfb6-3c630f07d391/object-replicator/0.log" Feb 28 14:56:20 crc kubenswrapper[4897]: I0228 14:56:20.530541 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e07793a7-3e98-4a8d-bfb6-3c630f07d391/object-auditor/0.log" Feb 28 14:56:20 crc kubenswrapper[4897]: I0228 14:56:20.653625 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e07793a7-3e98-4a8d-bfb6-3c630f07d391/object-server/0.log" Feb 28 14:56:20 crc kubenswrapper[4897]: I0228 14:56:20.667996 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e07793a7-3e98-4a8d-bfb6-3c630f07d391/rsync/0.log" Feb 28 14:56:20 crc kubenswrapper[4897]: I0228 14:56:20.683603 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e07793a7-3e98-4a8d-bfb6-3c630f07d391/object-updater/0.log" Feb 28 14:56:20 crc kubenswrapper[4897]: I0228 14:56:20.746552 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e07793a7-3e98-4a8d-bfb6-3c630f07d391/swift-recon-cron/0.log" Feb 28 14:56:20 crc kubenswrapper[4897]: I0228 14:56:20.834015 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_memcached-0_f032f5e9-4992-4586-bd47-0c3da76ecf40/memcached/0.log" Feb 28 14:56:20 crc kubenswrapper[4897]: I0228 14:56:20.923834 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb_8356fe56-9405-43be-8d6e-3d71c9906864/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 14:56:20 crc kubenswrapper[4897]: I0228 14:56:20.984655 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_49f3154b-02e1-4da4-a498-58e7280a8a64/tempest-tests-tempest-tests-runner/0.log" Feb 28 14:56:21 crc kubenswrapper[4897]: I0228 14:56:21.094902 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_9b32d426-3313-4f78-9baa-92b8717b8d8e/test-operator-logs-container/0.log" Feb 28 14:56:21 crc kubenswrapper[4897]: I0228 14:56:21.198593 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-9j6ds_81fd26ee-0f11-49a1-863c-86aefccd7f6d/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 14:56:21 crc kubenswrapper[4897]: I0228 14:56:21.456143 4897 scope.go:117] "RemoveContainer" containerID="eeaafc379056961bcfeca41078c7ab438aa415eb103c6d894f500936af0c4d8e" Feb 28 14:56:21 crc kubenswrapper[4897]: E0228 14:56:21.456416 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:56:21 crc kubenswrapper[4897]: I0228 14:56:21.984557 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_watcher-applier-0_9f2ebd5f-fa7e-4ca3-9bd9-4b54c05f8060/watcher-applier/0.log" Feb 28 14:56:22 crc kubenswrapper[4897]: I0228 14:56:22.398750 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_f7a66d06-fda4-4801-8a7e-24acf64224ac/watcher-api-log/0.log" Feb 28 14:56:24 crc kubenswrapper[4897]: I0228 14:56:24.371244 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_f31b98f7-e894-4ba1-99d0-c9f4dfe066a9/watcher-decision-engine/0.log" Feb 28 14:56:25 crc kubenswrapper[4897]: I0228 14:56:25.056248 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_f7a66d06-fda4-4801-8a7e-24acf64224ac/watcher-api/0.log" Feb 28 14:56:29 crc kubenswrapper[4897]: I0228 14:56:29.787590 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="db99e06f-c263-4aef-b5c2-330eaed29fd4" containerName="galera" probeResult="failure" output="command timed out" Feb 28 14:56:29 crc kubenswrapper[4897]: I0228 14:56:29.788400 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="db99e06f-c263-4aef-b5c2-330eaed29fd4" containerName="galera" probeResult="failure" output="command timed out" Feb 28 14:56:33 crc kubenswrapper[4897]: I0228 14:56:33.455946 4897 scope.go:117] "RemoveContainer" containerID="eeaafc379056961bcfeca41078c7ab438aa415eb103c6d894f500936af0c4d8e" Feb 28 14:56:33 crc kubenswrapper[4897]: E0228 14:56:33.456686 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:56:45 crc 
kubenswrapper[4897]: I0228 14:56:45.456494 4897 scope.go:117] "RemoveContainer" containerID="eeaafc379056961bcfeca41078c7ab438aa415eb103c6d894f500936af0c4d8e" Feb 28 14:56:45 crc kubenswrapper[4897]: E0228 14:56:45.457540 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:56:53 crc kubenswrapper[4897]: I0228 14:56:53.729786 4897 scope.go:117] "RemoveContainer" containerID="314e8ae253181750329ef70cccd577ea25baf610d6401ffe5a076fee22ea987f" Feb 28 14:56:54 crc kubenswrapper[4897]: I0228 14:56:54.510195 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8_18614093-3dcd-426c-8821-d04f854a475c/util/0.log" Feb 28 14:56:54 crc kubenswrapper[4897]: I0228 14:56:54.718598 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8_18614093-3dcd-426c-8821-d04f854a475c/util/0.log" Feb 28 14:56:54 crc kubenswrapper[4897]: I0228 14:56:54.740065 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8_18614093-3dcd-426c-8821-d04f854a475c/pull/0.log" Feb 28 14:56:54 crc kubenswrapper[4897]: I0228 14:56:54.752154 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8_18614093-3dcd-426c-8821-d04f854a475c/pull/0.log" Feb 28 14:56:54 crc kubenswrapper[4897]: I0228 14:56:54.952153 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8_18614093-3dcd-426c-8821-d04f854a475c/pull/0.log" Feb 28 14:56:55 crc kubenswrapper[4897]: I0228 14:56:55.026399 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8_18614093-3dcd-426c-8821-d04f854a475c/util/0.log" Feb 28 14:56:55 crc kubenswrapper[4897]: I0228 14:56:55.046191 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8_18614093-3dcd-426c-8821-d04f854a475c/extract/0.log" Feb 28 14:56:55 crc kubenswrapper[4897]: I0228 14:56:55.525246 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-5d87c9d997-hgfm4_a78107ef-804f-476a-98f4-195f52927c3d/manager/0.log" Feb 28 14:56:55 crc kubenswrapper[4897]: I0228 14:56:55.848800 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-64db6967f8-4tvzl_5863afa6-053e-4d6c-899e-c31dcc30dcf3/manager/0.log" Feb 28 14:56:56 crc kubenswrapper[4897]: I0228 14:56:56.000028 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-cf99c678f-pjmt7_e7498ffc-cb24-44e8-b0cb-4ada46db9e4c/manager/0.log" Feb 28 14:56:56 crc kubenswrapper[4897]: I0228 14:56:56.219524 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-78bc7f9bd9-qjg9q_cf8aae65-a739-4ab3-8208-ae8ac4ed0671/manager/0.log" Feb 28 14:56:56 crc kubenswrapper[4897]: I0228 14:56:56.782556 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-545456dc4-cfsb9_507c84e1-3826-47ad-93f4-c2d6d726f8b7/manager/0.log" Feb 28 14:56:57 crc kubenswrapper[4897]: I0228 14:56:57.071187 4897 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-f7fcc58b9-bb7d9_3bfb71f8-fd2c-4730-af54-601ec4daebaf/manager/0.log" Feb 28 14:56:57 crc kubenswrapper[4897]: I0228 14:56:57.340739 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7c789f89c6-fm9lk_30b14df1-8f3e-427c-b6d9-eb8aeb192213/manager/0.log" Feb 28 14:56:57 crc kubenswrapper[4897]: I0228 14:56:57.515606 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-67d996989d-h65l6_c90ec355-3eb2-43e5-9a39-eed72bb46d1b/manager/0.log" Feb 28 14:56:57 crc kubenswrapper[4897]: I0228 14:56:57.645880 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-55d77d7b5c-d8psr_5ef2847d-3e11-419b-b34c-3f4cb5643af9/manager/0.log" Feb 28 14:56:57 crc kubenswrapper[4897]: I0228 14:56:57.813923 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-7b6bfb6475-6xfvp_30810ec7-8325-4bde-aa9d-ff905addb474/manager/0.log" Feb 28 14:56:57 crc kubenswrapper[4897]: I0228 14:56:57.939463 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-54688575f-7lr7s_3664d59e-945d-4eb5-9443-296e206a1081/manager/0.log" Feb 28 14:56:58 crc kubenswrapper[4897]: I0228 14:56:58.188473 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5d86c7ddb7-wrf59_b237a99b-2fe2-4804-880b-03494df684d2/manager/0.log" Feb 28 14:56:58 crc kubenswrapper[4897]: I0228 14:56:58.209484 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-74b6b5dc96-wqgwr_a9935d62-a205-4294-a124-313a8437c1ab/manager/0.log" Feb 28 14:56:58 crc kubenswrapper[4897]: I0228 
14:56:58.443059 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9c5bszd_fe6be473-8403-4c9d-abf6-a7a0251326f9/manager/0.log" Feb 28 14:56:58 crc kubenswrapper[4897]: I0228 14:56:58.619413 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-58b8f68975-4gtm4_f3e65b5d-7974-4323-92f1-50f5dbc0fe11/operator/0.log" Feb 28 14:56:58 crc kubenswrapper[4897]: I0228 14:56:58.735368 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-9tgxh_e5918346-7c71-4d39-985f-c8893e107670/registry-server/0.log" Feb 28 14:56:59 crc kubenswrapper[4897]: I0228 14:56:59.195756 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-75684d597f-p6pbb_3d635198-c21d-4d2e-9393-ad9b6cdf462f/manager/0.log" Feb 28 14:56:59 crc kubenswrapper[4897]: I0228 14:56:59.260049 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-648564c9fc-hkkvm_c434bb35-55df-45b5-9eeb-ab9913f3fd5e/manager/0.log" Feb 28 14:56:59 crc kubenswrapper[4897]: I0228 14:56:59.436528 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-w6zhg_216a4a66-0783-4b6c-9884-370bd3a001a4/operator/0.log" Feb 28 14:56:59 crc kubenswrapper[4897]: I0228 14:56:59.627775 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9b9ff9f4d-dnpdj_8c8044b8-c803-4b5b-916f-34c0c03ab619/manager/0.log" Feb 28 14:56:59 crc kubenswrapper[4897]: I0228 14:56:59.943348 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-55b5ff4dbb-v7qbm_37408ab3-7514-42a0-92e8-6c2a2710b9f0/manager/0.log" Feb 28 14:56:59 crc kubenswrapper[4897]: 
I0228 14:56:59.975235 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5fdb694969-6r8pc_f5e3f361-0ca8-4a8f-8625-8ea90c292ac2/manager/0.log" Feb 28 14:57:00 crc kubenswrapper[4897]: I0228 14:57:00.368113 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-69dbd6f547-4ng5q_c25839b5-c34e-4865-a5ad-4e10355f1953/manager/0.log" Feb 28 14:57:00 crc kubenswrapper[4897]: I0228 14:57:00.456699 4897 scope.go:117] "RemoveContainer" containerID="eeaafc379056961bcfeca41078c7ab438aa415eb103c6d894f500936af0c4d8e" Feb 28 14:57:00 crc kubenswrapper[4897]: E0228 14:57:00.457158 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 14:57:00 crc kubenswrapper[4897]: I0228 14:57:00.606876 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6d8778d855-4x57f_6532860c-c344-4a74-9189-4382f4865b58/manager/0.log" Feb 28 14:57:04 crc kubenswrapper[4897]: I0228 14:57:04.617276 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6db6876945-96lzs_1d330dac-b70b-4af0-bfa0-1fba21022fb1/manager/0.log" Feb 28 14:57:12 crc kubenswrapper[4897]: I0228 14:57:12.456813 4897 scope.go:117] "RemoveContainer" containerID="eeaafc379056961bcfeca41078c7ab438aa415eb103c6d894f500936af0c4d8e" Feb 28 14:57:13 crc kubenswrapper[4897]: I0228 14:57:13.605852 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerStarted","Data":"56f81e527e46340803698674f23d31062429f04e874c4aa8357907f685c83acc"} Feb 28 14:57:22 crc kubenswrapper[4897]: I0228 14:57:22.929238 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-glzrp_49308413-0bd0-4aef-8d1b-451b077e6996/control-plane-machine-set-operator/0.log" Feb 28 14:57:23 crc kubenswrapper[4897]: I0228 14:57:23.102882 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-zkvs9_df2319dd-b85c-4542-bf25-8233ecda9d78/kube-rbac-proxy/0.log" Feb 28 14:57:23 crc kubenswrapper[4897]: I0228 14:57:23.194249 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-zkvs9_df2319dd-b85c-4542-bf25-8233ecda9d78/machine-api-operator/0.log" Feb 28 14:57:38 crc kubenswrapper[4897]: I0228 14:57:38.382438 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-ld6gk_575a7b09-2bc9-458a-bdbc-169241a67869/cert-manager-controller/0.log" Feb 28 14:57:38 crc kubenswrapper[4897]: I0228 14:57:38.513541 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-f4grq_b868e69f-c259-4f0e-9f12-7b0be2e26d03/cert-manager-cainjector/0.log" Feb 28 14:57:38 crc kubenswrapper[4897]: I0228 14:57:38.659133 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-5vvcp_80a798fa-b6e2-4063-95a5-56c55dec24b0/cert-manager-webhook/0.log" Feb 28 14:57:53 crc kubenswrapper[4897]: I0228 14:57:53.050434 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5dcbbd79cf-dmxhv_0b30e3b3-0280-45c0-ad26-00ab9dff49ce/nmstate-console-plugin/0.log" Feb 28 14:57:53 crc 
kubenswrapper[4897]: I0228 14:57:53.212170 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-w8lgm_b1e7c059-1db9-417a-8bd9-b5157303f3af/nmstate-handler/0.log" Feb 28 14:57:53 crc kubenswrapper[4897]: I0228 14:57:53.287252 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-64kn2_a16c5c73-6515-4d5b-898e-aa6d3940f0b1/kube-rbac-proxy/0.log" Feb 28 14:57:53 crc kubenswrapper[4897]: I0228 14:57:53.323822 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-64kn2_a16c5c73-6515-4d5b-898e-aa6d3940f0b1/nmstate-metrics/0.log" Feb 28 14:57:53 crc kubenswrapper[4897]: I0228 14:57:53.456847 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-75c5dccd6c-468lb_ce9efcef-4478-4127-a41e-9e9960084a46/nmstate-operator/0.log" Feb 28 14:57:53 crc kubenswrapper[4897]: I0228 14:57:53.532888 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-786f45cff4-qtkdc_5ae61471-c126-4bb0-b7c5-1b56f1686ecc/nmstate-webhook/0.log" Feb 28 14:58:00 crc kubenswrapper[4897]: I0228 14:58:00.181038 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538178-zztsf"] Feb 28 14:58:00 crc kubenswrapper[4897]: E0228 14:58:00.182414 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="614e76f9-7f4a-4075-bb88-e9b651ef700f" containerName="oc" Feb 28 14:58:00 crc kubenswrapper[4897]: I0228 14:58:00.182438 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="614e76f9-7f4a-4075-bb88-e9b651ef700f" containerName="oc" Feb 28 14:58:00 crc kubenswrapper[4897]: I0228 14:58:00.182817 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="614e76f9-7f4a-4075-bb88-e9b651ef700f" containerName="oc" Feb 28 14:58:00 crc kubenswrapper[4897]: I0228 14:58:00.184015 4897 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538178-zztsf" Feb 28 14:58:00 crc kubenswrapper[4897]: I0228 14:58:00.187086 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 14:58:00 crc kubenswrapper[4897]: I0228 14:58:00.187264 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 14:58:00 crc kubenswrapper[4897]: I0228 14:58:00.187471 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 14:58:00 crc kubenswrapper[4897]: I0228 14:58:00.195489 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538178-zztsf"] Feb 28 14:58:00 crc kubenswrapper[4897]: I0228 14:58:00.345920 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb7b8\" (UniqueName: \"kubernetes.io/projected/300d4721-b640-4652-b2d2-9a3370f74c09-kube-api-access-jb7b8\") pod \"auto-csr-approver-29538178-zztsf\" (UID: \"300d4721-b640-4652-b2d2-9a3370f74c09\") " pod="openshift-infra/auto-csr-approver-29538178-zztsf" Feb 28 14:58:00 crc kubenswrapper[4897]: I0228 14:58:00.449201 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb7b8\" (UniqueName: \"kubernetes.io/projected/300d4721-b640-4652-b2d2-9a3370f74c09-kube-api-access-jb7b8\") pod \"auto-csr-approver-29538178-zztsf\" (UID: \"300d4721-b640-4652-b2d2-9a3370f74c09\") " pod="openshift-infra/auto-csr-approver-29538178-zztsf" Feb 28 14:58:00 crc kubenswrapper[4897]: I0228 14:58:00.477083 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb7b8\" (UniqueName: \"kubernetes.io/projected/300d4721-b640-4652-b2d2-9a3370f74c09-kube-api-access-jb7b8\") pod \"auto-csr-approver-29538178-zztsf\" (UID: \"300d4721-b640-4652-b2d2-9a3370f74c09\") " 
pod="openshift-infra/auto-csr-approver-29538178-zztsf" Feb 28 14:58:00 crc kubenswrapper[4897]: I0228 14:58:00.504235 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538178-zztsf" Feb 28 14:58:01 crc kubenswrapper[4897]: I0228 14:58:01.015140 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538178-zztsf"] Feb 28 14:58:01 crc kubenswrapper[4897]: I0228 14:58:01.127394 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538178-zztsf" event={"ID":"300d4721-b640-4652-b2d2-9a3370f74c09","Type":"ContainerStarted","Data":"550258cd4fef1cbd24905e9fd1d335887e4859776ca3e3c8d7e5c12e6ecca018"} Feb 28 14:58:08 crc kubenswrapper[4897]: I0228 14:58:08.635020 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-w78wk_13c34d90-e126-4392-9f0d-31436773d681/prometheus-operator/0.log" Feb 28 14:58:08 crc kubenswrapper[4897]: I0228 14:58:08.763456 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-767759c544-pwwvk_77de0da5-c400-4927-bd0f-15d2ba642291/prometheus-operator-admission-webhook/0.log" Feb 28 14:58:08 crc kubenswrapper[4897]: I0228 14:58:08.803983 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-767759c544-sq8hd_c28960c4-dba8-4bc2-8695-13bc86523823/prometheus-operator-admission-webhook/0.log" Feb 28 14:58:08 crc kubenswrapper[4897]: I0228 14:58:08.940861 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-qkkz2_b1a1168a-8c63-4e9c-aefc-732c90395b55/operator/0.log" Feb 28 14:58:08 crc kubenswrapper[4897]: I0228 14:58:08.989318 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-tr862_799fd3ea-6ae8-4568-a69b-3e8c2a706b76/perses-operator/0.log" Feb 28 14:58:24 crc kubenswrapper[4897]: I0228 14:58:24.417964 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538178-zztsf" event={"ID":"300d4721-b640-4652-b2d2-9a3370f74c09","Type":"ContainerStarted","Data":"76c689ad5fcef6a7fc072bd442576584522db302420fa6d3a59aa7e51f4cf2cd"} Feb 28 14:58:24 crc kubenswrapper[4897]: I0228 14:58:24.430828 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29538178-zztsf" podStartSLOduration=1.494805118 podStartE2EDuration="24.430812707s" podCreationTimestamp="2026-02-28 14:58:00 +0000 UTC" firstStartedPulling="2026-02-28 14:58:01.011749269 +0000 UTC m=+6095.254069916" lastFinishedPulling="2026-02-28 14:58:23.947756848 +0000 UTC m=+6118.190077505" observedRunningTime="2026-02-28 14:58:24.429897042 +0000 UTC m=+6118.672217709" watchObservedRunningTime="2026-02-28 14:58:24.430812707 +0000 UTC m=+6118.673133364" Feb 28 14:58:25 crc kubenswrapper[4897]: I0228 14:58:25.155332 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-jz56q_5a9e2956-4bb5-4986-a8f0-a1a5bfd230ce/kube-rbac-proxy/0.log" Feb 28 14:58:25 crc kubenswrapper[4897]: I0228 14:58:25.192628 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-jz56q_5a9e2956-4bb5-4986-a8f0-a1a5bfd230ce/controller/0.log" Feb 28 14:58:25 crc kubenswrapper[4897]: I0228 14:58:25.336559 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mct2w_02f6fadd-b5a9-4d44-aba2-303ab05f15c6/cp-frr-files/0.log" Feb 28 14:58:25 crc kubenswrapper[4897]: I0228 14:58:25.426510 4897 generic.go:334] "Generic (PLEG): container finished" podID="300d4721-b640-4652-b2d2-9a3370f74c09" 
containerID="76c689ad5fcef6a7fc072bd442576584522db302420fa6d3a59aa7e51f4cf2cd" exitCode=0 Feb 28 14:58:25 crc kubenswrapper[4897]: I0228 14:58:25.426548 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538178-zztsf" event={"ID":"300d4721-b640-4652-b2d2-9a3370f74c09","Type":"ContainerDied","Data":"76c689ad5fcef6a7fc072bd442576584522db302420fa6d3a59aa7e51f4cf2cd"} Feb 28 14:58:25 crc kubenswrapper[4897]: I0228 14:58:25.571799 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mct2w_02f6fadd-b5a9-4d44-aba2-303ab05f15c6/cp-reloader/0.log" Feb 28 14:58:25 crc kubenswrapper[4897]: I0228 14:58:25.583256 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mct2w_02f6fadd-b5a9-4d44-aba2-303ab05f15c6/cp-reloader/0.log" Feb 28 14:58:25 crc kubenswrapper[4897]: I0228 14:58:25.600232 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mct2w_02f6fadd-b5a9-4d44-aba2-303ab05f15c6/cp-metrics/0.log" Feb 28 14:58:25 crc kubenswrapper[4897]: I0228 14:58:25.625121 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mct2w_02f6fadd-b5a9-4d44-aba2-303ab05f15c6/cp-frr-files/0.log" Feb 28 14:58:25 crc kubenswrapper[4897]: I0228 14:58:25.786627 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mct2w_02f6fadd-b5a9-4d44-aba2-303ab05f15c6/cp-reloader/0.log" Feb 28 14:58:25 crc kubenswrapper[4897]: I0228 14:58:25.797772 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mct2w_02f6fadd-b5a9-4d44-aba2-303ab05f15c6/cp-metrics/0.log" Feb 28 14:58:25 crc kubenswrapper[4897]: I0228 14:58:25.857646 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mct2w_02f6fadd-b5a9-4d44-aba2-303ab05f15c6/cp-frr-files/0.log" Feb 28 14:58:25 crc kubenswrapper[4897]: I0228 14:58:25.866789 4897 log.go:25] "Finished parsing log 
file" path="/var/log/pods/metallb-system_frr-k8s-mct2w_02f6fadd-b5a9-4d44-aba2-303ab05f15c6/cp-metrics/0.log" Feb 28 14:58:26 crc kubenswrapper[4897]: I0228 14:58:26.069954 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mct2w_02f6fadd-b5a9-4d44-aba2-303ab05f15c6/cp-frr-files/0.log" Feb 28 14:58:26 crc kubenswrapper[4897]: I0228 14:58:26.117148 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mct2w_02f6fadd-b5a9-4d44-aba2-303ab05f15c6/controller/0.log" Feb 28 14:58:26 crc kubenswrapper[4897]: I0228 14:58:26.144026 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mct2w_02f6fadd-b5a9-4d44-aba2-303ab05f15c6/cp-metrics/0.log" Feb 28 14:58:26 crc kubenswrapper[4897]: I0228 14:58:26.146957 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mct2w_02f6fadd-b5a9-4d44-aba2-303ab05f15c6/cp-reloader/0.log" Feb 28 14:58:26 crc kubenswrapper[4897]: I0228 14:58:26.314665 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mct2w_02f6fadd-b5a9-4d44-aba2-303ab05f15c6/frr-metrics/0.log" Feb 28 14:58:26 crc kubenswrapper[4897]: I0228 14:58:26.318488 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mct2w_02f6fadd-b5a9-4d44-aba2-303ab05f15c6/kube-rbac-proxy/0.log" Feb 28 14:58:26 crc kubenswrapper[4897]: I0228 14:58:26.359580 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mct2w_02f6fadd-b5a9-4d44-aba2-303ab05f15c6/kube-rbac-proxy-frr/0.log" Feb 28 14:58:26 crc kubenswrapper[4897]: I0228 14:58:26.505993 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mct2w_02f6fadd-b5a9-4d44-aba2-303ab05f15c6/reloader/0.log" Feb 28 14:58:26 crc kubenswrapper[4897]: I0228 14:58:26.637667 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7f989f654f-m4dnz_6019677c-387b-4cb8-9c0f-4607f2b5971c/frr-k8s-webhook-server/0.log" Feb 28 14:58:26 crc kubenswrapper[4897]: I0228 14:58:26.801800 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538178-zztsf" Feb 28 14:58:26 crc kubenswrapper[4897]: I0228 14:58:26.937695 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jb7b8\" (UniqueName: \"kubernetes.io/projected/300d4721-b640-4652-b2d2-9a3370f74c09-kube-api-access-jb7b8\") pod \"300d4721-b640-4652-b2d2-9a3370f74c09\" (UID: \"300d4721-b640-4652-b2d2-9a3370f74c09\") " Feb 28 14:58:26 crc kubenswrapper[4897]: I0228 14:58:26.941110 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7996b9d6bf-xmdxr_1c3404c1-8c8b-4cf9-89dd-8f370ad776e2/manager/0.log" Feb 28 14:58:26 crc kubenswrapper[4897]: I0228 14:58:26.954473 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/300d4721-b640-4652-b2d2-9a3370f74c09-kube-api-access-jb7b8" (OuterVolumeSpecName: "kube-api-access-jb7b8") pod "300d4721-b640-4652-b2d2-9a3370f74c09" (UID: "300d4721-b640-4652-b2d2-9a3370f74c09"). InnerVolumeSpecName "kube-api-access-jb7b8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:58:27 crc kubenswrapper[4897]: I0228 14:58:27.040209 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jb7b8\" (UniqueName: \"kubernetes.io/projected/300d4721-b640-4652-b2d2-9a3370f74c09-kube-api-access-jb7b8\") on node \"crc\" DevicePath \"\"" Feb 28 14:58:27 crc kubenswrapper[4897]: I0228 14:58:27.140986 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-cc84c5f94-tk95x_3efe124f-7df2-4c2b-ad84-f8674f4d4fb8/webhook-server/0.log" Feb 28 14:58:27 crc kubenswrapper[4897]: I0228 14:58:27.164693 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-xqdlt_f599a5af-52e7-429e-9159-2959003096c7/kube-rbac-proxy/0.log" Feb 28 14:58:27 crc kubenswrapper[4897]: I0228 14:58:27.447153 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538178-zztsf" event={"ID":"300d4721-b640-4652-b2d2-9a3370f74c09","Type":"ContainerDied","Data":"550258cd4fef1cbd24905e9fd1d335887e4859776ca3e3c8d7e5c12e6ecca018"} Feb 28 14:58:27 crc kubenswrapper[4897]: I0228 14:58:27.447190 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="550258cd4fef1cbd24905e9fd1d335887e4859776ca3e3c8d7e5c12e6ecca018" Feb 28 14:58:27 crc kubenswrapper[4897]: I0228 14:58:27.447251 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538178-zztsf" Feb 28 14:58:27 crc kubenswrapper[4897]: I0228 14:58:27.507954 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538172-vlj6b"] Feb 28 14:58:27 crc kubenswrapper[4897]: I0228 14:58:27.527248 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538172-vlj6b"] Feb 28 14:58:27 crc kubenswrapper[4897]: I0228 14:58:27.778758 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-xqdlt_f599a5af-52e7-429e-9159-2959003096c7/speaker/0.log" Feb 28 14:58:28 crc kubenswrapper[4897]: I0228 14:58:28.374659 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mct2w_02f6fadd-b5a9-4d44-aba2-303ab05f15c6/frr/0.log" Feb 28 14:58:28 crc kubenswrapper[4897]: I0228 14:58:28.470776 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c1940a-ce44-4706-bcff-213ac2986225" path="/var/lib/kubelet/pods/49c1940a-ce44-4706-bcff-213ac2986225/volumes" Feb 28 14:58:42 crc kubenswrapper[4897]: I0228 14:58:42.399565 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt_fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641/util/0.log" Feb 28 14:58:42 crc kubenswrapper[4897]: I0228 14:58:42.592652 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt_fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641/pull/0.log" Feb 28 14:58:42 crc kubenswrapper[4897]: I0228 14:58:42.617856 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt_fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641/pull/0.log" Feb 28 14:58:42 crc kubenswrapper[4897]: I0228 14:58:42.622440 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt_fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641/util/0.log" Feb 28 14:58:42 crc kubenswrapper[4897]: I0228 14:58:42.787713 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt_fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641/pull/0.log" Feb 28 14:58:42 crc kubenswrapper[4897]: I0228 14:58:42.810467 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt_fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641/util/0.log" Feb 28 14:58:42 crc kubenswrapper[4897]: I0228 14:58:42.830730 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt_fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641/extract/0.log" Feb 28 14:58:42 crc kubenswrapper[4897]: I0228 14:58:42.974007 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf_10011d40-3da8-4a8f-b650-17d5bcbd7f8a/util/0.log" Feb 28 14:58:43 crc kubenswrapper[4897]: I0228 14:58:43.164178 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf_10011d40-3da8-4a8f-b650-17d5bcbd7f8a/util/0.log" Feb 28 14:58:43 crc kubenswrapper[4897]: I0228 14:58:43.169685 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf_10011d40-3da8-4a8f-b650-17d5bcbd7f8a/pull/0.log" Feb 28 14:58:43 crc kubenswrapper[4897]: I0228 14:58:43.170064 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf_10011d40-3da8-4a8f-b650-17d5bcbd7f8a/pull/0.log" Feb 28 
14:58:43 crc kubenswrapper[4897]: I0228 14:58:43.350072 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf_10011d40-3da8-4a8f-b650-17d5bcbd7f8a/util/0.log" Feb 28 14:58:43 crc kubenswrapper[4897]: I0228 14:58:43.360490 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf_10011d40-3da8-4a8f-b650-17d5bcbd7f8a/pull/0.log" Feb 28 14:58:43 crc kubenswrapper[4897]: I0228 14:58:43.409832 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf_10011d40-3da8-4a8f-b650-17d5bcbd7f8a/extract/0.log" Feb 28 14:58:43 crc kubenswrapper[4897]: I0228 14:58:43.532673 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wjmzz_f60bfd3b-75e8-49ec-bc18-32660c88045d/extract-utilities/0.log" Feb 28 14:58:43 crc kubenswrapper[4897]: I0228 14:58:43.787190 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wjmzz_f60bfd3b-75e8-49ec-bc18-32660c88045d/extract-content/0.log" Feb 28 14:58:43 crc kubenswrapper[4897]: I0228 14:58:43.804419 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wjmzz_f60bfd3b-75e8-49ec-bc18-32660c88045d/extract-content/0.log" Feb 28 14:58:43 crc kubenswrapper[4897]: I0228 14:58:43.816414 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wjmzz_f60bfd3b-75e8-49ec-bc18-32660c88045d/extract-utilities/0.log" Feb 28 14:58:44 crc kubenswrapper[4897]: I0228 14:58:44.016512 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wjmzz_f60bfd3b-75e8-49ec-bc18-32660c88045d/extract-content/0.log" Feb 28 14:58:44 crc 
kubenswrapper[4897]: I0228 14:58:44.019649 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wjmzz_f60bfd3b-75e8-49ec-bc18-32660c88045d/extract-utilities/0.log" Feb 28 14:58:44 crc kubenswrapper[4897]: I0228 14:58:44.225536 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vrtf6_35856bb5-8436-497d-a4c1-2dac4df4a552/extract-utilities/0.log" Feb 28 14:58:44 crc kubenswrapper[4897]: I0228 14:58:44.411007 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vrtf6_35856bb5-8436-497d-a4c1-2dac4df4a552/extract-content/0.log" Feb 28 14:58:44 crc kubenswrapper[4897]: I0228 14:58:44.466948 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vrtf6_35856bb5-8436-497d-a4c1-2dac4df4a552/extract-utilities/0.log" Feb 28 14:58:44 crc kubenswrapper[4897]: I0228 14:58:44.484842 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vrtf6_35856bb5-8436-497d-a4c1-2dac4df4a552/extract-content/0.log" Feb 28 14:58:44 crc kubenswrapper[4897]: I0228 14:58:44.665069 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vrtf6_35856bb5-8436-497d-a4c1-2dac4df4a552/extract-content/0.log" Feb 28 14:58:44 crc kubenswrapper[4897]: I0228 14:58:44.690511 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vrtf6_35856bb5-8436-497d-a4c1-2dac4df4a552/extract-utilities/0.log" Feb 28 14:58:44 crc kubenswrapper[4897]: I0228 14:58:44.801036 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wjmzz_f60bfd3b-75e8-49ec-bc18-32660c88045d/registry-server/0.log" Feb 28 14:58:44 crc kubenswrapper[4897]: I0228 14:58:44.911056 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5_c76ed8b8-228d-4263-addb-9571183ab82d/util/0.log" Feb 28 14:58:45 crc kubenswrapper[4897]: I0228 14:58:45.140738 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5_c76ed8b8-228d-4263-addb-9571183ab82d/util/0.log" Feb 28 14:58:45 crc kubenswrapper[4897]: I0228 14:58:45.249055 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5_c76ed8b8-228d-4263-addb-9571183ab82d/pull/0.log" Feb 28 14:58:45 crc kubenswrapper[4897]: I0228 14:58:45.258746 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5_c76ed8b8-228d-4263-addb-9571183ab82d/pull/0.log" Feb 28 14:58:45 crc kubenswrapper[4897]: I0228 14:58:45.445723 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5_c76ed8b8-228d-4263-addb-9571183ab82d/util/0.log" Feb 28 14:58:45 crc kubenswrapper[4897]: I0228 14:58:45.479606 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5_c76ed8b8-228d-4263-addb-9571183ab82d/pull/0.log" Feb 28 14:58:45 crc kubenswrapper[4897]: I0228 14:58:45.501412 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5_c76ed8b8-228d-4263-addb-9571183ab82d/extract/0.log" Feb 28 14:58:45 crc kubenswrapper[4897]: I0228 14:58:45.559767 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vrtf6_35856bb5-8436-497d-a4c1-2dac4df4a552/registry-server/0.log" Feb 28 14:58:45 crc 
kubenswrapper[4897]: I0228 14:58:45.664497 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-b4nxz_b38ea4e8-edc9-4c30-8189-dbcc29bc677e/marketplace-operator/0.log" Feb 28 14:58:45 crc kubenswrapper[4897]: I0228 14:58:45.764366 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fhsw4_f72e233a-6e31-4ca5-b12e-3c4213a80ad6/extract-utilities/0.log" Feb 28 14:58:45 crc kubenswrapper[4897]: I0228 14:58:45.941123 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fhsw4_f72e233a-6e31-4ca5-b12e-3c4213a80ad6/extract-content/0.log" Feb 28 14:58:45 crc kubenswrapper[4897]: I0228 14:58:45.945645 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fhsw4_f72e233a-6e31-4ca5-b12e-3c4213a80ad6/extract-utilities/0.log" Feb 28 14:58:45 crc kubenswrapper[4897]: I0228 14:58:45.949755 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fhsw4_f72e233a-6e31-4ca5-b12e-3c4213a80ad6/extract-content/0.log" Feb 28 14:58:46 crc kubenswrapper[4897]: I0228 14:58:46.140894 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fhsw4_f72e233a-6e31-4ca5-b12e-3c4213a80ad6/extract-utilities/0.log" Feb 28 14:58:46 crc kubenswrapper[4897]: I0228 14:58:46.152117 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fhsw4_f72e233a-6e31-4ca5-b12e-3c4213a80ad6/extract-content/0.log" Feb 28 14:58:46 crc kubenswrapper[4897]: I0228 14:58:46.301948 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tr46p_54378322-d915-43c1-a3d9-837fd5b9121d/extract-utilities/0.log" Feb 28 14:58:46 crc kubenswrapper[4897]: I0228 14:58:46.365098 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-fhsw4_f72e233a-6e31-4ca5-b12e-3c4213a80ad6/registry-server/0.log" Feb 28 14:58:46 crc kubenswrapper[4897]: I0228 14:58:46.492675 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tr46p_54378322-d915-43c1-a3d9-837fd5b9121d/extract-utilities/0.log" Feb 28 14:58:46 crc kubenswrapper[4897]: I0228 14:58:46.503429 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tr46p_54378322-d915-43c1-a3d9-837fd5b9121d/extract-content/0.log" Feb 28 14:58:46 crc kubenswrapper[4897]: I0228 14:58:46.518578 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tr46p_54378322-d915-43c1-a3d9-837fd5b9121d/extract-content/0.log" Feb 28 14:58:46 crc kubenswrapper[4897]: I0228 14:58:46.687420 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tr46p_54378322-d915-43c1-a3d9-837fd5b9121d/extract-utilities/0.log" Feb 28 14:58:46 crc kubenswrapper[4897]: I0228 14:58:46.691202 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tr46p_54378322-d915-43c1-a3d9-837fd5b9121d/extract-content/0.log" Feb 28 14:58:47 crc kubenswrapper[4897]: I0228 14:58:47.428223 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tr46p_54378322-d915-43c1-a3d9-837fd5b9121d/registry-server/0.log" Feb 28 14:58:54 crc kubenswrapper[4897]: I0228 14:58:54.014438 4897 scope.go:117] "RemoveContainer" containerID="3b408f58627fc54ec009a9e35c89c2146358aee8d804310cab290d4183f91e93" Feb 28 14:59:01 crc kubenswrapper[4897]: I0228 14:59:01.272177 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-767759c544-pwwvk_77de0da5-c400-4927-bd0f-15d2ba642291/prometheus-operator-admission-webhook/0.log" Feb 28 
14:59:01 crc kubenswrapper[4897]: I0228 14:59:01.275255 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-w78wk_13c34d90-e126-4392-9f0d-31436773d681/prometheus-operator/0.log" Feb 28 14:59:01 crc kubenswrapper[4897]: I0228 14:59:01.275862 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-767759c544-sq8hd_c28960c4-dba8-4bc2-8695-13bc86523823/prometheus-operator-admission-webhook/0.log" Feb 28 14:59:01 crc kubenswrapper[4897]: I0228 14:59:01.462704 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-tr862_799fd3ea-6ae8-4568-a69b-3e8c2a706b76/perses-operator/0.log" Feb 28 14:59:01 crc kubenswrapper[4897]: I0228 14:59:01.467035 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-qkkz2_b1a1168a-8c63-4e9c-aefc-732c90395b55/operator/0.log" Feb 28 14:59:20 crc kubenswrapper[4897]: I0228 14:59:20.168999 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c292n"] Feb 28 14:59:20 crc kubenswrapper[4897]: E0228 14:59:20.169910 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="300d4721-b640-4652-b2d2-9a3370f74c09" containerName="oc" Feb 28 14:59:20 crc kubenswrapper[4897]: I0228 14:59:20.169924 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="300d4721-b640-4652-b2d2-9a3370f74c09" containerName="oc" Feb 28 14:59:20 crc kubenswrapper[4897]: I0228 14:59:20.170139 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="300d4721-b640-4652-b2d2-9a3370f74c09" containerName="oc" Feb 28 14:59:20 crc kubenswrapper[4897]: I0228 14:59:20.171616 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c292n" Feb 28 14:59:20 crc kubenswrapper[4897]: I0228 14:59:20.189541 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvzks\" (UniqueName: \"kubernetes.io/projected/0ee21f96-7709-457c-9ba4-20260bb1792d-kube-api-access-fvzks\") pod \"community-operators-c292n\" (UID: \"0ee21f96-7709-457c-9ba4-20260bb1792d\") " pod="openshift-marketplace/community-operators-c292n" Feb 28 14:59:20 crc kubenswrapper[4897]: I0228 14:59:20.189736 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ee21f96-7709-457c-9ba4-20260bb1792d-catalog-content\") pod \"community-operators-c292n\" (UID: \"0ee21f96-7709-457c-9ba4-20260bb1792d\") " pod="openshift-marketplace/community-operators-c292n" Feb 28 14:59:20 crc kubenswrapper[4897]: I0228 14:59:20.189772 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ee21f96-7709-457c-9ba4-20260bb1792d-utilities\") pod \"community-operators-c292n\" (UID: \"0ee21f96-7709-457c-9ba4-20260bb1792d\") " pod="openshift-marketplace/community-operators-c292n" Feb 28 14:59:20 crc kubenswrapper[4897]: I0228 14:59:20.200707 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c292n"] Feb 28 14:59:20 crc kubenswrapper[4897]: I0228 14:59:20.291244 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ee21f96-7709-457c-9ba4-20260bb1792d-catalog-content\") pod \"community-operators-c292n\" (UID: \"0ee21f96-7709-457c-9ba4-20260bb1792d\") " pod="openshift-marketplace/community-operators-c292n" Feb 28 14:59:20 crc kubenswrapper[4897]: I0228 14:59:20.291350 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ee21f96-7709-457c-9ba4-20260bb1792d-utilities\") pod \"community-operators-c292n\" (UID: \"0ee21f96-7709-457c-9ba4-20260bb1792d\") " pod="openshift-marketplace/community-operators-c292n" Feb 28 14:59:20 crc kubenswrapper[4897]: I0228 14:59:20.291383 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvzks\" (UniqueName: \"kubernetes.io/projected/0ee21f96-7709-457c-9ba4-20260bb1792d-kube-api-access-fvzks\") pod \"community-operators-c292n\" (UID: \"0ee21f96-7709-457c-9ba4-20260bb1792d\") " pod="openshift-marketplace/community-operators-c292n" Feb 28 14:59:20 crc kubenswrapper[4897]: I0228 14:59:20.291925 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ee21f96-7709-457c-9ba4-20260bb1792d-utilities\") pod \"community-operators-c292n\" (UID: \"0ee21f96-7709-457c-9ba4-20260bb1792d\") " pod="openshift-marketplace/community-operators-c292n" Feb 28 14:59:20 crc kubenswrapper[4897]: I0228 14:59:20.291933 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ee21f96-7709-457c-9ba4-20260bb1792d-catalog-content\") pod \"community-operators-c292n\" (UID: \"0ee21f96-7709-457c-9ba4-20260bb1792d\") " pod="openshift-marketplace/community-operators-c292n" Feb 28 14:59:20 crc kubenswrapper[4897]: I0228 14:59:20.310089 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvzks\" (UniqueName: \"kubernetes.io/projected/0ee21f96-7709-457c-9ba4-20260bb1792d-kube-api-access-fvzks\") pod \"community-operators-c292n\" (UID: \"0ee21f96-7709-457c-9ba4-20260bb1792d\") " pod="openshift-marketplace/community-operators-c292n" Feb 28 14:59:20 crc kubenswrapper[4897]: I0228 14:59:20.494301 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c292n" Feb 28 14:59:21 crc kubenswrapper[4897]: I0228 14:59:21.048798 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c292n"] Feb 28 14:59:21 crc kubenswrapper[4897]: I0228 14:59:21.997341 4897 generic.go:334] "Generic (PLEG): container finished" podID="0ee21f96-7709-457c-9ba4-20260bb1792d" containerID="9757754c452224456714243324062aeabc5c2fe9500342be57a6e5734501916d" exitCode=0 Feb 28 14:59:21 crc kubenswrapper[4897]: I0228 14:59:21.997418 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c292n" event={"ID":"0ee21f96-7709-457c-9ba4-20260bb1792d","Type":"ContainerDied","Data":"9757754c452224456714243324062aeabc5c2fe9500342be57a6e5734501916d"} Feb 28 14:59:21 crc kubenswrapper[4897]: I0228 14:59:21.999468 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c292n" event={"ID":"0ee21f96-7709-457c-9ba4-20260bb1792d","Type":"ContainerStarted","Data":"71be4f4470908610f2856083ae2b018ee560a5081495d02ab93202a88b786bcf"} Feb 28 14:59:22 crc kubenswrapper[4897]: E0228 14:59:22.578398 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 28 14:59:22 crc kubenswrapper[4897]: E0228 14:59:22.579388 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog 
--cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fvzks,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-c292n_openshift-marketplace(0ee21f96-7709-457c-9ba4-20260bb1792d): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 14:59:22 crc kubenswrapper[4897]: E0228 14:59:22.581467 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest 
list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=2fcaf971733b21208658251ecc1f3cab7e5364103325abf2880446e453190c33/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-c292n" podUID="0ee21f96-7709-457c-9ba4-20260bb1792d" Feb 28 14:59:23 crc kubenswrapper[4897]: E0228 14:59:23.009946 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-c292n" podUID="0ee21f96-7709-457c-9ba4-20260bb1792d" Feb 28 14:59:33 crc kubenswrapper[4897]: I0228 14:59:33.371535 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 14:59:33 crc kubenswrapper[4897]: I0228 14:59:33.372186 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 14:59:38 crc kubenswrapper[4897]: I0228 14:59:38.200603 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c292n" event={"ID":"0ee21f96-7709-457c-9ba4-20260bb1792d","Type":"ContainerStarted","Data":"e4aaeb24452405eb83c591c2189ce176515152f1f99a37b2da50475efdc652c1"} Feb 28 14:59:39 crc kubenswrapper[4897]: I0228 14:59:39.217728 4897 generic.go:334] "Generic (PLEG): container finished" podID="0ee21f96-7709-457c-9ba4-20260bb1792d" 
containerID="e4aaeb24452405eb83c591c2189ce176515152f1f99a37b2da50475efdc652c1" exitCode=0 Feb 28 14:59:39 crc kubenswrapper[4897]: I0228 14:59:39.217991 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c292n" event={"ID":"0ee21f96-7709-457c-9ba4-20260bb1792d","Type":"ContainerDied","Data":"e4aaeb24452405eb83c591c2189ce176515152f1f99a37b2da50475efdc652c1"} Feb 28 14:59:40 crc kubenswrapper[4897]: I0228 14:59:40.231980 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c292n" event={"ID":"0ee21f96-7709-457c-9ba4-20260bb1792d","Type":"ContainerStarted","Data":"2db53a4c293d28f4e3f334da4e18d7b465b0262b0bc70e91774cb4edc0547872"} Feb 28 14:59:40 crc kubenswrapper[4897]: I0228 14:59:40.267048 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c292n" podStartSLOduration=2.641631295 podStartE2EDuration="20.267029791s" podCreationTimestamp="2026-02-28 14:59:20 +0000 UTC" firstStartedPulling="2026-02-28 14:59:21.999487068 +0000 UTC m=+6176.241807725" lastFinishedPulling="2026-02-28 14:59:39.624885544 +0000 UTC m=+6193.867206221" observedRunningTime="2026-02-28 14:59:40.260607569 +0000 UTC m=+6194.502928226" watchObservedRunningTime="2026-02-28 14:59:40.267029791 +0000 UTC m=+6194.509350448" Feb 28 14:59:40 crc kubenswrapper[4897]: I0228 14:59:40.494923 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c292n" Feb 28 14:59:40 crc kubenswrapper[4897]: I0228 14:59:40.495041 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c292n" Feb 28 14:59:41 crc kubenswrapper[4897]: I0228 14:59:41.581013 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-c292n" podUID="0ee21f96-7709-457c-9ba4-20260bb1792d" containerName="registry-server" 
probeResult="failure" output=< Feb 28 14:59:41 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 28 14:59:41 crc kubenswrapper[4897]: > Feb 28 14:59:50 crc kubenswrapper[4897]: I0228 14:59:50.566676 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c292n" Feb 28 14:59:50 crc kubenswrapper[4897]: I0228 14:59:50.632467 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c292n" Feb 28 14:59:51 crc kubenswrapper[4897]: I0228 14:59:51.378217 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c292n"] Feb 28 14:59:52 crc kubenswrapper[4897]: I0228 14:59:52.377755 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c292n" podUID="0ee21f96-7709-457c-9ba4-20260bb1792d" containerName="registry-server" containerID="cri-o://2db53a4c293d28f4e3f334da4e18d7b465b0262b0bc70e91774cb4edc0547872" gracePeriod=2 Feb 28 14:59:52 crc kubenswrapper[4897]: I0228 14:59:52.971278 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c292n" Feb 28 14:59:53 crc kubenswrapper[4897]: I0228 14:59:53.113906 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ee21f96-7709-457c-9ba4-20260bb1792d-utilities\") pod \"0ee21f96-7709-457c-9ba4-20260bb1792d\" (UID: \"0ee21f96-7709-457c-9ba4-20260bb1792d\") " Feb 28 14:59:53 crc kubenswrapper[4897]: I0228 14:59:53.114082 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ee21f96-7709-457c-9ba4-20260bb1792d-catalog-content\") pod \"0ee21f96-7709-457c-9ba4-20260bb1792d\" (UID: \"0ee21f96-7709-457c-9ba4-20260bb1792d\") " Feb 28 14:59:53 crc kubenswrapper[4897]: I0228 14:59:53.114131 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvzks\" (UniqueName: \"kubernetes.io/projected/0ee21f96-7709-457c-9ba4-20260bb1792d-kube-api-access-fvzks\") pod \"0ee21f96-7709-457c-9ba4-20260bb1792d\" (UID: \"0ee21f96-7709-457c-9ba4-20260bb1792d\") " Feb 28 14:59:53 crc kubenswrapper[4897]: I0228 14:59:53.114726 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ee21f96-7709-457c-9ba4-20260bb1792d-utilities" (OuterVolumeSpecName: "utilities") pod "0ee21f96-7709-457c-9ba4-20260bb1792d" (UID: "0ee21f96-7709-457c-9ba4-20260bb1792d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:59:53 crc kubenswrapper[4897]: I0228 14:59:53.122182 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ee21f96-7709-457c-9ba4-20260bb1792d-kube-api-access-fvzks" (OuterVolumeSpecName: "kube-api-access-fvzks") pod "0ee21f96-7709-457c-9ba4-20260bb1792d" (UID: "0ee21f96-7709-457c-9ba4-20260bb1792d"). InnerVolumeSpecName "kube-api-access-fvzks". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 14:59:53 crc kubenswrapper[4897]: I0228 14:59:53.126045 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ee21f96-7709-457c-9ba4-20260bb1792d-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 14:59:53 crc kubenswrapper[4897]: I0228 14:59:53.126083 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvzks\" (UniqueName: \"kubernetes.io/projected/0ee21f96-7709-457c-9ba4-20260bb1792d-kube-api-access-fvzks\") on node \"crc\" DevicePath \"\"" Feb 28 14:59:53 crc kubenswrapper[4897]: I0228 14:59:53.193161 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ee21f96-7709-457c-9ba4-20260bb1792d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0ee21f96-7709-457c-9ba4-20260bb1792d" (UID: "0ee21f96-7709-457c-9ba4-20260bb1792d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 14:59:53 crc kubenswrapper[4897]: I0228 14:59:53.228359 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ee21f96-7709-457c-9ba4-20260bb1792d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 14:59:53 crc kubenswrapper[4897]: I0228 14:59:53.394622 4897 generic.go:334] "Generic (PLEG): container finished" podID="0ee21f96-7709-457c-9ba4-20260bb1792d" containerID="2db53a4c293d28f4e3f334da4e18d7b465b0262b0bc70e91774cb4edc0547872" exitCode=0 Feb 28 14:59:53 crc kubenswrapper[4897]: I0228 14:59:53.394839 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c292n" event={"ID":"0ee21f96-7709-457c-9ba4-20260bb1792d","Type":"ContainerDied","Data":"2db53a4c293d28f4e3f334da4e18d7b465b0262b0bc70e91774cb4edc0547872"} Feb 28 14:59:53 crc kubenswrapper[4897]: I0228 14:59:53.394875 4897 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-c292n" event={"ID":"0ee21f96-7709-457c-9ba4-20260bb1792d","Type":"ContainerDied","Data":"71be4f4470908610f2856083ae2b018ee560a5081495d02ab93202a88b786bcf"} Feb 28 14:59:53 crc kubenswrapper[4897]: I0228 14:59:53.394894 4897 scope.go:117] "RemoveContainer" containerID="2db53a4c293d28f4e3f334da4e18d7b465b0262b0bc70e91774cb4edc0547872" Feb 28 14:59:53 crc kubenswrapper[4897]: I0228 14:59:53.395058 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c292n" Feb 28 14:59:53 crc kubenswrapper[4897]: I0228 14:59:53.424382 4897 scope.go:117] "RemoveContainer" containerID="e4aaeb24452405eb83c591c2189ce176515152f1f99a37b2da50475efdc652c1" Feb 28 14:59:53 crc kubenswrapper[4897]: I0228 14:59:53.442641 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c292n"] Feb 28 14:59:53 crc kubenswrapper[4897]: I0228 14:59:53.451737 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c292n"] Feb 28 14:59:53 crc kubenswrapper[4897]: I0228 14:59:53.459022 4897 scope.go:117] "RemoveContainer" containerID="9757754c452224456714243324062aeabc5c2fe9500342be57a6e5734501916d" Feb 28 14:59:53 crc kubenswrapper[4897]: I0228 14:59:53.501955 4897 scope.go:117] "RemoveContainer" containerID="2db53a4c293d28f4e3f334da4e18d7b465b0262b0bc70e91774cb4edc0547872" Feb 28 14:59:53 crc kubenswrapper[4897]: E0228 14:59:53.505570 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2db53a4c293d28f4e3f334da4e18d7b465b0262b0bc70e91774cb4edc0547872\": container with ID starting with 2db53a4c293d28f4e3f334da4e18d7b465b0262b0bc70e91774cb4edc0547872 not found: ID does not exist" containerID="2db53a4c293d28f4e3f334da4e18d7b465b0262b0bc70e91774cb4edc0547872" Feb 28 14:59:53 crc kubenswrapper[4897]: I0228 
14:59:53.505744 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2db53a4c293d28f4e3f334da4e18d7b465b0262b0bc70e91774cb4edc0547872"} err="failed to get container status \"2db53a4c293d28f4e3f334da4e18d7b465b0262b0bc70e91774cb4edc0547872\": rpc error: code = NotFound desc = could not find container \"2db53a4c293d28f4e3f334da4e18d7b465b0262b0bc70e91774cb4edc0547872\": container with ID starting with 2db53a4c293d28f4e3f334da4e18d7b465b0262b0bc70e91774cb4edc0547872 not found: ID does not exist" Feb 28 14:59:53 crc kubenswrapper[4897]: I0228 14:59:53.505883 4897 scope.go:117] "RemoveContainer" containerID="e4aaeb24452405eb83c591c2189ce176515152f1f99a37b2da50475efdc652c1" Feb 28 14:59:53 crc kubenswrapper[4897]: E0228 14:59:53.506441 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4aaeb24452405eb83c591c2189ce176515152f1f99a37b2da50475efdc652c1\": container with ID starting with e4aaeb24452405eb83c591c2189ce176515152f1f99a37b2da50475efdc652c1 not found: ID does not exist" containerID="e4aaeb24452405eb83c591c2189ce176515152f1f99a37b2da50475efdc652c1" Feb 28 14:59:53 crc kubenswrapper[4897]: I0228 14:59:53.506586 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4aaeb24452405eb83c591c2189ce176515152f1f99a37b2da50475efdc652c1"} err="failed to get container status \"e4aaeb24452405eb83c591c2189ce176515152f1f99a37b2da50475efdc652c1\": rpc error: code = NotFound desc = could not find container \"e4aaeb24452405eb83c591c2189ce176515152f1f99a37b2da50475efdc652c1\": container with ID starting with e4aaeb24452405eb83c591c2189ce176515152f1f99a37b2da50475efdc652c1 not found: ID does not exist" Feb 28 14:59:53 crc kubenswrapper[4897]: I0228 14:59:53.506707 4897 scope.go:117] "RemoveContainer" containerID="9757754c452224456714243324062aeabc5c2fe9500342be57a6e5734501916d" Feb 28 14:59:53 crc 
kubenswrapper[4897]: E0228 14:59:53.507071 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9757754c452224456714243324062aeabc5c2fe9500342be57a6e5734501916d\": container with ID starting with 9757754c452224456714243324062aeabc5c2fe9500342be57a6e5734501916d not found: ID does not exist" containerID="9757754c452224456714243324062aeabc5c2fe9500342be57a6e5734501916d" Feb 28 14:59:53 crc kubenswrapper[4897]: I0228 14:59:53.507205 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9757754c452224456714243324062aeabc5c2fe9500342be57a6e5734501916d"} err="failed to get container status \"9757754c452224456714243324062aeabc5c2fe9500342be57a6e5734501916d\": rpc error: code = NotFound desc = could not find container \"9757754c452224456714243324062aeabc5c2fe9500342be57a6e5734501916d\": container with ID starting with 9757754c452224456714243324062aeabc5c2fe9500342be57a6e5734501916d not found: ID does not exist" Feb 28 14:59:54 crc kubenswrapper[4897]: I0228 14:59:54.479252 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ee21f96-7709-457c-9ba4-20260bb1792d" path="/var/lib/kubelet/pods/0ee21f96-7709-457c-9ba4-20260bb1792d/volumes" Feb 28 15:00:00 crc kubenswrapper[4897]: I0228 15:00:00.235690 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29538180-7gkn2"] Feb 28 15:00:00 crc kubenswrapper[4897]: E0228 15:00:00.237420 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ee21f96-7709-457c-9ba4-20260bb1792d" containerName="extract-content" Feb 28 15:00:00 crc kubenswrapper[4897]: I0228 15:00:00.237446 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ee21f96-7709-457c-9ba4-20260bb1792d" containerName="extract-content" Feb 28 15:00:00 crc kubenswrapper[4897]: E0228 15:00:00.237516 4897 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="0ee21f96-7709-457c-9ba4-20260bb1792d" containerName="extract-utilities" Feb 28 15:00:00 crc kubenswrapper[4897]: I0228 15:00:00.237530 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ee21f96-7709-457c-9ba4-20260bb1792d" containerName="extract-utilities" Feb 28 15:00:00 crc kubenswrapper[4897]: E0228 15:00:00.237573 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ee21f96-7709-457c-9ba4-20260bb1792d" containerName="registry-server" Feb 28 15:00:00 crc kubenswrapper[4897]: I0228 15:00:00.237587 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ee21f96-7709-457c-9ba4-20260bb1792d" containerName="registry-server" Feb 28 15:00:00 crc kubenswrapper[4897]: I0228 15:00:00.237977 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ee21f96-7709-457c-9ba4-20260bb1792d" containerName="registry-server" Feb 28 15:00:00 crc kubenswrapper[4897]: I0228 15:00:00.239183 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29538180-7gkn2" Feb 28 15:00:00 crc kubenswrapper[4897]: I0228 15:00:00.244411 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 28 15:00:00 crc kubenswrapper[4897]: I0228 15:00:00.244559 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 28 15:00:00 crc kubenswrapper[4897]: I0228 15:00:00.252100 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538180-vspx4"] Feb 28 15:00:00 crc kubenswrapper[4897]: I0228 15:00:00.253920 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538180-vspx4" Feb 28 15:00:00 crc kubenswrapper[4897]: I0228 15:00:00.256244 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 15:00:00 crc kubenswrapper[4897]: I0228 15:00:00.259023 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 15:00:00 crc kubenswrapper[4897]: I0228 15:00:00.259074 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 15:00:00 crc kubenswrapper[4897]: I0228 15:00:00.266137 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538180-vspx4"] Feb 28 15:00:00 crc kubenswrapper[4897]: I0228 15:00:00.293701 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29538180-7gkn2"] Feb 28 15:00:00 crc kubenswrapper[4897]: I0228 15:00:00.410382 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/431ff150-604f-491d-84aa-0e5f72c08ee6-secret-volume\") pod \"collect-profiles-29538180-7gkn2\" (UID: \"431ff150-604f-491d-84aa-0e5f72c08ee6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538180-7gkn2" Feb 28 15:00:00 crc kubenswrapper[4897]: I0228 15:00:00.410478 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf4jc\" (UniqueName: \"kubernetes.io/projected/dfef2bf2-3a6c-4119-8fd2-159efa5e45d1-kube-api-access-rf4jc\") pod \"auto-csr-approver-29538180-vspx4\" (UID: \"dfef2bf2-3a6c-4119-8fd2-159efa5e45d1\") " pod="openshift-infra/auto-csr-approver-29538180-vspx4" Feb 28 15:00:00 crc kubenswrapper[4897]: I0228 15:00:00.411221 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-4cvvr\" (UniqueName: \"kubernetes.io/projected/431ff150-604f-491d-84aa-0e5f72c08ee6-kube-api-access-4cvvr\") pod \"collect-profiles-29538180-7gkn2\" (UID: \"431ff150-604f-491d-84aa-0e5f72c08ee6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538180-7gkn2" Feb 28 15:00:00 crc kubenswrapper[4897]: I0228 15:00:00.411349 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/431ff150-604f-491d-84aa-0e5f72c08ee6-config-volume\") pod \"collect-profiles-29538180-7gkn2\" (UID: \"431ff150-604f-491d-84aa-0e5f72c08ee6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538180-7gkn2" Feb 28 15:00:00 crc kubenswrapper[4897]: I0228 15:00:00.513886 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/431ff150-604f-491d-84aa-0e5f72c08ee6-secret-volume\") pod \"collect-profiles-29538180-7gkn2\" (UID: \"431ff150-604f-491d-84aa-0e5f72c08ee6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538180-7gkn2" Feb 28 15:00:00 crc kubenswrapper[4897]: I0228 15:00:00.513962 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf4jc\" (UniqueName: \"kubernetes.io/projected/dfef2bf2-3a6c-4119-8fd2-159efa5e45d1-kube-api-access-rf4jc\") pod \"auto-csr-approver-29538180-vspx4\" (UID: \"dfef2bf2-3a6c-4119-8fd2-159efa5e45d1\") " pod="openshift-infra/auto-csr-approver-29538180-vspx4" Feb 28 15:00:00 crc kubenswrapper[4897]: I0228 15:00:00.514164 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cvvr\" (UniqueName: \"kubernetes.io/projected/431ff150-604f-491d-84aa-0e5f72c08ee6-kube-api-access-4cvvr\") pod \"collect-profiles-29538180-7gkn2\" (UID: \"431ff150-604f-491d-84aa-0e5f72c08ee6\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29538180-7gkn2" Feb 28 15:00:00 crc kubenswrapper[4897]: I0228 15:00:00.514219 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/431ff150-604f-491d-84aa-0e5f72c08ee6-config-volume\") pod \"collect-profiles-29538180-7gkn2\" (UID: \"431ff150-604f-491d-84aa-0e5f72c08ee6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538180-7gkn2" Feb 28 15:00:00 crc kubenswrapper[4897]: I0228 15:00:00.516130 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/431ff150-604f-491d-84aa-0e5f72c08ee6-config-volume\") pod \"collect-profiles-29538180-7gkn2\" (UID: \"431ff150-604f-491d-84aa-0e5f72c08ee6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538180-7gkn2" Feb 28 15:00:00 crc kubenswrapper[4897]: I0228 15:00:00.534376 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/431ff150-604f-491d-84aa-0e5f72c08ee6-secret-volume\") pod \"collect-profiles-29538180-7gkn2\" (UID: \"431ff150-604f-491d-84aa-0e5f72c08ee6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538180-7gkn2" Feb 28 15:00:00 crc kubenswrapper[4897]: I0228 15:00:00.550275 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf4jc\" (UniqueName: \"kubernetes.io/projected/dfef2bf2-3a6c-4119-8fd2-159efa5e45d1-kube-api-access-rf4jc\") pod \"auto-csr-approver-29538180-vspx4\" (UID: \"dfef2bf2-3a6c-4119-8fd2-159efa5e45d1\") " pod="openshift-infra/auto-csr-approver-29538180-vspx4" Feb 28 15:00:00 crc kubenswrapper[4897]: I0228 15:00:00.555216 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cvvr\" (UniqueName: \"kubernetes.io/projected/431ff150-604f-491d-84aa-0e5f72c08ee6-kube-api-access-4cvvr\") pod 
\"collect-profiles-29538180-7gkn2\" (UID: \"431ff150-604f-491d-84aa-0e5f72c08ee6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538180-7gkn2" Feb 28 15:00:00 crc kubenswrapper[4897]: I0228 15:00:00.568005 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29538180-7gkn2" Feb 28 15:00:00 crc kubenswrapper[4897]: I0228 15:00:00.578670 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538180-vspx4" Feb 28 15:00:00 crc kubenswrapper[4897]: I0228 15:00:00.995433 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538180-vspx4"] Feb 28 15:00:01 crc kubenswrapper[4897]: I0228 15:00:01.092253 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29538180-7gkn2"] Feb 28 15:00:01 crc kubenswrapper[4897]: W0228 15:00:01.094294 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod431ff150_604f_491d_84aa_0e5f72c08ee6.slice/crio-b4d7bcb467509e3add60e4f33601e513188a0fa51229b28438db9236368a9125 WatchSource:0}: Error finding container b4d7bcb467509e3add60e4f33601e513188a0fa51229b28438db9236368a9125: Status 404 returned error can't find the container with id b4d7bcb467509e3add60e4f33601e513188a0fa51229b28438db9236368a9125 Feb 28 15:00:01 crc kubenswrapper[4897]: I0228 15:00:01.509933 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29538180-7gkn2" event={"ID":"431ff150-604f-491d-84aa-0e5f72c08ee6","Type":"ContainerStarted","Data":"0a9ec1fe0e4d8679d855fc1a569f1dd088788af0b669b369443cb687fb1fe3ea"} Feb 28 15:00:01 crc kubenswrapper[4897]: I0228 15:00:01.510361 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29538180-7gkn2" event={"ID":"431ff150-604f-491d-84aa-0e5f72c08ee6","Type":"ContainerStarted","Data":"b4d7bcb467509e3add60e4f33601e513188a0fa51229b28438db9236368a9125"} Feb 28 15:00:01 crc kubenswrapper[4897]: I0228 15:00:01.510813 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538180-vspx4" event={"ID":"dfef2bf2-3a6c-4119-8fd2-159efa5e45d1","Type":"ContainerStarted","Data":"e7ce21621194e60d2b39a3ba5f7b35d398e971f5584494d1085254dd256c24a8"} Feb 28 15:00:01 crc kubenswrapper[4897]: I0228 15:00:01.571068 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29538180-7gkn2" podStartSLOduration=1.5710456019999999 podStartE2EDuration="1.571045602s" podCreationTimestamp="2026-02-28 15:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 15:00:01.537226913 +0000 UTC m=+6215.779547600" watchObservedRunningTime="2026-02-28 15:00:01.571045602 +0000 UTC m=+6215.813366259" Feb 28 15:00:02 crc kubenswrapper[4897]: I0228 15:00:02.525688 4897 generic.go:334] "Generic (PLEG): container finished" podID="431ff150-604f-491d-84aa-0e5f72c08ee6" containerID="0a9ec1fe0e4d8679d855fc1a569f1dd088788af0b669b369443cb687fb1fe3ea" exitCode=0 Feb 28 15:00:02 crc kubenswrapper[4897]: I0228 15:00:02.525789 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29538180-7gkn2" event={"ID":"431ff150-604f-491d-84aa-0e5f72c08ee6","Type":"ContainerDied","Data":"0a9ec1fe0e4d8679d855fc1a569f1dd088788af0b669b369443cb687fb1fe3ea"} Feb 28 15:00:03 crc kubenswrapper[4897]: I0228 15:00:03.370569 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 15:00:03 crc kubenswrapper[4897]: I0228 15:00:03.370807 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 15:00:03 crc kubenswrapper[4897]: I0228 15:00:03.988720 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29538180-7gkn2" Feb 28 15:00:04 crc kubenswrapper[4897]: I0228 15:00:04.022152 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/431ff150-604f-491d-84aa-0e5f72c08ee6-secret-volume\") pod \"431ff150-604f-491d-84aa-0e5f72c08ee6\" (UID: \"431ff150-604f-491d-84aa-0e5f72c08ee6\") " Feb 28 15:00:04 crc kubenswrapper[4897]: I0228 15:00:04.022447 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cvvr\" (UniqueName: \"kubernetes.io/projected/431ff150-604f-491d-84aa-0e5f72c08ee6-kube-api-access-4cvvr\") pod \"431ff150-604f-491d-84aa-0e5f72c08ee6\" (UID: \"431ff150-604f-491d-84aa-0e5f72c08ee6\") " Feb 28 15:00:04 crc kubenswrapper[4897]: I0228 15:00:04.022507 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/431ff150-604f-491d-84aa-0e5f72c08ee6-config-volume\") pod \"431ff150-604f-491d-84aa-0e5f72c08ee6\" (UID: \"431ff150-604f-491d-84aa-0e5f72c08ee6\") " Feb 28 15:00:04 crc kubenswrapper[4897]: I0228 15:00:04.023430 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/431ff150-604f-491d-84aa-0e5f72c08ee6-config-volume" (OuterVolumeSpecName: "config-volume") pod "431ff150-604f-491d-84aa-0e5f72c08ee6" (UID: "431ff150-604f-491d-84aa-0e5f72c08ee6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 15:00:04 crc kubenswrapper[4897]: I0228 15:00:04.033788 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/431ff150-604f-491d-84aa-0e5f72c08ee6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "431ff150-604f-491d-84aa-0e5f72c08ee6" (UID: "431ff150-604f-491d-84aa-0e5f72c08ee6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 15:00:04 crc kubenswrapper[4897]: I0228 15:00:04.039461 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/431ff150-604f-491d-84aa-0e5f72c08ee6-kube-api-access-4cvvr" (OuterVolumeSpecName: "kube-api-access-4cvvr") pod "431ff150-604f-491d-84aa-0e5f72c08ee6" (UID: "431ff150-604f-491d-84aa-0e5f72c08ee6"). InnerVolumeSpecName "kube-api-access-4cvvr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 15:00:04 crc kubenswrapper[4897]: I0228 15:00:04.124982 4897 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/431ff150-604f-491d-84aa-0e5f72c08ee6-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 28 15:00:04 crc kubenswrapper[4897]: I0228 15:00:04.125006 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cvvr\" (UniqueName: \"kubernetes.io/projected/431ff150-604f-491d-84aa-0e5f72c08ee6-kube-api-access-4cvvr\") on node \"crc\" DevicePath \"\"" Feb 28 15:00:04 crc kubenswrapper[4897]: I0228 15:00:04.125016 4897 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/431ff150-604f-491d-84aa-0e5f72c08ee6-config-volume\") on node \"crc\" DevicePath \"\"" Feb 28 15:00:04 crc kubenswrapper[4897]: I0228 15:00:04.559849 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29538180-7gkn2" event={"ID":"431ff150-604f-491d-84aa-0e5f72c08ee6","Type":"ContainerDied","Data":"b4d7bcb467509e3add60e4f33601e513188a0fa51229b28438db9236368a9125"} Feb 28 15:00:04 crc kubenswrapper[4897]: I0228 15:00:04.560121 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4d7bcb467509e3add60e4f33601e513188a0fa51229b28438db9236368a9125" Feb 28 15:00:04 crc kubenswrapper[4897]: I0228 15:00:04.560178 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29538180-7gkn2" Feb 28 15:00:05 crc kubenswrapper[4897]: I0228 15:00:05.090148 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29538135-crxwl"] Feb 28 15:00:05 crc kubenswrapper[4897]: I0228 15:00:05.097982 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29538135-crxwl"] Feb 28 15:00:06 crc kubenswrapper[4897]: I0228 15:00:06.502955 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0efa1812-7587-40b8-8577-769f9208e820" path="/var/lib/kubelet/pods/0efa1812-7587-40b8-8577-769f9208e820/volumes" Feb 28 15:00:11 crc kubenswrapper[4897]: I0228 15:00:11.647011 4897 generic.go:334] "Generic (PLEG): container finished" podID="dfef2bf2-3a6c-4119-8fd2-159efa5e45d1" containerID="59909f934cebc2d05f1aa1faa656a9f3796d48e62cd28f6f1f3953068f3f8f65" exitCode=0 Feb 28 15:00:11 crc kubenswrapper[4897]: I0228 15:00:11.647189 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538180-vspx4" event={"ID":"dfef2bf2-3a6c-4119-8fd2-159efa5e45d1","Type":"ContainerDied","Data":"59909f934cebc2d05f1aa1faa656a9f3796d48e62cd28f6f1f3953068f3f8f65"} Feb 28 15:00:13 crc kubenswrapper[4897]: I0228 15:00:13.206743 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538180-vspx4" Feb 28 15:00:13 crc kubenswrapper[4897]: I0228 15:00:13.363799 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rf4jc\" (UniqueName: \"kubernetes.io/projected/dfef2bf2-3a6c-4119-8fd2-159efa5e45d1-kube-api-access-rf4jc\") pod \"dfef2bf2-3a6c-4119-8fd2-159efa5e45d1\" (UID: \"dfef2bf2-3a6c-4119-8fd2-159efa5e45d1\") " Feb 28 15:00:13 crc kubenswrapper[4897]: I0228 15:00:13.373200 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfef2bf2-3a6c-4119-8fd2-159efa5e45d1-kube-api-access-rf4jc" (OuterVolumeSpecName: "kube-api-access-rf4jc") pod "dfef2bf2-3a6c-4119-8fd2-159efa5e45d1" (UID: "dfef2bf2-3a6c-4119-8fd2-159efa5e45d1"). InnerVolumeSpecName "kube-api-access-rf4jc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 15:00:13 crc kubenswrapper[4897]: I0228 15:00:13.465707 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rf4jc\" (UniqueName: \"kubernetes.io/projected/dfef2bf2-3a6c-4119-8fd2-159efa5e45d1-kube-api-access-rf4jc\") on node \"crc\" DevicePath \"\"" Feb 28 15:00:13 crc kubenswrapper[4897]: I0228 15:00:13.670125 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538180-vspx4" event={"ID":"dfef2bf2-3a6c-4119-8fd2-159efa5e45d1","Type":"ContainerDied","Data":"e7ce21621194e60d2b39a3ba5f7b35d398e971f5584494d1085254dd256c24a8"} Feb 28 15:00:13 crc kubenswrapper[4897]: I0228 15:00:13.670171 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538180-vspx4" Feb 28 15:00:13 crc kubenswrapper[4897]: I0228 15:00:13.670188 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7ce21621194e60d2b39a3ba5f7b35d398e971f5584494d1085254dd256c24a8" Feb 28 15:00:14 crc kubenswrapper[4897]: I0228 15:00:14.300650 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538174-4vxkv"] Feb 28 15:00:14 crc kubenswrapper[4897]: I0228 15:00:14.316756 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538174-4vxkv"] Feb 28 15:00:14 crc kubenswrapper[4897]: I0228 15:00:14.477800 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdecfdb7-616d-4c3a-8758-4ef539cb2db5" path="/var/lib/kubelet/pods/bdecfdb7-616d-4c3a-8758-4ef539cb2db5/volumes" Feb 28 15:00:33 crc kubenswrapper[4897]: I0228 15:00:33.371246 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 15:00:33 crc kubenswrapper[4897]: I0228 15:00:33.371847 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 15:00:33 crc kubenswrapper[4897]: I0228 15:00:33.371912 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brq22" Feb 28 15:00:33 crc kubenswrapper[4897]: I0228 15:00:33.372939 4897 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"56f81e527e46340803698674f23d31062429f04e874c4aa8357907f685c83acc"} pod="openshift-machine-config-operator/machine-config-daemon-brq22" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 15:00:33 crc kubenswrapper[4897]: I0228 15:00:33.373030 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" containerID="cri-o://56f81e527e46340803698674f23d31062429f04e874c4aa8357907f685c83acc" gracePeriod=600 Feb 28 15:00:33 crc kubenswrapper[4897]: I0228 15:00:33.955267 4897 generic.go:334] "Generic (PLEG): container finished" podID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerID="56f81e527e46340803698674f23d31062429f04e874c4aa8357907f685c83acc" exitCode=0 Feb 28 15:00:33 crc kubenswrapper[4897]: I0228 15:00:33.955359 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerDied","Data":"56f81e527e46340803698674f23d31062429f04e874c4aa8357907f685c83acc"} Feb 28 15:00:33 crc kubenswrapper[4897]: I0228 15:00:33.955664 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerStarted","Data":"c106c4981df1ab91355a675145a1485c568beae35eb691f1e94573b338233899"} Feb 28 15:00:33 crc kubenswrapper[4897]: I0228 15:00:33.955690 4897 scope.go:117] "RemoveContainer" containerID="eeaafc379056961bcfeca41078c7ab438aa415eb103c6d894f500936af0c4d8e" Feb 28 15:00:54 crc kubenswrapper[4897]: I0228 15:00:54.176433 4897 scope.go:117] "RemoveContainer" containerID="e96c9adfe5573b24f429a481178ee76850a7a701fe7e503c7dac101fbe0ece46" Feb 28 
15:00:54 crc kubenswrapper[4897]: I0228 15:00:54.214785 4897 scope.go:117] "RemoveContainer" containerID="5f8d52450f822770b15df8239e1fc2f1f0969ce877a18c31cbd32d12a368ea09" Feb 28 15:01:00 crc kubenswrapper[4897]: I0228 15:01:00.197987 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29538181-mbx9p"] Feb 28 15:01:00 crc kubenswrapper[4897]: E0228 15:01:00.199258 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfef2bf2-3a6c-4119-8fd2-159efa5e45d1" containerName="oc" Feb 28 15:01:00 crc kubenswrapper[4897]: I0228 15:01:00.199281 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfef2bf2-3a6c-4119-8fd2-159efa5e45d1" containerName="oc" Feb 28 15:01:00 crc kubenswrapper[4897]: E0228 15:01:00.199371 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="431ff150-604f-491d-84aa-0e5f72c08ee6" containerName="collect-profiles" Feb 28 15:01:00 crc kubenswrapper[4897]: I0228 15:01:00.199386 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="431ff150-604f-491d-84aa-0e5f72c08ee6" containerName="collect-profiles" Feb 28 15:01:00 crc kubenswrapper[4897]: I0228 15:01:00.199771 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="431ff150-604f-491d-84aa-0e5f72c08ee6" containerName="collect-profiles" Feb 28 15:01:00 crc kubenswrapper[4897]: I0228 15:01:00.199795 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfef2bf2-3a6c-4119-8fd2-159efa5e45d1" containerName="oc" Feb 28 15:01:00 crc kubenswrapper[4897]: I0228 15:01:00.200989 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29538181-mbx9p" Feb 28 15:01:00 crc kubenswrapper[4897]: I0228 15:01:00.214642 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29538181-mbx9p"] Feb 28 15:01:00 crc kubenswrapper[4897]: I0228 15:01:00.286255 4897 generic.go:334] "Generic (PLEG): container finished" podID="2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677" containerID="747af93356d4737fbde78d20ea726e0bc5e1960bc3e1ec3996ff9ed3d14d14a5" exitCode=0 Feb 28 15:01:00 crc kubenswrapper[4897]: I0228 15:01:00.286338 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p6l9t/must-gather-gh5xr" event={"ID":"2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677","Type":"ContainerDied","Data":"747af93356d4737fbde78d20ea726e0bc5e1960bc3e1ec3996ff9ed3d14d14a5"} Feb 28 15:01:00 crc kubenswrapper[4897]: I0228 15:01:00.286996 4897 scope.go:117] "RemoveContainer" containerID="747af93356d4737fbde78d20ea726e0bc5e1960bc3e1ec3996ff9ed3d14d14a5" Feb 28 15:01:00 crc kubenswrapper[4897]: I0228 15:01:00.345517 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/212f33db-61b0-45a1-ac8e-a925bf9eced2-combined-ca-bundle\") pod \"keystone-cron-29538181-mbx9p\" (UID: \"212f33db-61b0-45a1-ac8e-a925bf9eced2\") " pod="openstack/keystone-cron-29538181-mbx9p" Feb 28 15:01:00 crc kubenswrapper[4897]: I0228 15:01:00.345811 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/212f33db-61b0-45a1-ac8e-a925bf9eced2-fernet-keys\") pod \"keystone-cron-29538181-mbx9p\" (UID: \"212f33db-61b0-45a1-ac8e-a925bf9eced2\") " pod="openstack/keystone-cron-29538181-mbx9p" Feb 28 15:01:00 crc kubenswrapper[4897]: I0228 15:01:00.345960 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlj8h\" 
(UniqueName: \"kubernetes.io/projected/212f33db-61b0-45a1-ac8e-a925bf9eced2-kube-api-access-hlj8h\") pod \"keystone-cron-29538181-mbx9p\" (UID: \"212f33db-61b0-45a1-ac8e-a925bf9eced2\") " pod="openstack/keystone-cron-29538181-mbx9p" Feb 28 15:01:00 crc kubenswrapper[4897]: I0228 15:01:00.346067 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/212f33db-61b0-45a1-ac8e-a925bf9eced2-config-data\") pod \"keystone-cron-29538181-mbx9p\" (UID: \"212f33db-61b0-45a1-ac8e-a925bf9eced2\") " pod="openstack/keystone-cron-29538181-mbx9p" Feb 28 15:01:00 crc kubenswrapper[4897]: I0228 15:01:00.389433 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-p6l9t_must-gather-gh5xr_2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677/gather/0.log" Feb 28 15:01:00 crc kubenswrapper[4897]: I0228 15:01:00.448708 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/212f33db-61b0-45a1-ac8e-a925bf9eced2-config-data\") pod \"keystone-cron-29538181-mbx9p\" (UID: \"212f33db-61b0-45a1-ac8e-a925bf9eced2\") " pod="openstack/keystone-cron-29538181-mbx9p" Feb 28 15:01:00 crc kubenswrapper[4897]: I0228 15:01:00.448961 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/212f33db-61b0-45a1-ac8e-a925bf9eced2-combined-ca-bundle\") pod \"keystone-cron-29538181-mbx9p\" (UID: \"212f33db-61b0-45a1-ac8e-a925bf9eced2\") " pod="openstack/keystone-cron-29538181-mbx9p" Feb 28 15:01:00 crc kubenswrapper[4897]: I0228 15:01:00.449039 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/212f33db-61b0-45a1-ac8e-a925bf9eced2-fernet-keys\") pod \"keystone-cron-29538181-mbx9p\" (UID: \"212f33db-61b0-45a1-ac8e-a925bf9eced2\") " 
pod="openstack/keystone-cron-29538181-mbx9p" Feb 28 15:01:00 crc kubenswrapper[4897]: I0228 15:01:00.449126 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlj8h\" (UniqueName: \"kubernetes.io/projected/212f33db-61b0-45a1-ac8e-a925bf9eced2-kube-api-access-hlj8h\") pod \"keystone-cron-29538181-mbx9p\" (UID: \"212f33db-61b0-45a1-ac8e-a925bf9eced2\") " pod="openstack/keystone-cron-29538181-mbx9p" Feb 28 15:01:00 crc kubenswrapper[4897]: I0228 15:01:00.457291 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/212f33db-61b0-45a1-ac8e-a925bf9eced2-combined-ca-bundle\") pod \"keystone-cron-29538181-mbx9p\" (UID: \"212f33db-61b0-45a1-ac8e-a925bf9eced2\") " pod="openstack/keystone-cron-29538181-mbx9p" Feb 28 15:01:00 crc kubenswrapper[4897]: I0228 15:01:00.457687 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/212f33db-61b0-45a1-ac8e-a925bf9eced2-fernet-keys\") pod \"keystone-cron-29538181-mbx9p\" (UID: \"212f33db-61b0-45a1-ac8e-a925bf9eced2\") " pod="openstack/keystone-cron-29538181-mbx9p" Feb 28 15:01:00 crc kubenswrapper[4897]: I0228 15:01:00.459056 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/212f33db-61b0-45a1-ac8e-a925bf9eced2-config-data\") pod \"keystone-cron-29538181-mbx9p\" (UID: \"212f33db-61b0-45a1-ac8e-a925bf9eced2\") " pod="openstack/keystone-cron-29538181-mbx9p" Feb 28 15:01:00 crc kubenswrapper[4897]: I0228 15:01:00.492208 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlj8h\" (UniqueName: \"kubernetes.io/projected/212f33db-61b0-45a1-ac8e-a925bf9eced2-kube-api-access-hlj8h\") pod \"keystone-cron-29538181-mbx9p\" (UID: \"212f33db-61b0-45a1-ac8e-a925bf9eced2\") " pod="openstack/keystone-cron-29538181-mbx9p" Feb 28 15:01:00 crc 
kubenswrapper[4897]: I0228 15:01:00.565385 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29538181-mbx9p" Feb 28 15:01:01 crc kubenswrapper[4897]: I0228 15:01:01.038552 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29538181-mbx9p"] Feb 28 15:01:01 crc kubenswrapper[4897]: W0228 15:01:01.045572 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod212f33db_61b0_45a1_ac8e_a925bf9eced2.slice/crio-693a91baaf7e81cadc33d8ecc994432ecfe8ed3fa62380c72ee3bbffe48818a6 WatchSource:0}: Error finding container 693a91baaf7e81cadc33d8ecc994432ecfe8ed3fa62380c72ee3bbffe48818a6: Status 404 returned error can't find the container with id 693a91baaf7e81cadc33d8ecc994432ecfe8ed3fa62380c72ee3bbffe48818a6 Feb 28 15:01:01 crc kubenswrapper[4897]: I0228 15:01:01.298748 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29538181-mbx9p" event={"ID":"212f33db-61b0-45a1-ac8e-a925bf9eced2","Type":"ContainerStarted","Data":"23122c30b888895f227037ebcc861bf75c988cc6b0c0e2aaa9cc95796e668246"} Feb 28 15:01:01 crc kubenswrapper[4897]: I0228 15:01:01.298796 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29538181-mbx9p" event={"ID":"212f33db-61b0-45a1-ac8e-a925bf9eced2","Type":"ContainerStarted","Data":"693a91baaf7e81cadc33d8ecc994432ecfe8ed3fa62380c72ee3bbffe48818a6"} Feb 28 15:01:06 crc kubenswrapper[4897]: I0228 15:01:06.377342 4897 generic.go:334] "Generic (PLEG): container finished" podID="212f33db-61b0-45a1-ac8e-a925bf9eced2" containerID="23122c30b888895f227037ebcc861bf75c988cc6b0c0e2aaa9cc95796e668246" exitCode=0 Feb 28 15:01:06 crc kubenswrapper[4897]: I0228 15:01:06.378142 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29538181-mbx9p" 
event={"ID":"212f33db-61b0-45a1-ac8e-a925bf9eced2","Type":"ContainerDied","Data":"23122c30b888895f227037ebcc861bf75c988cc6b0c0e2aaa9cc95796e668246"} Feb 28 15:01:07 crc kubenswrapper[4897]: I0228 15:01:07.885037 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29538181-mbx9p" Feb 28 15:01:08 crc kubenswrapper[4897]: I0228 15:01:08.025489 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/212f33db-61b0-45a1-ac8e-a925bf9eced2-fernet-keys\") pod \"212f33db-61b0-45a1-ac8e-a925bf9eced2\" (UID: \"212f33db-61b0-45a1-ac8e-a925bf9eced2\") " Feb 28 15:01:08 crc kubenswrapper[4897]: I0228 15:01:08.025619 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/212f33db-61b0-45a1-ac8e-a925bf9eced2-config-data\") pod \"212f33db-61b0-45a1-ac8e-a925bf9eced2\" (UID: \"212f33db-61b0-45a1-ac8e-a925bf9eced2\") " Feb 28 15:01:08 crc kubenswrapper[4897]: I0228 15:01:08.025643 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/212f33db-61b0-45a1-ac8e-a925bf9eced2-combined-ca-bundle\") pod \"212f33db-61b0-45a1-ac8e-a925bf9eced2\" (UID: \"212f33db-61b0-45a1-ac8e-a925bf9eced2\") " Feb 28 15:01:08 crc kubenswrapper[4897]: I0228 15:01:08.025846 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlj8h\" (UniqueName: \"kubernetes.io/projected/212f33db-61b0-45a1-ac8e-a925bf9eced2-kube-api-access-hlj8h\") pod \"212f33db-61b0-45a1-ac8e-a925bf9eced2\" (UID: \"212f33db-61b0-45a1-ac8e-a925bf9eced2\") " Feb 28 15:01:08 crc kubenswrapper[4897]: I0228 15:01:08.035197 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/212f33db-61b0-45a1-ac8e-a925bf9eced2-kube-api-access-hlj8h" 
(OuterVolumeSpecName: "kube-api-access-hlj8h") pod "212f33db-61b0-45a1-ac8e-a925bf9eced2" (UID: "212f33db-61b0-45a1-ac8e-a925bf9eced2"). InnerVolumeSpecName "kube-api-access-hlj8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 15:01:08 crc kubenswrapper[4897]: I0228 15:01:08.042469 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/212f33db-61b0-45a1-ac8e-a925bf9eced2-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "212f33db-61b0-45a1-ac8e-a925bf9eced2" (UID: "212f33db-61b0-45a1-ac8e-a925bf9eced2"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 15:01:08 crc kubenswrapper[4897]: I0228 15:01:08.071678 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/212f33db-61b0-45a1-ac8e-a925bf9eced2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "212f33db-61b0-45a1-ac8e-a925bf9eced2" (UID: "212f33db-61b0-45a1-ac8e-a925bf9eced2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 15:01:08 crc kubenswrapper[4897]: I0228 15:01:08.096483 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/212f33db-61b0-45a1-ac8e-a925bf9eced2-config-data" (OuterVolumeSpecName: "config-data") pod "212f33db-61b0-45a1-ac8e-a925bf9eced2" (UID: "212f33db-61b0-45a1-ac8e-a925bf9eced2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 15:01:08 crc kubenswrapper[4897]: I0228 15:01:08.128472 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlj8h\" (UniqueName: \"kubernetes.io/projected/212f33db-61b0-45a1-ac8e-a925bf9eced2-kube-api-access-hlj8h\") on node \"crc\" DevicePath \"\"" Feb 28 15:01:08 crc kubenswrapper[4897]: I0228 15:01:08.128504 4897 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/212f33db-61b0-45a1-ac8e-a925bf9eced2-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 28 15:01:08 crc kubenswrapper[4897]: I0228 15:01:08.128514 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/212f33db-61b0-45a1-ac8e-a925bf9eced2-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 15:01:08 crc kubenswrapper[4897]: I0228 15:01:08.128523 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/212f33db-61b0-45a1-ac8e-a925bf9eced2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 15:01:08 crc kubenswrapper[4897]: I0228 15:01:08.408149 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29538181-mbx9p" event={"ID":"212f33db-61b0-45a1-ac8e-a925bf9eced2","Type":"ContainerDied","Data":"693a91baaf7e81cadc33d8ecc994432ecfe8ed3fa62380c72ee3bbffe48818a6"} Feb 28 15:01:08 crc kubenswrapper[4897]: I0228 15:01:08.408635 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="693a91baaf7e81cadc33d8ecc994432ecfe8ed3fa62380c72ee3bbffe48818a6" Feb 28 15:01:08 crc kubenswrapper[4897]: I0228 15:01:08.408745 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29538181-mbx9p" Feb 28 15:01:09 crc kubenswrapper[4897]: I0228 15:01:09.109409 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-p6l9t/must-gather-gh5xr"] Feb 28 15:01:09 crc kubenswrapper[4897]: I0228 15:01:09.109775 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-p6l9t/must-gather-gh5xr" podUID="2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677" containerName="copy" containerID="cri-o://3ff63bfe4db0f14cc4370ac8a7aa82162a3bfca638ea2f26cce5547ec01d1a59" gracePeriod=2 Feb 28 15:01:09 crc kubenswrapper[4897]: I0228 15:01:09.120285 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-p6l9t/must-gather-gh5xr"] Feb 28 15:01:09 crc kubenswrapper[4897]: I0228 15:01:09.421370 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-p6l9t_must-gather-gh5xr_2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677/copy/0.log" Feb 28 15:01:09 crc kubenswrapper[4897]: I0228 15:01:09.422363 4897 generic.go:334] "Generic (PLEG): container finished" podID="2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677" containerID="3ff63bfe4db0f14cc4370ac8a7aa82162a3bfca638ea2f26cce5547ec01d1a59" exitCode=143 Feb 28 15:01:09 crc kubenswrapper[4897]: I0228 15:01:09.564788 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-p6l9t_must-gather-gh5xr_2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677/copy/0.log" Feb 28 15:01:09 crc kubenswrapper[4897]: I0228 15:01:09.565578 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-p6l9t/must-gather-gh5xr" Feb 28 15:01:09 crc kubenswrapper[4897]: I0228 15:01:09.665981 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsl6f\" (UniqueName: \"kubernetes.io/projected/2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677-kube-api-access-vsl6f\") pod \"2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677\" (UID: \"2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677\") " Feb 28 15:01:09 crc kubenswrapper[4897]: I0228 15:01:09.666050 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677-must-gather-output\") pod \"2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677\" (UID: \"2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677\") " Feb 28 15:01:09 crc kubenswrapper[4897]: I0228 15:01:09.671964 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677-kube-api-access-vsl6f" (OuterVolumeSpecName: "kube-api-access-vsl6f") pod "2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677" (UID: "2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677"). InnerVolumeSpecName "kube-api-access-vsl6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 15:01:09 crc kubenswrapper[4897]: I0228 15:01:09.768909 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsl6f\" (UniqueName: \"kubernetes.io/projected/2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677-kube-api-access-vsl6f\") on node \"crc\" DevicePath \"\"" Feb 28 15:01:09 crc kubenswrapper[4897]: I0228 15:01:09.863989 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677" (UID: "2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 15:01:09 crc kubenswrapper[4897]: I0228 15:01:09.871020 4897 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 28 15:01:10 crc kubenswrapper[4897]: I0228 15:01:10.467888 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-p6l9t_must-gather-gh5xr_2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677/copy/0.log" Feb 28 15:01:10 crc kubenswrapper[4897]: I0228 15:01:10.482821 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p6l9t/must-gather-gh5xr" Feb 28 15:01:10 crc kubenswrapper[4897]: I0228 15:01:10.501709 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677" path="/var/lib/kubelet/pods/2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677/volumes" Feb 28 15:01:10 crc kubenswrapper[4897]: I0228 15:01:10.503364 4897 scope.go:117] "RemoveContainer" containerID="3ff63bfe4db0f14cc4370ac8a7aa82162a3bfca638ea2f26cce5547ec01d1a59" Feb 28 15:01:10 crc kubenswrapper[4897]: I0228 15:01:10.553799 4897 scope.go:117] "RemoveContainer" containerID="747af93356d4737fbde78d20ea726e0bc5e1960bc3e1ec3996ff9ed3d14d14a5" Feb 28 15:01:27 crc kubenswrapper[4897]: I0228 15:01:27.761612 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-52k6v"] Feb 28 15:01:27 crc kubenswrapper[4897]: E0228 15:01:27.763018 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677" containerName="copy" Feb 28 15:01:27 crc kubenswrapper[4897]: I0228 15:01:27.763040 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677" containerName="copy" Feb 28 15:01:27 crc kubenswrapper[4897]: E0228 15:01:27.763076 4897 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677" containerName="gather" Feb 28 15:01:27 crc kubenswrapper[4897]: I0228 15:01:27.763091 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677" containerName="gather" Feb 28 15:01:27 crc kubenswrapper[4897]: E0228 15:01:27.763124 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="212f33db-61b0-45a1-ac8e-a925bf9eced2" containerName="keystone-cron" Feb 28 15:01:27 crc kubenswrapper[4897]: I0228 15:01:27.763136 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="212f33db-61b0-45a1-ac8e-a925bf9eced2" containerName="keystone-cron" Feb 28 15:01:27 crc kubenswrapper[4897]: I0228 15:01:27.763525 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="212f33db-61b0-45a1-ac8e-a925bf9eced2" containerName="keystone-cron" Feb 28 15:01:27 crc kubenswrapper[4897]: I0228 15:01:27.763578 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677" containerName="gather" Feb 28 15:01:27 crc kubenswrapper[4897]: I0228 15:01:27.763610 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eeb7c1d-7d5e-4122-ad7d-4945c2a6a677" containerName="copy" Feb 28 15:01:27 crc kubenswrapper[4897]: I0228 15:01:27.766188 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-52k6v" Feb 28 15:01:27 crc kubenswrapper[4897]: I0228 15:01:27.777227 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-52k6v"] Feb 28 15:01:27 crc kubenswrapper[4897]: I0228 15:01:27.940340 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21131094-5365-4364-9d7b-9f21d6c14da0-catalog-content\") pod \"certified-operators-52k6v\" (UID: \"21131094-5365-4364-9d7b-9f21d6c14da0\") " pod="openshift-marketplace/certified-operators-52k6v" Feb 28 15:01:27 crc kubenswrapper[4897]: I0228 15:01:27.940476 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21131094-5365-4364-9d7b-9f21d6c14da0-utilities\") pod \"certified-operators-52k6v\" (UID: \"21131094-5365-4364-9d7b-9f21d6c14da0\") " pod="openshift-marketplace/certified-operators-52k6v" Feb 28 15:01:27 crc kubenswrapper[4897]: I0228 15:01:27.940510 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs8xp\" (UniqueName: \"kubernetes.io/projected/21131094-5365-4364-9d7b-9f21d6c14da0-kube-api-access-bs8xp\") pod \"certified-operators-52k6v\" (UID: \"21131094-5365-4364-9d7b-9f21d6c14da0\") " pod="openshift-marketplace/certified-operators-52k6v" Feb 28 15:01:28 crc kubenswrapper[4897]: I0228 15:01:28.042451 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21131094-5365-4364-9d7b-9f21d6c14da0-catalog-content\") pod \"certified-operators-52k6v\" (UID: \"21131094-5365-4364-9d7b-9f21d6c14da0\") " pod="openshift-marketplace/certified-operators-52k6v" Feb 28 15:01:28 crc kubenswrapper[4897]: I0228 15:01:28.042616 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21131094-5365-4364-9d7b-9f21d6c14da0-utilities\") pod \"certified-operators-52k6v\" (UID: \"21131094-5365-4364-9d7b-9f21d6c14da0\") " pod="openshift-marketplace/certified-operators-52k6v" Feb 28 15:01:28 crc kubenswrapper[4897]: I0228 15:01:28.042973 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21131094-5365-4364-9d7b-9f21d6c14da0-catalog-content\") pod \"certified-operators-52k6v\" (UID: \"21131094-5365-4364-9d7b-9f21d6c14da0\") " pod="openshift-marketplace/certified-operators-52k6v" Feb 28 15:01:28 crc kubenswrapper[4897]: I0228 15:01:28.043005 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21131094-5365-4364-9d7b-9f21d6c14da0-utilities\") pod \"certified-operators-52k6v\" (UID: \"21131094-5365-4364-9d7b-9f21d6c14da0\") " pod="openshift-marketplace/certified-operators-52k6v" Feb 28 15:01:28 crc kubenswrapper[4897]: I0228 15:01:28.042902 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs8xp\" (UniqueName: \"kubernetes.io/projected/21131094-5365-4364-9d7b-9f21d6c14da0-kube-api-access-bs8xp\") pod \"certified-operators-52k6v\" (UID: \"21131094-5365-4364-9d7b-9f21d6c14da0\") " pod="openshift-marketplace/certified-operators-52k6v" Feb 28 15:01:28 crc kubenswrapper[4897]: I0228 15:01:28.072404 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs8xp\" (UniqueName: \"kubernetes.io/projected/21131094-5365-4364-9d7b-9f21d6c14da0-kube-api-access-bs8xp\") pod \"certified-operators-52k6v\" (UID: \"21131094-5365-4364-9d7b-9f21d6c14da0\") " pod="openshift-marketplace/certified-operators-52k6v" Feb 28 15:01:28 crc kubenswrapper[4897]: I0228 15:01:28.136521 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-52k6v" Feb 28 15:01:28 crc kubenswrapper[4897]: I0228 15:01:28.629648 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-52k6v"] Feb 28 15:01:28 crc kubenswrapper[4897]: I0228 15:01:28.701916 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-52k6v" event={"ID":"21131094-5365-4364-9d7b-9f21d6c14da0","Type":"ContainerStarted","Data":"3499bb356a2d1df346df1bc10965e50330ab2256901ba1cb9c05350af7ae0215"} Feb 28 15:01:29 crc kubenswrapper[4897]: I0228 15:01:29.714356 4897 generic.go:334] "Generic (PLEG): container finished" podID="21131094-5365-4364-9d7b-9f21d6c14da0" containerID="cea90308e87851ebacc45c2cde2b36cd25336fa046747b2c1c848c9ade674289" exitCode=0 Feb 28 15:01:29 crc kubenswrapper[4897]: I0228 15:01:29.714442 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-52k6v" event={"ID":"21131094-5365-4364-9d7b-9f21d6c14da0","Type":"ContainerDied","Data":"cea90308e87851ebacc45c2cde2b36cd25336fa046747b2c1c848c9ade674289"} Feb 28 15:01:29 crc kubenswrapper[4897]: I0228 15:01:29.717202 4897 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 15:01:30 crc kubenswrapper[4897]: E0228 15:01:30.382559 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=c649bff3a95aba662bcce2caae28c3708550f8f13ed49f0891408168e71420e7/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 28 15:01:30 crc kubenswrapper[4897]: E0228 15:01:30.382799 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bs8xp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-52k6v_openshift-marketplace(21131094-5365-4364-9d7b-9f21d6c14da0): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=c649bff3a95aba662bcce2caae28c3708550f8f13ed49f0891408168e71420e7/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 15:01:30 crc 
kubenswrapper[4897]: E0228 15:01:30.384124 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=c649bff3a95aba662bcce2caae28c3708550f8f13ed49f0891408168e71420e7/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/certified-operators-52k6v" podUID="21131094-5365-4364-9d7b-9f21d6c14da0" Feb 28 15:01:30 crc kubenswrapper[4897]: E0228 15:01:30.727697 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-52k6v" podUID="21131094-5365-4364-9d7b-9f21d6c14da0" Feb 28 15:01:43 crc kubenswrapper[4897]: E0228 15:01:43.067368 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=c649bff3a95aba662bcce2caae28c3708550f8f13ed49f0891408168e71420e7/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 28 15:01:43 crc kubenswrapper[4897]: E0228 15:01:43.068391 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bs8xp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-52k6v_openshift-marketplace(21131094-5365-4364-9d7b-9f21d6c14da0): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=c649bff3a95aba662bcce2caae28c3708550f8f13ed49f0891408168e71420e7/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 28 15:01:43 crc kubenswrapper[4897]: E0228 15:01:43.069652 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=c649bff3a95aba662bcce2caae28c3708550f8f13ed49f0891408168e71420e7/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/certified-operators-52k6v" podUID="21131094-5365-4364-9d7b-9f21d6c14da0" Feb 28 15:01:54 crc kubenswrapper[4897]: I0228 15:01:54.338026 4897 scope.go:117] "RemoveContainer" containerID="1e5d6bfe0fa72c614c7807d97e46018ec634424a23b377e055d4210cfd59e814" Feb 28 15:01:57 crc kubenswrapper[4897]: E0228 15:01:57.461746 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-52k6v" podUID="21131094-5365-4364-9d7b-9f21d6c14da0" Feb 28 15:02:00 crc kubenswrapper[4897]: I0228 15:02:00.172071 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538182-v7drs"] Feb 28 15:02:00 crc kubenswrapper[4897]: I0228 15:02:00.175014 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538182-v7drs" Feb 28 15:02:00 crc kubenswrapper[4897]: I0228 15:02:00.187952 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538182-v7drs"] Feb 28 15:02:00 crc kubenswrapper[4897]: I0228 15:02:00.199360 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 15:02:00 crc kubenswrapper[4897]: I0228 15:02:00.199631 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 15:02:00 crc kubenswrapper[4897]: I0228 15:02:00.199937 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 15:02:00 crc kubenswrapper[4897]: I0228 15:02:00.363890 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tkkl\" (UniqueName: \"kubernetes.io/projected/8127f512-c9ca-4dd4-83a8-ecc16e229187-kube-api-access-5tkkl\") pod \"auto-csr-approver-29538182-v7drs\" (UID: \"8127f512-c9ca-4dd4-83a8-ecc16e229187\") " pod="openshift-infra/auto-csr-approver-29538182-v7drs" Feb 28 15:02:00 crc kubenswrapper[4897]: I0228 15:02:00.474034 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tkkl\" (UniqueName: \"kubernetes.io/projected/8127f512-c9ca-4dd4-83a8-ecc16e229187-kube-api-access-5tkkl\") pod \"auto-csr-approver-29538182-v7drs\" (UID: \"8127f512-c9ca-4dd4-83a8-ecc16e229187\") " pod="openshift-infra/auto-csr-approver-29538182-v7drs" Feb 28 15:02:00 crc kubenswrapper[4897]: I0228 15:02:00.511109 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tkkl\" (UniqueName: \"kubernetes.io/projected/8127f512-c9ca-4dd4-83a8-ecc16e229187-kube-api-access-5tkkl\") pod \"auto-csr-approver-29538182-v7drs\" (UID: \"8127f512-c9ca-4dd4-83a8-ecc16e229187\") " 
pod="openshift-infra/auto-csr-approver-29538182-v7drs" Feb 28 15:02:00 crc kubenswrapper[4897]: I0228 15:02:00.526885 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538182-v7drs" Feb 28 15:02:01 crc kubenswrapper[4897]: I0228 15:02:01.113637 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538182-v7drs"] Feb 28 15:02:02 crc kubenswrapper[4897]: E0228 15:02:02.054280 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 28 15:02:02 crc kubenswrapper[4897]: E0228 15:02:02.054455 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 15:02:02 crc kubenswrapper[4897]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 28 15:02:02 crc kubenswrapper[4897]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5tkkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29538182-v7drs_openshift-infra(8127f512-c9ca-4dd4-83a8-ecc16e229187): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 28 15:02:02 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 28 15:02:02 crc kubenswrapper[4897]: E0228 15:02:02.055742 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29538182-v7drs" podUID="8127f512-c9ca-4dd4-83a8-ecc16e229187" Feb 28 15:02:02 crc kubenswrapper[4897]: I0228 15:02:02.114071 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538182-v7drs" 
event={"ID":"8127f512-c9ca-4dd4-83a8-ecc16e229187","Type":"ContainerStarted","Data":"bdc76842e9f2fc024961131d8f518d2beda0dbecb18892444fa9d8b5246dc72f"} Feb 28 15:02:02 crc kubenswrapper[4897]: E0228 15:02:02.116409 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538182-v7drs" podUID="8127f512-c9ca-4dd4-83a8-ecc16e229187" Feb 28 15:02:03 crc kubenswrapper[4897]: E0228 15:02:03.127148 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29538182-v7drs" podUID="8127f512-c9ca-4dd4-83a8-ecc16e229187" Feb 28 15:02:14 crc kubenswrapper[4897]: I0228 15:02:14.258533 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-52k6v" event={"ID":"21131094-5365-4364-9d7b-9f21d6c14da0","Type":"ContainerStarted","Data":"c351cde252954fe556b00d116cefbcbc798f8d2843b5883b0a40f4f3e3325624"} Feb 28 15:02:15 crc kubenswrapper[4897]: I0228 15:02:15.274802 4897 generic.go:334] "Generic (PLEG): container finished" podID="21131094-5365-4364-9d7b-9f21d6c14da0" containerID="c351cde252954fe556b00d116cefbcbc798f8d2843b5883b0a40f4f3e3325624" exitCode=0 Feb 28 15:02:15 crc kubenswrapper[4897]: I0228 15:02:15.274862 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-52k6v" event={"ID":"21131094-5365-4364-9d7b-9f21d6c14da0","Type":"ContainerDied","Data":"c351cde252954fe556b00d116cefbcbc798f8d2843b5883b0a40f4f3e3325624"} Feb 28 15:02:16 crc kubenswrapper[4897]: I0228 15:02:16.315649 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-52k6v" 
event={"ID":"21131094-5365-4364-9d7b-9f21d6c14da0","Type":"ContainerStarted","Data":"c6a5904e021ee7c8f59f76ed7b29d626581094e5b22e17d7c13f92da91c1d184"} Feb 28 15:02:16 crc kubenswrapper[4897]: I0228 15:02:16.359709 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-52k6v" podStartSLOduration=3.362854074 podStartE2EDuration="49.359688458s" podCreationTimestamp="2026-02-28 15:01:27 +0000 UTC" firstStartedPulling="2026-02-28 15:01:29.716908729 +0000 UTC m=+6303.959229386" lastFinishedPulling="2026-02-28 15:02:15.713743083 +0000 UTC m=+6349.956063770" observedRunningTime="2026-02-28 15:02:16.343350106 +0000 UTC m=+6350.585670813" watchObservedRunningTime="2026-02-28 15:02:16.359688458 +0000 UTC m=+6350.602009125" Feb 28 15:02:17 crc kubenswrapper[4897]: I0228 15:02:17.332343 4897 generic.go:334] "Generic (PLEG): container finished" podID="8127f512-c9ca-4dd4-83a8-ecc16e229187" containerID="15cf924822b4b196d28d1b6eeaf02690a8ceee4a21b5190aa6e349a22bcd5a00" exitCode=0 Feb 28 15:02:17 crc kubenswrapper[4897]: I0228 15:02:17.332374 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538182-v7drs" event={"ID":"8127f512-c9ca-4dd4-83a8-ecc16e229187","Type":"ContainerDied","Data":"15cf924822b4b196d28d1b6eeaf02690a8ceee4a21b5190aa6e349a22bcd5a00"} Feb 28 15:02:18 crc kubenswrapper[4897]: I0228 15:02:18.137515 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-52k6v" Feb 28 15:02:18 crc kubenswrapper[4897]: I0228 15:02:18.139963 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-52k6v" Feb 28 15:02:18 crc kubenswrapper[4897]: I0228 15:02:18.736870 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538182-v7drs" Feb 28 15:02:18 crc kubenswrapper[4897]: I0228 15:02:18.900702 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tkkl\" (UniqueName: \"kubernetes.io/projected/8127f512-c9ca-4dd4-83a8-ecc16e229187-kube-api-access-5tkkl\") pod \"8127f512-c9ca-4dd4-83a8-ecc16e229187\" (UID: \"8127f512-c9ca-4dd4-83a8-ecc16e229187\") " Feb 28 15:02:18 crc kubenswrapper[4897]: I0228 15:02:18.917658 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8127f512-c9ca-4dd4-83a8-ecc16e229187-kube-api-access-5tkkl" (OuterVolumeSpecName: "kube-api-access-5tkkl") pod "8127f512-c9ca-4dd4-83a8-ecc16e229187" (UID: "8127f512-c9ca-4dd4-83a8-ecc16e229187"). InnerVolumeSpecName "kube-api-access-5tkkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 15:02:19 crc kubenswrapper[4897]: I0228 15:02:19.003180 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tkkl\" (UniqueName: \"kubernetes.io/projected/8127f512-c9ca-4dd4-83a8-ecc16e229187-kube-api-access-5tkkl\") on node \"crc\" DevicePath \"\"" Feb 28 15:02:19 crc kubenswrapper[4897]: I0228 15:02:19.230498 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-52k6v" podUID="21131094-5365-4364-9d7b-9f21d6c14da0" containerName="registry-server" probeResult="failure" output=< Feb 28 15:02:19 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 28 15:02:19 crc kubenswrapper[4897]: > Feb 28 15:02:19 crc kubenswrapper[4897]: I0228 15:02:19.355437 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538182-v7drs" event={"ID":"8127f512-c9ca-4dd4-83a8-ecc16e229187","Type":"ContainerDied","Data":"bdc76842e9f2fc024961131d8f518d2beda0dbecb18892444fa9d8b5246dc72f"} Feb 28 15:02:19 crc kubenswrapper[4897]: I0228 15:02:19.355774 
4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdc76842e9f2fc024961131d8f518d2beda0dbecb18892444fa9d8b5246dc72f" Feb 28 15:02:19 crc kubenswrapper[4897]: I0228 15:02:19.355527 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538182-v7drs" Feb 28 15:02:19 crc kubenswrapper[4897]: I0228 15:02:19.856423 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538176-c6b7s"] Feb 28 15:02:19 crc kubenswrapper[4897]: I0228 15:02:19.868090 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538176-c6b7s"] Feb 28 15:02:20 crc kubenswrapper[4897]: I0228 15:02:20.476541 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="614e76f9-7f4a-4075-bb88-e9b651ef700f" path="/var/lib/kubelet/pods/614e76f9-7f4a-4075-bb88-e9b651ef700f/volumes" Feb 28 15:02:28 crc kubenswrapper[4897]: I0228 15:02:28.216990 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-52k6v" Feb 28 15:02:28 crc kubenswrapper[4897]: I0228 15:02:28.288680 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-52k6v" Feb 28 15:02:28 crc kubenswrapper[4897]: I0228 15:02:28.979718 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-52k6v"] Feb 28 15:02:29 crc kubenswrapper[4897]: I0228 15:02:29.481754 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-52k6v" podUID="21131094-5365-4364-9d7b-9f21d6c14da0" containerName="registry-server" containerID="cri-o://c6a5904e021ee7c8f59f76ed7b29d626581094e5b22e17d7c13f92da91c1d184" gracePeriod=2 Feb 28 15:02:30 crc kubenswrapper[4897]: I0228 15:02:30.009718 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-52k6v" Feb 28 15:02:30 crc kubenswrapper[4897]: I0228 15:02:30.081752 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21131094-5365-4364-9d7b-9f21d6c14da0-catalog-content\") pod \"21131094-5365-4364-9d7b-9f21d6c14da0\" (UID: \"21131094-5365-4364-9d7b-9f21d6c14da0\") " Feb 28 15:02:30 crc kubenswrapper[4897]: I0228 15:02:30.081953 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs8xp\" (UniqueName: \"kubernetes.io/projected/21131094-5365-4364-9d7b-9f21d6c14da0-kube-api-access-bs8xp\") pod \"21131094-5365-4364-9d7b-9f21d6c14da0\" (UID: \"21131094-5365-4364-9d7b-9f21d6c14da0\") " Feb 28 15:02:30 crc kubenswrapper[4897]: I0228 15:02:30.082066 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21131094-5365-4364-9d7b-9f21d6c14da0-utilities\") pod \"21131094-5365-4364-9d7b-9f21d6c14da0\" (UID: \"21131094-5365-4364-9d7b-9f21d6c14da0\") " Feb 28 15:02:30 crc kubenswrapper[4897]: I0228 15:02:30.083031 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21131094-5365-4364-9d7b-9f21d6c14da0-utilities" (OuterVolumeSpecName: "utilities") pod "21131094-5365-4364-9d7b-9f21d6c14da0" (UID: "21131094-5365-4364-9d7b-9f21d6c14da0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 15:02:30 crc kubenswrapper[4897]: I0228 15:02:30.090369 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21131094-5365-4364-9d7b-9f21d6c14da0-kube-api-access-bs8xp" (OuterVolumeSpecName: "kube-api-access-bs8xp") pod "21131094-5365-4364-9d7b-9f21d6c14da0" (UID: "21131094-5365-4364-9d7b-9f21d6c14da0"). InnerVolumeSpecName "kube-api-access-bs8xp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 15:02:30 crc kubenswrapper[4897]: I0228 15:02:30.144545 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21131094-5365-4364-9d7b-9f21d6c14da0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "21131094-5365-4364-9d7b-9f21d6c14da0" (UID: "21131094-5365-4364-9d7b-9f21d6c14da0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 15:02:30 crc kubenswrapper[4897]: I0228 15:02:30.184553 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bs8xp\" (UniqueName: \"kubernetes.io/projected/21131094-5365-4364-9d7b-9f21d6c14da0-kube-api-access-bs8xp\") on node \"crc\" DevicePath \"\"" Feb 28 15:02:30 crc kubenswrapper[4897]: I0228 15:02:30.184580 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21131094-5365-4364-9d7b-9f21d6c14da0-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 15:02:30 crc kubenswrapper[4897]: I0228 15:02:30.184589 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21131094-5365-4364-9d7b-9f21d6c14da0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 15:02:30 crc kubenswrapper[4897]: I0228 15:02:30.506361 4897 generic.go:334] "Generic (PLEG): container finished" podID="21131094-5365-4364-9d7b-9f21d6c14da0" containerID="c6a5904e021ee7c8f59f76ed7b29d626581094e5b22e17d7c13f92da91c1d184" exitCode=0 Feb 28 15:02:30 crc kubenswrapper[4897]: I0228 15:02:30.506412 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-52k6v" event={"ID":"21131094-5365-4364-9d7b-9f21d6c14da0","Type":"ContainerDied","Data":"c6a5904e021ee7c8f59f76ed7b29d626581094e5b22e17d7c13f92da91c1d184"} Feb 28 15:02:30 crc kubenswrapper[4897]: I0228 15:02:30.506447 4897 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-52k6v" event={"ID":"21131094-5365-4364-9d7b-9f21d6c14da0","Type":"ContainerDied","Data":"3499bb356a2d1df346df1bc10965e50330ab2256901ba1cb9c05350af7ae0215"} Feb 28 15:02:30 crc kubenswrapper[4897]: I0228 15:02:30.506469 4897 scope.go:117] "RemoveContainer" containerID="c6a5904e021ee7c8f59f76ed7b29d626581094e5b22e17d7c13f92da91c1d184" Feb 28 15:02:30 crc kubenswrapper[4897]: I0228 15:02:30.506469 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-52k6v" Feb 28 15:02:30 crc kubenswrapper[4897]: I0228 15:02:30.549866 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-52k6v"] Feb 28 15:02:30 crc kubenswrapper[4897]: I0228 15:02:30.553063 4897 scope.go:117] "RemoveContainer" containerID="c351cde252954fe556b00d116cefbcbc798f8d2843b5883b0a40f4f3e3325624" Feb 28 15:02:30 crc kubenswrapper[4897]: I0228 15:02:30.576978 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-52k6v"] Feb 28 15:02:30 crc kubenswrapper[4897]: I0228 15:02:30.598631 4897 scope.go:117] "RemoveContainer" containerID="cea90308e87851ebacc45c2cde2b36cd25336fa046747b2c1c848c9ade674289" Feb 28 15:02:30 crc kubenswrapper[4897]: I0228 15:02:30.646633 4897 scope.go:117] "RemoveContainer" containerID="c6a5904e021ee7c8f59f76ed7b29d626581094e5b22e17d7c13f92da91c1d184" Feb 28 15:02:30 crc kubenswrapper[4897]: E0228 15:02:30.647332 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6a5904e021ee7c8f59f76ed7b29d626581094e5b22e17d7c13f92da91c1d184\": container with ID starting with c6a5904e021ee7c8f59f76ed7b29d626581094e5b22e17d7c13f92da91c1d184 not found: ID does not exist" containerID="c6a5904e021ee7c8f59f76ed7b29d626581094e5b22e17d7c13f92da91c1d184" Feb 28 15:02:30 crc kubenswrapper[4897]: I0228 
15:02:30.647378 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6a5904e021ee7c8f59f76ed7b29d626581094e5b22e17d7c13f92da91c1d184"} err="failed to get container status \"c6a5904e021ee7c8f59f76ed7b29d626581094e5b22e17d7c13f92da91c1d184\": rpc error: code = NotFound desc = could not find container \"c6a5904e021ee7c8f59f76ed7b29d626581094e5b22e17d7c13f92da91c1d184\": container with ID starting with c6a5904e021ee7c8f59f76ed7b29d626581094e5b22e17d7c13f92da91c1d184 not found: ID does not exist" Feb 28 15:02:30 crc kubenswrapper[4897]: I0228 15:02:30.647413 4897 scope.go:117] "RemoveContainer" containerID="c351cde252954fe556b00d116cefbcbc798f8d2843b5883b0a40f4f3e3325624" Feb 28 15:02:30 crc kubenswrapper[4897]: E0228 15:02:30.648064 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c351cde252954fe556b00d116cefbcbc798f8d2843b5883b0a40f4f3e3325624\": container with ID starting with c351cde252954fe556b00d116cefbcbc798f8d2843b5883b0a40f4f3e3325624 not found: ID does not exist" containerID="c351cde252954fe556b00d116cefbcbc798f8d2843b5883b0a40f4f3e3325624" Feb 28 15:02:30 crc kubenswrapper[4897]: I0228 15:02:30.648151 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c351cde252954fe556b00d116cefbcbc798f8d2843b5883b0a40f4f3e3325624"} err="failed to get container status \"c351cde252954fe556b00d116cefbcbc798f8d2843b5883b0a40f4f3e3325624\": rpc error: code = NotFound desc = could not find container \"c351cde252954fe556b00d116cefbcbc798f8d2843b5883b0a40f4f3e3325624\": container with ID starting with c351cde252954fe556b00d116cefbcbc798f8d2843b5883b0a40f4f3e3325624 not found: ID does not exist" Feb 28 15:02:30 crc kubenswrapper[4897]: I0228 15:02:30.648197 4897 scope.go:117] "RemoveContainer" containerID="cea90308e87851ebacc45c2cde2b36cd25336fa046747b2c1c848c9ade674289" Feb 28 15:02:30 crc 
kubenswrapper[4897]: E0228 15:02:30.648752 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cea90308e87851ebacc45c2cde2b36cd25336fa046747b2c1c848c9ade674289\": container with ID starting with cea90308e87851ebacc45c2cde2b36cd25336fa046747b2c1c848c9ade674289 not found: ID does not exist" containerID="cea90308e87851ebacc45c2cde2b36cd25336fa046747b2c1c848c9ade674289" Feb 28 15:02:30 crc kubenswrapper[4897]: I0228 15:02:30.648827 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cea90308e87851ebacc45c2cde2b36cd25336fa046747b2c1c848c9ade674289"} err="failed to get container status \"cea90308e87851ebacc45c2cde2b36cd25336fa046747b2c1c848c9ade674289\": rpc error: code = NotFound desc = could not find container \"cea90308e87851ebacc45c2cde2b36cd25336fa046747b2c1c848c9ade674289\": container with ID starting with cea90308e87851ebacc45c2cde2b36cd25336fa046747b2c1c848c9ade674289 not found: ID does not exist" Feb 28 15:02:32 crc kubenswrapper[4897]: I0228 15:02:32.481039 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21131094-5365-4364-9d7b-9f21d6c14da0" path="/var/lib/kubelet/pods/21131094-5365-4364-9d7b-9f21d6c14da0/volumes" Feb 28 15:02:33 crc kubenswrapper[4897]: I0228 15:02:33.371254 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 15:02:33 crc kubenswrapper[4897]: I0228 15:02:33.371627 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 28 15:02:42 crc kubenswrapper[4897]: I0228 15:02:42.769374 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-t5ddl"] Feb 28 15:02:42 crc kubenswrapper[4897]: E0228 15:02:42.770609 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21131094-5365-4364-9d7b-9f21d6c14da0" containerName="extract-utilities" Feb 28 15:02:42 crc kubenswrapper[4897]: I0228 15:02:42.770635 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="21131094-5365-4364-9d7b-9f21d6c14da0" containerName="extract-utilities" Feb 28 15:02:42 crc kubenswrapper[4897]: E0228 15:02:42.770658 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21131094-5365-4364-9d7b-9f21d6c14da0" containerName="extract-content" Feb 28 15:02:42 crc kubenswrapper[4897]: I0228 15:02:42.770669 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="21131094-5365-4364-9d7b-9f21d6c14da0" containerName="extract-content" Feb 28 15:02:42 crc kubenswrapper[4897]: E0228 15:02:42.770726 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21131094-5365-4364-9d7b-9f21d6c14da0" containerName="registry-server" Feb 28 15:02:42 crc kubenswrapper[4897]: I0228 15:02:42.770739 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="21131094-5365-4364-9d7b-9f21d6c14da0" containerName="registry-server" Feb 28 15:02:42 crc kubenswrapper[4897]: E0228 15:02:42.770764 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8127f512-c9ca-4dd4-83a8-ecc16e229187" containerName="oc" Feb 28 15:02:42 crc kubenswrapper[4897]: I0228 15:02:42.770776 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="8127f512-c9ca-4dd4-83a8-ecc16e229187" containerName="oc" Feb 28 15:02:42 crc kubenswrapper[4897]: I0228 15:02:42.771133 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="8127f512-c9ca-4dd4-83a8-ecc16e229187" containerName="oc" Feb 28 15:02:42 crc 
kubenswrapper[4897]: I0228 15:02:42.771176 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="21131094-5365-4364-9d7b-9f21d6c14da0" containerName="registry-server" Feb 28 15:02:42 crc kubenswrapper[4897]: I0228 15:02:42.774736 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t5ddl" Feb 28 15:02:42 crc kubenswrapper[4897]: I0228 15:02:42.777470 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t5ddl"] Feb 28 15:02:42 crc kubenswrapper[4897]: I0228 15:02:42.777688 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97wvq\" (UniqueName: \"kubernetes.io/projected/97809262-e40a-4e71-968a-37207fa06ebd-kube-api-access-97wvq\") pod \"redhat-marketplace-t5ddl\" (UID: \"97809262-e40a-4e71-968a-37207fa06ebd\") " pod="openshift-marketplace/redhat-marketplace-t5ddl" Feb 28 15:02:42 crc kubenswrapper[4897]: I0228 15:02:42.777756 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97809262-e40a-4e71-968a-37207fa06ebd-catalog-content\") pod \"redhat-marketplace-t5ddl\" (UID: \"97809262-e40a-4e71-968a-37207fa06ebd\") " pod="openshift-marketplace/redhat-marketplace-t5ddl" Feb 28 15:02:42 crc kubenswrapper[4897]: I0228 15:02:42.777896 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97809262-e40a-4e71-968a-37207fa06ebd-utilities\") pod \"redhat-marketplace-t5ddl\" (UID: \"97809262-e40a-4e71-968a-37207fa06ebd\") " pod="openshift-marketplace/redhat-marketplace-t5ddl" Feb 28 15:02:42 crc kubenswrapper[4897]: I0228 15:02:42.880104 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/97809262-e40a-4e71-968a-37207fa06ebd-utilities\") pod \"redhat-marketplace-t5ddl\" (UID: \"97809262-e40a-4e71-968a-37207fa06ebd\") " pod="openshift-marketplace/redhat-marketplace-t5ddl" Feb 28 15:02:42 crc kubenswrapper[4897]: I0228 15:02:42.880560 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97wvq\" (UniqueName: \"kubernetes.io/projected/97809262-e40a-4e71-968a-37207fa06ebd-kube-api-access-97wvq\") pod \"redhat-marketplace-t5ddl\" (UID: \"97809262-e40a-4e71-968a-37207fa06ebd\") " pod="openshift-marketplace/redhat-marketplace-t5ddl" Feb 28 15:02:42 crc kubenswrapper[4897]: I0228 15:02:42.880588 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97809262-e40a-4e71-968a-37207fa06ebd-catalog-content\") pod \"redhat-marketplace-t5ddl\" (UID: \"97809262-e40a-4e71-968a-37207fa06ebd\") " pod="openshift-marketplace/redhat-marketplace-t5ddl" Feb 28 15:02:42 crc kubenswrapper[4897]: I0228 15:02:42.880661 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97809262-e40a-4e71-968a-37207fa06ebd-utilities\") pod \"redhat-marketplace-t5ddl\" (UID: \"97809262-e40a-4e71-968a-37207fa06ebd\") " pod="openshift-marketplace/redhat-marketplace-t5ddl" Feb 28 15:02:42 crc kubenswrapper[4897]: I0228 15:02:42.880933 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97809262-e40a-4e71-968a-37207fa06ebd-catalog-content\") pod \"redhat-marketplace-t5ddl\" (UID: \"97809262-e40a-4e71-968a-37207fa06ebd\") " pod="openshift-marketplace/redhat-marketplace-t5ddl" Feb 28 15:02:42 crc kubenswrapper[4897]: I0228 15:02:42.908465 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97wvq\" (UniqueName: 
\"kubernetes.io/projected/97809262-e40a-4e71-968a-37207fa06ebd-kube-api-access-97wvq\") pod \"redhat-marketplace-t5ddl\" (UID: \"97809262-e40a-4e71-968a-37207fa06ebd\") " pod="openshift-marketplace/redhat-marketplace-t5ddl" Feb 28 15:02:43 crc kubenswrapper[4897]: I0228 15:02:43.109216 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t5ddl" Feb 28 15:02:43 crc kubenswrapper[4897]: I0228 15:02:43.596531 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t5ddl"] Feb 28 15:02:43 crc kubenswrapper[4897]: I0228 15:02:43.686922 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5ddl" event={"ID":"97809262-e40a-4e71-968a-37207fa06ebd","Type":"ContainerStarted","Data":"713be09a7a5b438d71a2307bb5f80563dbfa86758e8c32ad5284e3993904cb91"} Feb 28 15:02:44 crc kubenswrapper[4897]: I0228 15:02:44.702672 4897 generic.go:334] "Generic (PLEG): container finished" podID="97809262-e40a-4e71-968a-37207fa06ebd" containerID="7aff68aa9bfc33ecaff01b0a8cd941f8e47550e650201099220b8b2e237bc5fe" exitCode=0 Feb 28 15:02:44 crc kubenswrapper[4897]: I0228 15:02:44.702782 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5ddl" event={"ID":"97809262-e40a-4e71-968a-37207fa06ebd","Type":"ContainerDied","Data":"7aff68aa9bfc33ecaff01b0a8cd941f8e47550e650201099220b8b2e237bc5fe"} Feb 28 15:02:45 crc kubenswrapper[4897]: I0228 15:02:45.717417 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5ddl" event={"ID":"97809262-e40a-4e71-968a-37207fa06ebd","Type":"ContainerStarted","Data":"832194e3b4b2e3663138f01163bc775224689b84730808c00e162b1a137bc9f5"} Feb 28 15:02:46 crc kubenswrapper[4897]: I0228 15:02:46.729668 4897 generic.go:334] "Generic (PLEG): container finished" podID="97809262-e40a-4e71-968a-37207fa06ebd" 
containerID="832194e3b4b2e3663138f01163bc775224689b84730808c00e162b1a137bc9f5" exitCode=0 Feb 28 15:02:46 crc kubenswrapper[4897]: I0228 15:02:46.729769 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5ddl" event={"ID":"97809262-e40a-4e71-968a-37207fa06ebd","Type":"ContainerDied","Data":"832194e3b4b2e3663138f01163bc775224689b84730808c00e162b1a137bc9f5"} Feb 28 15:02:47 crc kubenswrapper[4897]: I0228 15:02:47.741618 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5ddl" event={"ID":"97809262-e40a-4e71-968a-37207fa06ebd","Type":"ContainerStarted","Data":"36f9801c3dd8d79291992c0f1e43c232a3df21d244d95ce3e3954ed19ecec703"} Feb 28 15:02:47 crc kubenswrapper[4897]: I0228 15:02:47.777610 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-t5ddl" podStartSLOduration=3.3564585989999998 podStartE2EDuration="5.777592479s" podCreationTimestamp="2026-02-28 15:02:42 +0000 UTC" firstStartedPulling="2026-02-28 15:02:44.706555691 +0000 UTC m=+6378.948876368" lastFinishedPulling="2026-02-28 15:02:47.127689551 +0000 UTC m=+6381.370010248" observedRunningTime="2026-02-28 15:02:47.768654135 +0000 UTC m=+6382.010974812" watchObservedRunningTime="2026-02-28 15:02:47.777592479 +0000 UTC m=+6382.019913136" Feb 28 15:02:53 crc kubenswrapper[4897]: I0228 15:02:53.109641 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-t5ddl" Feb 28 15:02:53 crc kubenswrapper[4897]: I0228 15:02:53.110236 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-t5ddl" Feb 28 15:02:53 crc kubenswrapper[4897]: I0228 15:02:53.173655 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-t5ddl" Feb 28 15:02:53 crc kubenswrapper[4897]: I0228 
15:02:53.859222 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-t5ddl" Feb 28 15:02:53 crc kubenswrapper[4897]: I0228 15:02:53.916995 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t5ddl"] Feb 28 15:02:54 crc kubenswrapper[4897]: I0228 15:02:54.448633 4897 scope.go:117] "RemoveContainer" containerID="6b472618cc7b552285c9370fd80e91df8f9719ae11f28b38f514db4b79b625ee" Feb 28 15:02:55 crc kubenswrapper[4897]: I0228 15:02:55.823490 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-t5ddl" podUID="97809262-e40a-4e71-968a-37207fa06ebd" containerName="registry-server" containerID="cri-o://36f9801c3dd8d79291992c0f1e43c232a3df21d244d95ce3e3954ed19ecec703" gracePeriod=2 Feb 28 15:02:56 crc kubenswrapper[4897]: I0228 15:02:56.404625 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t5ddl" Feb 28 15:02:56 crc kubenswrapper[4897]: I0228 15:02:56.490456 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97809262-e40a-4e71-968a-37207fa06ebd-utilities\") pod \"97809262-e40a-4e71-968a-37207fa06ebd\" (UID: \"97809262-e40a-4e71-968a-37207fa06ebd\") " Feb 28 15:02:56 crc kubenswrapper[4897]: I0228 15:02:56.490605 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97wvq\" (UniqueName: \"kubernetes.io/projected/97809262-e40a-4e71-968a-37207fa06ebd-kube-api-access-97wvq\") pod \"97809262-e40a-4e71-968a-37207fa06ebd\" (UID: \"97809262-e40a-4e71-968a-37207fa06ebd\") " Feb 28 15:02:56 crc kubenswrapper[4897]: I0228 15:02:56.490667 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/97809262-e40a-4e71-968a-37207fa06ebd-catalog-content\") pod \"97809262-e40a-4e71-968a-37207fa06ebd\" (UID: \"97809262-e40a-4e71-968a-37207fa06ebd\") " Feb 28 15:02:56 crc kubenswrapper[4897]: I0228 15:02:56.491844 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97809262-e40a-4e71-968a-37207fa06ebd-utilities" (OuterVolumeSpecName: "utilities") pod "97809262-e40a-4e71-968a-37207fa06ebd" (UID: "97809262-e40a-4e71-968a-37207fa06ebd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 15:02:56 crc kubenswrapper[4897]: I0228 15:02:56.492255 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97809262-e40a-4e71-968a-37207fa06ebd-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 15:02:56 crc kubenswrapper[4897]: I0228 15:02:56.513959 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97809262-e40a-4e71-968a-37207fa06ebd-kube-api-access-97wvq" (OuterVolumeSpecName: "kube-api-access-97wvq") pod "97809262-e40a-4e71-968a-37207fa06ebd" (UID: "97809262-e40a-4e71-968a-37207fa06ebd"). InnerVolumeSpecName "kube-api-access-97wvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 15:02:56 crc kubenswrapper[4897]: I0228 15:02:56.535012 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97809262-e40a-4e71-968a-37207fa06ebd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "97809262-e40a-4e71-968a-37207fa06ebd" (UID: "97809262-e40a-4e71-968a-37207fa06ebd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 15:02:56 crc kubenswrapper[4897]: I0228 15:02:56.596034 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97wvq\" (UniqueName: \"kubernetes.io/projected/97809262-e40a-4e71-968a-37207fa06ebd-kube-api-access-97wvq\") on node \"crc\" DevicePath \"\"" Feb 28 15:02:56 crc kubenswrapper[4897]: I0228 15:02:56.596065 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97809262-e40a-4e71-968a-37207fa06ebd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 15:02:56 crc kubenswrapper[4897]: I0228 15:02:56.834645 4897 generic.go:334] "Generic (PLEG): container finished" podID="97809262-e40a-4e71-968a-37207fa06ebd" containerID="36f9801c3dd8d79291992c0f1e43c232a3df21d244d95ce3e3954ed19ecec703" exitCode=0 Feb 28 15:02:56 crc kubenswrapper[4897]: I0228 15:02:56.834693 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5ddl" event={"ID":"97809262-e40a-4e71-968a-37207fa06ebd","Type":"ContainerDied","Data":"36f9801c3dd8d79291992c0f1e43c232a3df21d244d95ce3e3954ed19ecec703"} Feb 28 15:02:56 crc kubenswrapper[4897]: I0228 15:02:56.835003 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5ddl" event={"ID":"97809262-e40a-4e71-968a-37207fa06ebd","Type":"ContainerDied","Data":"713be09a7a5b438d71a2307bb5f80563dbfa86758e8c32ad5284e3993904cb91"} Feb 28 15:02:56 crc kubenswrapper[4897]: I0228 15:02:56.835031 4897 scope.go:117] "RemoveContainer" containerID="36f9801c3dd8d79291992c0f1e43c232a3df21d244d95ce3e3954ed19ecec703" Feb 28 15:02:56 crc kubenswrapper[4897]: I0228 15:02:56.834769 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t5ddl" Feb 28 15:02:56 crc kubenswrapper[4897]: I0228 15:02:56.855244 4897 scope.go:117] "RemoveContainer" containerID="832194e3b4b2e3663138f01163bc775224689b84730808c00e162b1a137bc9f5" Feb 28 15:02:56 crc kubenswrapper[4897]: I0228 15:02:56.883023 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t5ddl"] Feb 28 15:02:56 crc kubenswrapper[4897]: I0228 15:02:56.885233 4897 scope.go:117] "RemoveContainer" containerID="7aff68aa9bfc33ecaff01b0a8cd941f8e47550e650201099220b8b2e237bc5fe" Feb 28 15:02:56 crc kubenswrapper[4897]: I0228 15:02:56.901091 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-t5ddl"] Feb 28 15:02:56 crc kubenswrapper[4897]: I0228 15:02:56.946557 4897 scope.go:117] "RemoveContainer" containerID="36f9801c3dd8d79291992c0f1e43c232a3df21d244d95ce3e3954ed19ecec703" Feb 28 15:02:56 crc kubenswrapper[4897]: E0228 15:02:56.949778 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36f9801c3dd8d79291992c0f1e43c232a3df21d244d95ce3e3954ed19ecec703\": container with ID starting with 36f9801c3dd8d79291992c0f1e43c232a3df21d244d95ce3e3954ed19ecec703 not found: ID does not exist" containerID="36f9801c3dd8d79291992c0f1e43c232a3df21d244d95ce3e3954ed19ecec703" Feb 28 15:02:56 crc kubenswrapper[4897]: I0228 15:02:56.949862 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36f9801c3dd8d79291992c0f1e43c232a3df21d244d95ce3e3954ed19ecec703"} err="failed to get container status \"36f9801c3dd8d79291992c0f1e43c232a3df21d244d95ce3e3954ed19ecec703\": rpc error: code = NotFound desc = could not find container \"36f9801c3dd8d79291992c0f1e43c232a3df21d244d95ce3e3954ed19ecec703\": container with ID starting with 36f9801c3dd8d79291992c0f1e43c232a3df21d244d95ce3e3954ed19ecec703 not found: 
ID does not exist" Feb 28 15:02:56 crc kubenswrapper[4897]: I0228 15:02:56.949892 4897 scope.go:117] "RemoveContainer" containerID="832194e3b4b2e3663138f01163bc775224689b84730808c00e162b1a137bc9f5" Feb 28 15:02:56 crc kubenswrapper[4897]: E0228 15:02:56.950377 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"832194e3b4b2e3663138f01163bc775224689b84730808c00e162b1a137bc9f5\": container with ID starting with 832194e3b4b2e3663138f01163bc775224689b84730808c00e162b1a137bc9f5 not found: ID does not exist" containerID="832194e3b4b2e3663138f01163bc775224689b84730808c00e162b1a137bc9f5" Feb 28 15:02:56 crc kubenswrapper[4897]: I0228 15:02:56.950429 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"832194e3b4b2e3663138f01163bc775224689b84730808c00e162b1a137bc9f5"} err="failed to get container status \"832194e3b4b2e3663138f01163bc775224689b84730808c00e162b1a137bc9f5\": rpc error: code = NotFound desc = could not find container \"832194e3b4b2e3663138f01163bc775224689b84730808c00e162b1a137bc9f5\": container with ID starting with 832194e3b4b2e3663138f01163bc775224689b84730808c00e162b1a137bc9f5 not found: ID does not exist" Feb 28 15:02:56 crc kubenswrapper[4897]: I0228 15:02:56.950448 4897 scope.go:117] "RemoveContainer" containerID="7aff68aa9bfc33ecaff01b0a8cd941f8e47550e650201099220b8b2e237bc5fe" Feb 28 15:02:56 crc kubenswrapper[4897]: E0228 15:02:56.950875 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7aff68aa9bfc33ecaff01b0a8cd941f8e47550e650201099220b8b2e237bc5fe\": container with ID starting with 7aff68aa9bfc33ecaff01b0a8cd941f8e47550e650201099220b8b2e237bc5fe not found: ID does not exist" containerID="7aff68aa9bfc33ecaff01b0a8cd941f8e47550e650201099220b8b2e237bc5fe" Feb 28 15:02:56 crc kubenswrapper[4897]: I0228 15:02:56.950920 4897 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aff68aa9bfc33ecaff01b0a8cd941f8e47550e650201099220b8b2e237bc5fe"} err="failed to get container status \"7aff68aa9bfc33ecaff01b0a8cd941f8e47550e650201099220b8b2e237bc5fe\": rpc error: code = NotFound desc = could not find container \"7aff68aa9bfc33ecaff01b0a8cd941f8e47550e650201099220b8b2e237bc5fe\": container with ID starting with 7aff68aa9bfc33ecaff01b0a8cd941f8e47550e650201099220b8b2e237bc5fe not found: ID does not exist" Feb 28 15:02:58 crc kubenswrapper[4897]: I0228 15:02:58.469437 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97809262-e40a-4e71-968a-37207fa06ebd" path="/var/lib/kubelet/pods/97809262-e40a-4e71-968a-37207fa06ebd/volumes" Feb 28 15:03:03 crc kubenswrapper[4897]: I0228 15:03:03.370618 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 15:03:03 crc kubenswrapper[4897]: I0228 15:03:03.370993 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 15:03:33 crc kubenswrapper[4897]: I0228 15:03:33.370891 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 15:03:33 crc kubenswrapper[4897]: I0228 15:03:33.371537 4897 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 15:03:33 crc kubenswrapper[4897]: I0228 15:03:33.371601 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brq22" Feb 28 15:03:33 crc kubenswrapper[4897]: I0228 15:03:33.372571 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c106c4981df1ab91355a675145a1485c568beae35eb691f1e94573b338233899"} pod="openshift-machine-config-operator/machine-config-daemon-brq22" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 15:03:33 crc kubenswrapper[4897]: I0228 15:03:33.372652 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" containerID="cri-o://c106c4981df1ab91355a675145a1485c568beae35eb691f1e94573b338233899" gracePeriod=600 Feb 28 15:03:33 crc kubenswrapper[4897]: E0228 15:03:33.502527 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 15:03:34 crc kubenswrapper[4897]: I0228 15:03:34.249553 4897 generic.go:334] "Generic (PLEG): container finished" podID="6c4091e4-3a55-4913-81f3-026a1a97c57c" 
containerID="c106c4981df1ab91355a675145a1485c568beae35eb691f1e94573b338233899" exitCode=0 Feb 28 15:03:34 crc kubenswrapper[4897]: I0228 15:03:34.249645 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerDied","Data":"c106c4981df1ab91355a675145a1485c568beae35eb691f1e94573b338233899"} Feb 28 15:03:34 crc kubenswrapper[4897]: I0228 15:03:34.249967 4897 scope.go:117] "RemoveContainer" containerID="56f81e527e46340803698674f23d31062429f04e874c4aa8357907f685c83acc" Feb 28 15:03:34 crc kubenswrapper[4897]: I0228 15:03:34.251008 4897 scope.go:117] "RemoveContainer" containerID="c106c4981df1ab91355a675145a1485c568beae35eb691f1e94573b338233899" Feb 28 15:03:34 crc kubenswrapper[4897]: E0228 15:03:34.251537 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 15:03:45 crc kubenswrapper[4897]: I0228 15:03:45.456526 4897 scope.go:117] "RemoveContainer" containerID="c106c4981df1ab91355a675145a1485c568beae35eb691f1e94573b338233899" Feb 28 15:03:45 crc kubenswrapper[4897]: E0228 15:03:45.457706 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 15:03:57 crc kubenswrapper[4897]: I0228 
15:03:57.456526 4897 scope.go:117] "RemoveContainer" containerID="c106c4981df1ab91355a675145a1485c568beae35eb691f1e94573b338233899" Feb 28 15:03:57 crc kubenswrapper[4897]: E0228 15:03:57.457654 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 15:04:00 crc kubenswrapper[4897]: I0228 15:04:00.189892 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538184-drddn"] Feb 28 15:04:00 crc kubenswrapper[4897]: E0228 15:04:00.190914 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97809262-e40a-4e71-968a-37207fa06ebd" containerName="extract-utilities" Feb 28 15:04:00 crc kubenswrapper[4897]: I0228 15:04:00.190936 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="97809262-e40a-4e71-968a-37207fa06ebd" containerName="extract-utilities" Feb 28 15:04:00 crc kubenswrapper[4897]: E0228 15:04:00.190969 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97809262-e40a-4e71-968a-37207fa06ebd" containerName="extract-content" Feb 28 15:04:00 crc kubenswrapper[4897]: I0228 15:04:00.190980 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="97809262-e40a-4e71-968a-37207fa06ebd" containerName="extract-content" Feb 28 15:04:00 crc kubenswrapper[4897]: E0228 15:04:00.191023 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97809262-e40a-4e71-968a-37207fa06ebd" containerName="registry-server" Feb 28 15:04:00 crc kubenswrapper[4897]: I0228 15:04:00.191034 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="97809262-e40a-4e71-968a-37207fa06ebd" containerName="registry-server" Feb 28 
15:04:00 crc kubenswrapper[4897]: I0228 15:04:00.191370 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="97809262-e40a-4e71-968a-37207fa06ebd" containerName="registry-server" Feb 28 15:04:00 crc kubenswrapper[4897]: I0228 15:04:00.192467 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538184-drddn" Feb 28 15:04:00 crc kubenswrapper[4897]: I0228 15:04:00.197154 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 15:04:00 crc kubenswrapper[4897]: I0228 15:04:00.197438 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 15:04:00 crc kubenswrapper[4897]: I0228 15:04:00.203242 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 15:04:00 crc kubenswrapper[4897]: I0228 15:04:00.205806 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538184-drddn"] Feb 28 15:04:00 crc kubenswrapper[4897]: I0228 15:04:00.321820 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p8bl\" (UniqueName: \"kubernetes.io/projected/1f5dec02-18c9-4d9c-8815-b97f620307ac-kube-api-access-8p8bl\") pod \"auto-csr-approver-29538184-drddn\" (UID: \"1f5dec02-18c9-4d9c-8815-b97f620307ac\") " pod="openshift-infra/auto-csr-approver-29538184-drddn" Feb 28 15:04:00 crc kubenswrapper[4897]: I0228 15:04:00.423487 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p8bl\" (UniqueName: \"kubernetes.io/projected/1f5dec02-18c9-4d9c-8815-b97f620307ac-kube-api-access-8p8bl\") pod \"auto-csr-approver-29538184-drddn\" (UID: \"1f5dec02-18c9-4d9c-8815-b97f620307ac\") " pod="openshift-infra/auto-csr-approver-29538184-drddn" Feb 28 15:04:00 crc kubenswrapper[4897]: I0228 
15:04:00.455601 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p8bl\" (UniqueName: \"kubernetes.io/projected/1f5dec02-18c9-4d9c-8815-b97f620307ac-kube-api-access-8p8bl\") pod \"auto-csr-approver-29538184-drddn\" (UID: \"1f5dec02-18c9-4d9c-8815-b97f620307ac\") " pod="openshift-infra/auto-csr-approver-29538184-drddn" Feb 28 15:04:00 crc kubenswrapper[4897]: I0228 15:04:00.525827 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538184-drddn" Feb 28 15:04:01 crc kubenswrapper[4897]: I0228 15:04:01.005248 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538184-drddn"] Feb 28 15:04:01 crc kubenswrapper[4897]: I0228 15:04:01.599626 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538184-drddn" event={"ID":"1f5dec02-18c9-4d9c-8815-b97f620307ac","Type":"ContainerStarted","Data":"abe7df8a5af33a589c705da04c26c1338328a368b7d42a1f4f18ea2a102cf0a9"} Feb 28 15:04:02 crc kubenswrapper[4897]: I0228 15:04:02.618659 4897 generic.go:334] "Generic (PLEG): container finished" podID="1f5dec02-18c9-4d9c-8815-b97f620307ac" containerID="4ae22418f23c48012f4ccd1a552a27201d802cd96319582d1bd7575cd4de6c6b" exitCode=0 Feb 28 15:04:02 crc kubenswrapper[4897]: I0228 15:04:02.618714 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538184-drddn" event={"ID":"1f5dec02-18c9-4d9c-8815-b97f620307ac","Type":"ContainerDied","Data":"4ae22418f23c48012f4ccd1a552a27201d802cd96319582d1bd7575cd4de6c6b"} Feb 28 15:04:04 crc kubenswrapper[4897]: I0228 15:04:04.123623 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538184-drddn" Feb 28 15:04:04 crc kubenswrapper[4897]: I0228 15:04:04.203168 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8p8bl\" (UniqueName: \"kubernetes.io/projected/1f5dec02-18c9-4d9c-8815-b97f620307ac-kube-api-access-8p8bl\") pod \"1f5dec02-18c9-4d9c-8815-b97f620307ac\" (UID: \"1f5dec02-18c9-4d9c-8815-b97f620307ac\") " Feb 28 15:04:04 crc kubenswrapper[4897]: I0228 15:04:04.210489 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f5dec02-18c9-4d9c-8815-b97f620307ac-kube-api-access-8p8bl" (OuterVolumeSpecName: "kube-api-access-8p8bl") pod "1f5dec02-18c9-4d9c-8815-b97f620307ac" (UID: "1f5dec02-18c9-4d9c-8815-b97f620307ac"). InnerVolumeSpecName "kube-api-access-8p8bl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 15:04:04 crc kubenswrapper[4897]: I0228 15:04:04.306191 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8p8bl\" (UniqueName: \"kubernetes.io/projected/1f5dec02-18c9-4d9c-8815-b97f620307ac-kube-api-access-8p8bl\") on node \"crc\" DevicePath \"\"" Feb 28 15:04:04 crc kubenswrapper[4897]: I0228 15:04:04.645442 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538184-drddn" event={"ID":"1f5dec02-18c9-4d9c-8815-b97f620307ac","Type":"ContainerDied","Data":"abe7df8a5af33a589c705da04c26c1338328a368b7d42a1f4f18ea2a102cf0a9"} Feb 28 15:04:04 crc kubenswrapper[4897]: I0228 15:04:04.645488 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abe7df8a5af33a589c705da04c26c1338328a368b7d42a1f4f18ea2a102cf0a9" Feb 28 15:04:04 crc kubenswrapper[4897]: I0228 15:04:04.645570 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538184-drddn" Feb 28 15:04:05 crc kubenswrapper[4897]: I0228 15:04:05.204734 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538178-zztsf"] Feb 28 15:04:05 crc kubenswrapper[4897]: I0228 15:04:05.212526 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538178-zztsf"] Feb 28 15:04:06 crc kubenswrapper[4897]: I0228 15:04:06.475217 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="300d4721-b640-4652-b2d2-9a3370f74c09" path="/var/lib/kubelet/pods/300d4721-b640-4652-b2d2-9a3370f74c09/volumes" Feb 28 15:04:10 crc kubenswrapper[4897]: I0228 15:04:10.457291 4897 scope.go:117] "RemoveContainer" containerID="c106c4981df1ab91355a675145a1485c568beae35eb691f1e94573b338233899" Feb 28 15:04:10 crc kubenswrapper[4897]: E0228 15:04:10.458136 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 15:04:21 crc kubenswrapper[4897]: I0228 15:04:21.456108 4897 scope.go:117] "RemoveContainer" containerID="c106c4981df1ab91355a675145a1485c568beae35eb691f1e94573b338233899" Feb 28 15:04:21 crc kubenswrapper[4897]: E0228 15:04:21.456826 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" 
podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 15:04:32 crc kubenswrapper[4897]: I0228 15:04:32.458408 4897 scope.go:117] "RemoveContainer" containerID="c106c4981df1ab91355a675145a1485c568beae35eb691f1e94573b338233899" Feb 28 15:04:32 crc kubenswrapper[4897]: E0228 15:04:32.460355 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 15:04:34 crc kubenswrapper[4897]: I0228 15:04:34.865469 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-sxspm/must-gather-gcj7h"] Feb 28 15:04:34 crc kubenswrapper[4897]: E0228 15:04:34.867194 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f5dec02-18c9-4d9c-8815-b97f620307ac" containerName="oc" Feb 28 15:04:34 crc kubenswrapper[4897]: I0228 15:04:34.867241 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f5dec02-18c9-4d9c-8815-b97f620307ac" containerName="oc" Feb 28 15:04:34 crc kubenswrapper[4897]: I0228 15:04:34.867722 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f5dec02-18c9-4d9c-8815-b97f620307ac" containerName="oc" Feb 28 15:04:34 crc kubenswrapper[4897]: I0228 15:04:34.920070 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sxspm/must-gather-gcj7h" Feb 28 15:04:34 crc kubenswrapper[4897]: I0228 15:04:34.922556 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-sxspm/must-gather-gcj7h"] Feb 28 15:04:34 crc kubenswrapper[4897]: I0228 15:04:34.924370 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-sxspm"/"default-dockercfg-xmdmt" Feb 28 15:04:34 crc kubenswrapper[4897]: I0228 15:04:34.926371 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-sxspm"/"openshift-service-ca.crt" Feb 28 15:04:34 crc kubenswrapper[4897]: I0228 15:04:34.926593 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-sxspm"/"kube-root-ca.crt" Feb 28 15:04:35 crc kubenswrapper[4897]: I0228 15:04:35.050170 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzl89\" (UniqueName: \"kubernetes.io/projected/029ca5c7-ae36-4e20-922c-c77b9b423ab9-kube-api-access-tzl89\") pod \"must-gather-gcj7h\" (UID: \"029ca5c7-ae36-4e20-922c-c77b9b423ab9\") " pod="openshift-must-gather-sxspm/must-gather-gcj7h" Feb 28 15:04:35 crc kubenswrapper[4897]: I0228 15:04:35.050373 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/029ca5c7-ae36-4e20-922c-c77b9b423ab9-must-gather-output\") pod \"must-gather-gcj7h\" (UID: \"029ca5c7-ae36-4e20-922c-c77b9b423ab9\") " pod="openshift-must-gather-sxspm/must-gather-gcj7h" Feb 28 15:04:35 crc kubenswrapper[4897]: I0228 15:04:35.152101 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzl89\" (UniqueName: \"kubernetes.io/projected/029ca5c7-ae36-4e20-922c-c77b9b423ab9-kube-api-access-tzl89\") pod \"must-gather-gcj7h\" (UID: \"029ca5c7-ae36-4e20-922c-c77b9b423ab9\") " 
pod="openshift-must-gather-sxspm/must-gather-gcj7h" Feb 28 15:04:35 crc kubenswrapper[4897]: I0228 15:04:35.152261 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/029ca5c7-ae36-4e20-922c-c77b9b423ab9-must-gather-output\") pod \"must-gather-gcj7h\" (UID: \"029ca5c7-ae36-4e20-922c-c77b9b423ab9\") " pod="openshift-must-gather-sxspm/must-gather-gcj7h" Feb 28 15:04:35 crc kubenswrapper[4897]: I0228 15:04:35.152829 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/029ca5c7-ae36-4e20-922c-c77b9b423ab9-must-gather-output\") pod \"must-gather-gcj7h\" (UID: \"029ca5c7-ae36-4e20-922c-c77b9b423ab9\") " pod="openshift-must-gather-sxspm/must-gather-gcj7h" Feb 28 15:04:35 crc kubenswrapper[4897]: I0228 15:04:35.184480 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzl89\" (UniqueName: \"kubernetes.io/projected/029ca5c7-ae36-4e20-922c-c77b9b423ab9-kube-api-access-tzl89\") pod \"must-gather-gcj7h\" (UID: \"029ca5c7-ae36-4e20-922c-c77b9b423ab9\") " pod="openshift-must-gather-sxspm/must-gather-gcj7h" Feb 28 15:04:35 crc kubenswrapper[4897]: I0228 15:04:35.254329 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sxspm/must-gather-gcj7h" Feb 28 15:04:35 crc kubenswrapper[4897]: I0228 15:04:35.757070 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-sxspm/must-gather-gcj7h"] Feb 28 15:04:36 crc kubenswrapper[4897]: I0228 15:04:36.008053 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sxspm/must-gather-gcj7h" event={"ID":"029ca5c7-ae36-4e20-922c-c77b9b423ab9","Type":"ContainerStarted","Data":"a386736bd59066bcd9cbb8e439b06d183d98dbf0f4478b2af9823527323bc087"} Feb 28 15:04:37 crc kubenswrapper[4897]: I0228 15:04:37.018059 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sxspm/must-gather-gcj7h" event={"ID":"029ca5c7-ae36-4e20-922c-c77b9b423ab9","Type":"ContainerStarted","Data":"9e3c85fe98291eb7503a81ae8fb532f7a44894e725611a54dfe6fbe01d970755"} Feb 28 15:04:37 crc kubenswrapper[4897]: I0228 15:04:37.018455 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sxspm/must-gather-gcj7h" event={"ID":"029ca5c7-ae36-4e20-922c-c77b9b423ab9","Type":"ContainerStarted","Data":"c5438a75b0fba51ed17c4fd48ca7625bc9000944b07fa5faef9b9e2d847bb3b5"} Feb 28 15:04:37 crc kubenswrapper[4897]: I0228 15:04:37.042753 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-sxspm/must-gather-gcj7h" podStartSLOduration=3.04272223 podStartE2EDuration="3.04272223s" podCreationTimestamp="2026-02-28 15:04:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 15:04:37.037298216 +0000 UTC m=+6491.279618903" watchObservedRunningTime="2026-02-28 15:04:37.04272223 +0000 UTC m=+6491.285042917" Feb 28 15:04:38 crc kubenswrapper[4897]: E0228 15:04:38.612698 4897 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.164:44732->38.102.83.164:37321: write tcp 
38.102.83.164:44732->38.102.83.164:37321: write: broken pipe Feb 28 15:04:40 crc kubenswrapper[4897]: I0228 15:04:40.343057 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-sxspm/crc-debug-592hk"] Feb 28 15:04:40 crc kubenswrapper[4897]: I0228 15:04:40.348516 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sxspm/crc-debug-592hk" Feb 28 15:04:40 crc kubenswrapper[4897]: I0228 15:04:40.467935 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d45776be-cde5-4cdf-bee8-96bf22d9d214-host\") pod \"crc-debug-592hk\" (UID: \"d45776be-cde5-4cdf-bee8-96bf22d9d214\") " pod="openshift-must-gather-sxspm/crc-debug-592hk" Feb 28 15:04:40 crc kubenswrapper[4897]: I0228 15:04:40.468010 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpgwx\" (UniqueName: \"kubernetes.io/projected/d45776be-cde5-4cdf-bee8-96bf22d9d214-kube-api-access-vpgwx\") pod \"crc-debug-592hk\" (UID: \"d45776be-cde5-4cdf-bee8-96bf22d9d214\") " pod="openshift-must-gather-sxspm/crc-debug-592hk" Feb 28 15:04:40 crc kubenswrapper[4897]: I0228 15:04:40.575101 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpgwx\" (UniqueName: \"kubernetes.io/projected/d45776be-cde5-4cdf-bee8-96bf22d9d214-kube-api-access-vpgwx\") pod \"crc-debug-592hk\" (UID: \"d45776be-cde5-4cdf-bee8-96bf22d9d214\") " pod="openshift-must-gather-sxspm/crc-debug-592hk" Feb 28 15:04:40 crc kubenswrapper[4897]: I0228 15:04:40.575768 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d45776be-cde5-4cdf-bee8-96bf22d9d214-host\") pod \"crc-debug-592hk\" (UID: \"d45776be-cde5-4cdf-bee8-96bf22d9d214\") " pod="openshift-must-gather-sxspm/crc-debug-592hk" Feb 28 15:04:40 crc 
kubenswrapper[4897]: I0228 15:04:40.577290 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d45776be-cde5-4cdf-bee8-96bf22d9d214-host\") pod \"crc-debug-592hk\" (UID: \"d45776be-cde5-4cdf-bee8-96bf22d9d214\") " pod="openshift-must-gather-sxspm/crc-debug-592hk" Feb 28 15:04:40 crc kubenswrapper[4897]: I0228 15:04:40.599876 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpgwx\" (UniqueName: \"kubernetes.io/projected/d45776be-cde5-4cdf-bee8-96bf22d9d214-kube-api-access-vpgwx\") pod \"crc-debug-592hk\" (UID: \"d45776be-cde5-4cdf-bee8-96bf22d9d214\") " pod="openshift-must-gather-sxspm/crc-debug-592hk" Feb 28 15:04:40 crc kubenswrapper[4897]: I0228 15:04:40.665811 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sxspm/crc-debug-592hk" Feb 28 15:04:40 crc kubenswrapper[4897]: W0228 15:04:40.714211 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd45776be_cde5_4cdf_bee8_96bf22d9d214.slice/crio-c9484322c6fd444201de8503ca88ccec8c7a0e8c0e91693439d062e55f0f1c28 WatchSource:0}: Error finding container c9484322c6fd444201de8503ca88ccec8c7a0e8c0e91693439d062e55f0f1c28: Status 404 returned error can't find the container with id c9484322c6fd444201de8503ca88ccec8c7a0e8c0e91693439d062e55f0f1c28 Feb 28 15:04:41 crc kubenswrapper[4897]: I0228 15:04:41.061876 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sxspm/crc-debug-592hk" event={"ID":"d45776be-cde5-4cdf-bee8-96bf22d9d214","Type":"ContainerStarted","Data":"14e38091523331975e98db570b82dbbde03f2288690bd0b9befb70f94ad621c5"} Feb 28 15:04:41 crc kubenswrapper[4897]: I0228 15:04:41.062331 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sxspm/crc-debug-592hk" 
event={"ID":"d45776be-cde5-4cdf-bee8-96bf22d9d214","Type":"ContainerStarted","Data":"c9484322c6fd444201de8503ca88ccec8c7a0e8c0e91693439d062e55f0f1c28"} Feb 28 15:04:41 crc kubenswrapper[4897]: I0228 15:04:41.082543 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-sxspm/crc-debug-592hk" podStartSLOduration=1.082525241 podStartE2EDuration="1.082525241s" podCreationTimestamp="2026-02-28 15:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 15:04:41.07792839 +0000 UTC m=+6495.320249047" watchObservedRunningTime="2026-02-28 15:04:41.082525241 +0000 UTC m=+6495.324845898" Feb 28 15:04:47 crc kubenswrapper[4897]: I0228 15:04:47.457579 4897 scope.go:117] "RemoveContainer" containerID="c106c4981df1ab91355a675145a1485c568beae35eb691f1e94573b338233899" Feb 28 15:04:47 crc kubenswrapper[4897]: E0228 15:04:47.459709 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 15:04:54 crc kubenswrapper[4897]: I0228 15:04:54.634370 4897 scope.go:117] "RemoveContainer" containerID="76c689ad5fcef6a7fc072bd442576584522db302420fa6d3a59aa7e51f4cf2cd" Feb 28 15:04:55 crc kubenswrapper[4897]: I0228 15:04:55.488267 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t52g6"] Feb 28 15:04:55 crc kubenswrapper[4897]: I0228 15:04:55.490590 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t52g6" Feb 28 15:04:55 crc kubenswrapper[4897]: I0228 15:04:55.510393 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t52g6"] Feb 28 15:04:55 crc kubenswrapper[4897]: I0228 15:04:55.594226 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/746ad6d2-4428-476d-97b2-8848aabc229d-utilities\") pod \"redhat-operators-t52g6\" (UID: \"746ad6d2-4428-476d-97b2-8848aabc229d\") " pod="openshift-marketplace/redhat-operators-t52g6" Feb 28 15:04:55 crc kubenswrapper[4897]: I0228 15:04:55.594772 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvcv8\" (UniqueName: \"kubernetes.io/projected/746ad6d2-4428-476d-97b2-8848aabc229d-kube-api-access-qvcv8\") pod \"redhat-operators-t52g6\" (UID: \"746ad6d2-4428-476d-97b2-8848aabc229d\") " pod="openshift-marketplace/redhat-operators-t52g6" Feb 28 15:04:55 crc kubenswrapper[4897]: I0228 15:04:55.614589 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/746ad6d2-4428-476d-97b2-8848aabc229d-catalog-content\") pod \"redhat-operators-t52g6\" (UID: \"746ad6d2-4428-476d-97b2-8848aabc229d\") " pod="openshift-marketplace/redhat-operators-t52g6" Feb 28 15:04:55 crc kubenswrapper[4897]: I0228 15:04:55.717953 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvcv8\" (UniqueName: \"kubernetes.io/projected/746ad6d2-4428-476d-97b2-8848aabc229d-kube-api-access-qvcv8\") pod \"redhat-operators-t52g6\" (UID: \"746ad6d2-4428-476d-97b2-8848aabc229d\") " pod="openshift-marketplace/redhat-operators-t52g6" Feb 28 15:04:55 crc kubenswrapper[4897]: I0228 15:04:55.719749 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/746ad6d2-4428-476d-97b2-8848aabc229d-catalog-content\") pod \"redhat-operators-t52g6\" (UID: \"746ad6d2-4428-476d-97b2-8848aabc229d\") " pod="openshift-marketplace/redhat-operators-t52g6" Feb 28 15:04:55 crc kubenswrapper[4897]: I0228 15:04:55.720069 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/746ad6d2-4428-476d-97b2-8848aabc229d-utilities\") pod \"redhat-operators-t52g6\" (UID: \"746ad6d2-4428-476d-97b2-8848aabc229d\") " pod="openshift-marketplace/redhat-operators-t52g6" Feb 28 15:04:55 crc kubenswrapper[4897]: I0228 15:04:55.720734 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/746ad6d2-4428-476d-97b2-8848aabc229d-utilities\") pod \"redhat-operators-t52g6\" (UID: \"746ad6d2-4428-476d-97b2-8848aabc229d\") " pod="openshift-marketplace/redhat-operators-t52g6" Feb 28 15:04:55 crc kubenswrapper[4897]: I0228 15:04:55.720795 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/746ad6d2-4428-476d-97b2-8848aabc229d-catalog-content\") pod \"redhat-operators-t52g6\" (UID: \"746ad6d2-4428-476d-97b2-8848aabc229d\") " pod="openshift-marketplace/redhat-operators-t52g6" Feb 28 15:04:55 crc kubenswrapper[4897]: I0228 15:04:55.737319 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvcv8\" (UniqueName: \"kubernetes.io/projected/746ad6d2-4428-476d-97b2-8848aabc229d-kube-api-access-qvcv8\") pod \"redhat-operators-t52g6\" (UID: \"746ad6d2-4428-476d-97b2-8848aabc229d\") " pod="openshift-marketplace/redhat-operators-t52g6" Feb 28 15:04:55 crc kubenswrapper[4897]: I0228 15:04:55.829820 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t52g6" Feb 28 15:04:56 crc kubenswrapper[4897]: I0228 15:04:56.416618 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t52g6"] Feb 28 15:04:57 crc kubenswrapper[4897]: I0228 15:04:57.212049 4897 generic.go:334] "Generic (PLEG): container finished" podID="746ad6d2-4428-476d-97b2-8848aabc229d" containerID="d9d2ccec6ae4687f00b4bd7271cbf6281a3dfee8ce492102319558136b248c84" exitCode=0 Feb 28 15:04:57 crc kubenswrapper[4897]: I0228 15:04:57.212160 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t52g6" event={"ID":"746ad6d2-4428-476d-97b2-8848aabc229d","Type":"ContainerDied","Data":"d9d2ccec6ae4687f00b4bd7271cbf6281a3dfee8ce492102319558136b248c84"} Feb 28 15:04:57 crc kubenswrapper[4897]: I0228 15:04:57.212486 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t52g6" event={"ID":"746ad6d2-4428-476d-97b2-8848aabc229d","Type":"ContainerStarted","Data":"a9cc68752f4634a9522b67c907514766403b2a9b9c6446b1aece7b278a7aa02a"} Feb 28 15:04:59 crc kubenswrapper[4897]: I0228 15:04:59.234785 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t52g6" event={"ID":"746ad6d2-4428-476d-97b2-8848aabc229d","Type":"ContainerStarted","Data":"e7c215ae4b5615d9ce77cc333eea145301c48d71a763410d11356b48492ffd4f"} Feb 28 15:05:01 crc kubenswrapper[4897]: I0228 15:05:01.456447 4897 scope.go:117] "RemoveContainer" containerID="c106c4981df1ab91355a675145a1485c568beae35eb691f1e94573b338233899" Feb 28 15:05:01 crc kubenswrapper[4897]: E0228 15:05:01.457103 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 15:05:04 crc kubenswrapper[4897]: I0228 15:05:04.299823 4897 generic.go:334] "Generic (PLEG): container finished" podID="746ad6d2-4428-476d-97b2-8848aabc229d" containerID="e7c215ae4b5615d9ce77cc333eea145301c48d71a763410d11356b48492ffd4f" exitCode=0 Feb 28 15:05:04 crc kubenswrapper[4897]: I0228 15:05:04.299874 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t52g6" event={"ID":"746ad6d2-4428-476d-97b2-8848aabc229d","Type":"ContainerDied","Data":"e7c215ae4b5615d9ce77cc333eea145301c48d71a763410d11356b48492ffd4f"} Feb 28 15:05:05 crc kubenswrapper[4897]: I0228 15:05:05.311948 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t52g6" event={"ID":"746ad6d2-4428-476d-97b2-8848aabc229d","Type":"ContainerStarted","Data":"808eb20cea46108ef9c13a73bd41355a0dfe103faadb39db207a82ef001a4246"} Feb 28 15:05:05 crc kubenswrapper[4897]: I0228 15:05:05.334014 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t52g6" podStartSLOduration=2.8134819589999998 podStartE2EDuration="10.333995528s" podCreationTimestamp="2026-02-28 15:04:55 +0000 UTC" firstStartedPulling="2026-02-28 15:04:57.216523472 +0000 UTC m=+6511.458844129" lastFinishedPulling="2026-02-28 15:05:04.737037011 +0000 UTC m=+6518.979357698" observedRunningTime="2026-02-28 15:05:05.3301867 +0000 UTC m=+6519.572507377" watchObservedRunningTime="2026-02-28 15:05:05.333995528 +0000 UTC m=+6519.576316185" Feb 28 15:05:05 crc kubenswrapper[4897]: I0228 15:05:05.830885 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t52g6" Feb 28 15:05:05 crc kubenswrapper[4897]: I0228 
15:05:05.831226 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t52g6" Feb 28 15:05:06 crc kubenswrapper[4897]: I0228 15:05:06.886438 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t52g6" podUID="746ad6d2-4428-476d-97b2-8848aabc229d" containerName="registry-server" probeResult="failure" output=< Feb 28 15:05:06 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 28 15:05:06 crc kubenswrapper[4897]: > Feb 28 15:05:12 crc kubenswrapper[4897]: I0228 15:05:12.458455 4897 scope.go:117] "RemoveContainer" containerID="c106c4981df1ab91355a675145a1485c568beae35eb691f1e94573b338233899" Feb 28 15:05:12 crc kubenswrapper[4897]: E0228 15:05:12.462457 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 15:05:15 crc kubenswrapper[4897]: I0228 15:05:15.897002 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t52g6" Feb 28 15:05:15 crc kubenswrapper[4897]: I0228 15:05:15.986778 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t52g6" Feb 28 15:05:16 crc kubenswrapper[4897]: I0228 15:05:16.144853 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t52g6"] Feb 28 15:05:17 crc kubenswrapper[4897]: I0228 15:05:17.449635 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t52g6" 
podUID="746ad6d2-4428-476d-97b2-8848aabc229d" containerName="registry-server" containerID="cri-o://808eb20cea46108ef9c13a73bd41355a0dfe103faadb39db207a82ef001a4246" gracePeriod=2 Feb 28 15:05:18 crc kubenswrapper[4897]: I0228 15:05:18.021700 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t52g6" Feb 28 15:05:18 crc kubenswrapper[4897]: I0228 15:05:18.198012 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/746ad6d2-4428-476d-97b2-8848aabc229d-catalog-content\") pod \"746ad6d2-4428-476d-97b2-8848aabc229d\" (UID: \"746ad6d2-4428-476d-97b2-8848aabc229d\") " Feb 28 15:05:18 crc kubenswrapper[4897]: I0228 15:05:18.198097 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/746ad6d2-4428-476d-97b2-8848aabc229d-utilities\") pod \"746ad6d2-4428-476d-97b2-8848aabc229d\" (UID: \"746ad6d2-4428-476d-97b2-8848aabc229d\") " Feb 28 15:05:18 crc kubenswrapper[4897]: I0228 15:05:18.198848 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/746ad6d2-4428-476d-97b2-8848aabc229d-utilities" (OuterVolumeSpecName: "utilities") pod "746ad6d2-4428-476d-97b2-8848aabc229d" (UID: "746ad6d2-4428-476d-97b2-8848aabc229d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 15:05:18 crc kubenswrapper[4897]: I0228 15:05:18.199000 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvcv8\" (UniqueName: \"kubernetes.io/projected/746ad6d2-4428-476d-97b2-8848aabc229d-kube-api-access-qvcv8\") pod \"746ad6d2-4428-476d-97b2-8848aabc229d\" (UID: \"746ad6d2-4428-476d-97b2-8848aabc229d\") " Feb 28 15:05:18 crc kubenswrapper[4897]: I0228 15:05:18.200103 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/746ad6d2-4428-476d-97b2-8848aabc229d-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 15:05:18 crc kubenswrapper[4897]: I0228 15:05:18.212620 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/746ad6d2-4428-476d-97b2-8848aabc229d-kube-api-access-qvcv8" (OuterVolumeSpecName: "kube-api-access-qvcv8") pod "746ad6d2-4428-476d-97b2-8848aabc229d" (UID: "746ad6d2-4428-476d-97b2-8848aabc229d"). InnerVolumeSpecName "kube-api-access-qvcv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 15:05:18 crc kubenswrapper[4897]: I0228 15:05:18.302077 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvcv8\" (UniqueName: \"kubernetes.io/projected/746ad6d2-4428-476d-97b2-8848aabc229d-kube-api-access-qvcv8\") on node \"crc\" DevicePath \"\"" Feb 28 15:05:18 crc kubenswrapper[4897]: I0228 15:05:18.339731 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/746ad6d2-4428-476d-97b2-8848aabc229d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "746ad6d2-4428-476d-97b2-8848aabc229d" (UID: "746ad6d2-4428-476d-97b2-8848aabc229d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 15:05:18 crc kubenswrapper[4897]: I0228 15:05:18.405866 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/746ad6d2-4428-476d-97b2-8848aabc229d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 15:05:18 crc kubenswrapper[4897]: I0228 15:05:18.462256 4897 generic.go:334] "Generic (PLEG): container finished" podID="746ad6d2-4428-476d-97b2-8848aabc229d" containerID="808eb20cea46108ef9c13a73bd41355a0dfe103faadb39db207a82ef001a4246" exitCode=0 Feb 28 15:05:18 crc kubenswrapper[4897]: I0228 15:05:18.462521 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t52g6" Feb 28 15:05:18 crc kubenswrapper[4897]: I0228 15:05:18.478023 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t52g6" event={"ID":"746ad6d2-4428-476d-97b2-8848aabc229d","Type":"ContainerDied","Data":"808eb20cea46108ef9c13a73bd41355a0dfe103faadb39db207a82ef001a4246"} Feb 28 15:05:18 crc kubenswrapper[4897]: I0228 15:05:18.478060 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t52g6" event={"ID":"746ad6d2-4428-476d-97b2-8848aabc229d","Type":"ContainerDied","Data":"a9cc68752f4634a9522b67c907514766403b2a9b9c6446b1aece7b278a7aa02a"} Feb 28 15:05:18 crc kubenswrapper[4897]: I0228 15:05:18.478078 4897 scope.go:117] "RemoveContainer" containerID="808eb20cea46108ef9c13a73bd41355a0dfe103faadb39db207a82ef001a4246" Feb 28 15:05:18 crc kubenswrapper[4897]: I0228 15:05:18.506232 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t52g6"] Feb 28 15:05:18 crc kubenswrapper[4897]: I0228 15:05:18.509144 4897 scope.go:117] "RemoveContainer" containerID="e7c215ae4b5615d9ce77cc333eea145301c48d71a763410d11356b48492ffd4f" Feb 28 15:05:18 crc kubenswrapper[4897]: I0228 
15:05:18.526955 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t52g6"] Feb 28 15:05:18 crc kubenswrapper[4897]: I0228 15:05:18.538745 4897 scope.go:117] "RemoveContainer" containerID="d9d2ccec6ae4687f00b4bd7271cbf6281a3dfee8ce492102319558136b248c84" Feb 28 15:05:18 crc kubenswrapper[4897]: I0228 15:05:18.592361 4897 scope.go:117] "RemoveContainer" containerID="808eb20cea46108ef9c13a73bd41355a0dfe103faadb39db207a82ef001a4246" Feb 28 15:05:18 crc kubenswrapper[4897]: E0228 15:05:18.592724 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"808eb20cea46108ef9c13a73bd41355a0dfe103faadb39db207a82ef001a4246\": container with ID starting with 808eb20cea46108ef9c13a73bd41355a0dfe103faadb39db207a82ef001a4246 not found: ID does not exist" containerID="808eb20cea46108ef9c13a73bd41355a0dfe103faadb39db207a82ef001a4246" Feb 28 15:05:18 crc kubenswrapper[4897]: I0228 15:05:18.592767 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"808eb20cea46108ef9c13a73bd41355a0dfe103faadb39db207a82ef001a4246"} err="failed to get container status \"808eb20cea46108ef9c13a73bd41355a0dfe103faadb39db207a82ef001a4246\": rpc error: code = NotFound desc = could not find container \"808eb20cea46108ef9c13a73bd41355a0dfe103faadb39db207a82ef001a4246\": container with ID starting with 808eb20cea46108ef9c13a73bd41355a0dfe103faadb39db207a82ef001a4246 not found: ID does not exist" Feb 28 15:05:18 crc kubenswrapper[4897]: I0228 15:05:18.592798 4897 scope.go:117] "RemoveContainer" containerID="e7c215ae4b5615d9ce77cc333eea145301c48d71a763410d11356b48492ffd4f" Feb 28 15:05:18 crc kubenswrapper[4897]: E0228 15:05:18.593572 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7c215ae4b5615d9ce77cc333eea145301c48d71a763410d11356b48492ffd4f\": container with ID 
starting with e7c215ae4b5615d9ce77cc333eea145301c48d71a763410d11356b48492ffd4f not found: ID does not exist" containerID="e7c215ae4b5615d9ce77cc333eea145301c48d71a763410d11356b48492ffd4f" Feb 28 15:05:18 crc kubenswrapper[4897]: I0228 15:05:18.593605 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7c215ae4b5615d9ce77cc333eea145301c48d71a763410d11356b48492ffd4f"} err="failed to get container status \"e7c215ae4b5615d9ce77cc333eea145301c48d71a763410d11356b48492ffd4f\": rpc error: code = NotFound desc = could not find container \"e7c215ae4b5615d9ce77cc333eea145301c48d71a763410d11356b48492ffd4f\": container with ID starting with e7c215ae4b5615d9ce77cc333eea145301c48d71a763410d11356b48492ffd4f not found: ID does not exist" Feb 28 15:05:18 crc kubenswrapper[4897]: I0228 15:05:18.593630 4897 scope.go:117] "RemoveContainer" containerID="d9d2ccec6ae4687f00b4bd7271cbf6281a3dfee8ce492102319558136b248c84" Feb 28 15:05:18 crc kubenswrapper[4897]: E0228 15:05:18.594108 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9d2ccec6ae4687f00b4bd7271cbf6281a3dfee8ce492102319558136b248c84\": container with ID starting with d9d2ccec6ae4687f00b4bd7271cbf6281a3dfee8ce492102319558136b248c84 not found: ID does not exist" containerID="d9d2ccec6ae4687f00b4bd7271cbf6281a3dfee8ce492102319558136b248c84" Feb 28 15:05:18 crc kubenswrapper[4897]: I0228 15:05:18.594134 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9d2ccec6ae4687f00b4bd7271cbf6281a3dfee8ce492102319558136b248c84"} err="failed to get container status \"d9d2ccec6ae4687f00b4bd7271cbf6281a3dfee8ce492102319558136b248c84\": rpc error: code = NotFound desc = could not find container \"d9d2ccec6ae4687f00b4bd7271cbf6281a3dfee8ce492102319558136b248c84\": container with ID starting with d9d2ccec6ae4687f00b4bd7271cbf6281a3dfee8ce492102319558136b248c84 not found: 
ID does not exist" Feb 28 15:05:20 crc kubenswrapper[4897]: I0228 15:05:20.480260 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="746ad6d2-4428-476d-97b2-8848aabc229d" path="/var/lib/kubelet/pods/746ad6d2-4428-476d-97b2-8848aabc229d/volumes" Feb 28 15:05:25 crc kubenswrapper[4897]: I0228 15:05:25.456494 4897 scope.go:117] "RemoveContainer" containerID="c106c4981df1ab91355a675145a1485c568beae35eb691f1e94573b338233899" Feb 28 15:05:25 crc kubenswrapper[4897]: E0228 15:05:25.457376 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 15:05:36 crc kubenswrapper[4897]: I0228 15:05:36.658475 4897 generic.go:334] "Generic (PLEG): container finished" podID="d45776be-cde5-4cdf-bee8-96bf22d9d214" containerID="14e38091523331975e98db570b82dbbde03f2288690bd0b9befb70f94ad621c5" exitCode=0 Feb 28 15:05:36 crc kubenswrapper[4897]: I0228 15:05:36.658527 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sxspm/crc-debug-592hk" event={"ID":"d45776be-cde5-4cdf-bee8-96bf22d9d214","Type":"ContainerDied","Data":"14e38091523331975e98db570b82dbbde03f2288690bd0b9befb70f94ad621c5"} Feb 28 15:05:37 crc kubenswrapper[4897]: I0228 15:05:37.809866 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sxspm/crc-debug-592hk" Feb 28 15:05:37 crc kubenswrapper[4897]: I0228 15:05:37.863411 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-sxspm/crc-debug-592hk"] Feb 28 15:05:37 crc kubenswrapper[4897]: I0228 15:05:37.874825 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-sxspm/crc-debug-592hk"] Feb 28 15:05:37 crc kubenswrapper[4897]: I0228 15:05:37.972021 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpgwx\" (UniqueName: \"kubernetes.io/projected/d45776be-cde5-4cdf-bee8-96bf22d9d214-kube-api-access-vpgwx\") pod \"d45776be-cde5-4cdf-bee8-96bf22d9d214\" (UID: \"d45776be-cde5-4cdf-bee8-96bf22d9d214\") " Feb 28 15:05:37 crc kubenswrapper[4897]: I0228 15:05:37.972153 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d45776be-cde5-4cdf-bee8-96bf22d9d214-host\") pod \"d45776be-cde5-4cdf-bee8-96bf22d9d214\" (UID: \"d45776be-cde5-4cdf-bee8-96bf22d9d214\") " Feb 28 15:05:37 crc kubenswrapper[4897]: I0228 15:05:37.972270 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d45776be-cde5-4cdf-bee8-96bf22d9d214-host" (OuterVolumeSpecName: "host") pod "d45776be-cde5-4cdf-bee8-96bf22d9d214" (UID: "d45776be-cde5-4cdf-bee8-96bf22d9d214"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 15:05:37 crc kubenswrapper[4897]: I0228 15:05:37.972934 4897 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d45776be-cde5-4cdf-bee8-96bf22d9d214-host\") on node \"crc\" DevicePath \"\"" Feb 28 15:05:37 crc kubenswrapper[4897]: I0228 15:05:37.981068 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45776be-cde5-4cdf-bee8-96bf22d9d214-kube-api-access-vpgwx" (OuterVolumeSpecName: "kube-api-access-vpgwx") pod "d45776be-cde5-4cdf-bee8-96bf22d9d214" (UID: "d45776be-cde5-4cdf-bee8-96bf22d9d214"). InnerVolumeSpecName "kube-api-access-vpgwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 15:05:38 crc kubenswrapper[4897]: I0228 15:05:38.074698 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpgwx\" (UniqueName: \"kubernetes.io/projected/d45776be-cde5-4cdf-bee8-96bf22d9d214-kube-api-access-vpgwx\") on node \"crc\" DevicePath \"\"" Feb 28 15:05:38 crc kubenswrapper[4897]: I0228 15:05:38.466658 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45776be-cde5-4cdf-bee8-96bf22d9d214" path="/var/lib/kubelet/pods/d45776be-cde5-4cdf-bee8-96bf22d9d214/volumes" Feb 28 15:05:38 crc kubenswrapper[4897]: I0228 15:05:38.682254 4897 scope.go:117] "RemoveContainer" containerID="14e38091523331975e98db570b82dbbde03f2288690bd0b9befb70f94ad621c5" Feb 28 15:05:38 crc kubenswrapper[4897]: I0228 15:05:38.682276 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sxspm/crc-debug-592hk" Feb 28 15:05:39 crc kubenswrapper[4897]: I0228 15:05:39.044016 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-sxspm/crc-debug-6865t"] Feb 28 15:05:39 crc kubenswrapper[4897]: E0228 15:05:39.044427 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="746ad6d2-4428-476d-97b2-8848aabc229d" containerName="extract-utilities" Feb 28 15:05:39 crc kubenswrapper[4897]: I0228 15:05:39.044441 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="746ad6d2-4428-476d-97b2-8848aabc229d" containerName="extract-utilities" Feb 28 15:05:39 crc kubenswrapper[4897]: E0228 15:05:39.044456 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="746ad6d2-4428-476d-97b2-8848aabc229d" containerName="registry-server" Feb 28 15:05:39 crc kubenswrapper[4897]: I0228 15:05:39.044462 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="746ad6d2-4428-476d-97b2-8848aabc229d" containerName="registry-server" Feb 28 15:05:39 crc kubenswrapper[4897]: E0228 15:05:39.044489 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="746ad6d2-4428-476d-97b2-8848aabc229d" containerName="extract-content" Feb 28 15:05:39 crc kubenswrapper[4897]: I0228 15:05:39.044495 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="746ad6d2-4428-476d-97b2-8848aabc229d" containerName="extract-content" Feb 28 15:05:39 crc kubenswrapper[4897]: E0228 15:05:39.044514 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d45776be-cde5-4cdf-bee8-96bf22d9d214" containerName="container-00" Feb 28 15:05:39 crc kubenswrapper[4897]: I0228 15:05:39.044520 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="d45776be-cde5-4cdf-bee8-96bf22d9d214" containerName="container-00" Feb 28 15:05:39 crc kubenswrapper[4897]: I0228 15:05:39.044710 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="d45776be-cde5-4cdf-bee8-96bf22d9d214" 
containerName="container-00" Feb 28 15:05:39 crc kubenswrapper[4897]: I0228 15:05:39.044725 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="746ad6d2-4428-476d-97b2-8848aabc229d" containerName="registry-server" Feb 28 15:05:39 crc kubenswrapper[4897]: I0228 15:05:39.045370 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sxspm/crc-debug-6865t" Feb 28 15:05:39 crc kubenswrapper[4897]: I0228 15:05:39.193789 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgvks\" (UniqueName: \"kubernetes.io/projected/cda27312-22dc-4b45-b4fe-31c4bdc5cf53-kube-api-access-fgvks\") pod \"crc-debug-6865t\" (UID: \"cda27312-22dc-4b45-b4fe-31c4bdc5cf53\") " pod="openshift-must-gather-sxspm/crc-debug-6865t" Feb 28 15:05:39 crc kubenswrapper[4897]: I0228 15:05:39.194296 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cda27312-22dc-4b45-b4fe-31c4bdc5cf53-host\") pod \"crc-debug-6865t\" (UID: \"cda27312-22dc-4b45-b4fe-31c4bdc5cf53\") " pod="openshift-must-gather-sxspm/crc-debug-6865t" Feb 28 15:05:39 crc kubenswrapper[4897]: I0228 15:05:39.297500 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgvks\" (UniqueName: \"kubernetes.io/projected/cda27312-22dc-4b45-b4fe-31c4bdc5cf53-kube-api-access-fgvks\") pod \"crc-debug-6865t\" (UID: \"cda27312-22dc-4b45-b4fe-31c4bdc5cf53\") " pod="openshift-must-gather-sxspm/crc-debug-6865t" Feb 28 15:05:39 crc kubenswrapper[4897]: I0228 15:05:39.297649 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cda27312-22dc-4b45-b4fe-31c4bdc5cf53-host\") pod \"crc-debug-6865t\" (UID: \"cda27312-22dc-4b45-b4fe-31c4bdc5cf53\") " pod="openshift-must-gather-sxspm/crc-debug-6865t" Feb 28 15:05:39 crc 
kubenswrapper[4897]: I0228 15:05:39.297872 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cda27312-22dc-4b45-b4fe-31c4bdc5cf53-host\") pod \"crc-debug-6865t\" (UID: \"cda27312-22dc-4b45-b4fe-31c4bdc5cf53\") " pod="openshift-must-gather-sxspm/crc-debug-6865t" Feb 28 15:05:39 crc kubenswrapper[4897]: I0228 15:05:39.324937 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgvks\" (UniqueName: \"kubernetes.io/projected/cda27312-22dc-4b45-b4fe-31c4bdc5cf53-kube-api-access-fgvks\") pod \"crc-debug-6865t\" (UID: \"cda27312-22dc-4b45-b4fe-31c4bdc5cf53\") " pod="openshift-must-gather-sxspm/crc-debug-6865t" Feb 28 15:05:39 crc kubenswrapper[4897]: I0228 15:05:39.361856 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sxspm/crc-debug-6865t" Feb 28 15:05:39 crc kubenswrapper[4897]: I0228 15:05:39.692846 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sxspm/crc-debug-6865t" event={"ID":"cda27312-22dc-4b45-b4fe-31c4bdc5cf53","Type":"ContainerStarted","Data":"d55ce1e7d094b74cf3ab45675d5f867dc554d6c32f563e9230d91bddfabd000c"} Feb 28 15:05:39 crc kubenswrapper[4897]: I0228 15:05:39.693361 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sxspm/crc-debug-6865t" event={"ID":"cda27312-22dc-4b45-b4fe-31c4bdc5cf53","Type":"ContainerStarted","Data":"d33b51a0e54d60fcce8b495d28a9e28cc184667a2d9b17ee8a0711c9cd42e282"} Feb 28 15:05:39 crc kubenswrapper[4897]: I0228 15:05:39.709424 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-sxspm/crc-debug-6865t" podStartSLOduration=0.7094038 podStartE2EDuration="709.4038ms" podCreationTimestamp="2026-02-28 15:05:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 
15:05:39.706493237 +0000 UTC m=+6553.948813904" watchObservedRunningTime="2026-02-28 15:05:39.7094038 +0000 UTC m=+6553.951724467" Feb 28 15:05:40 crc kubenswrapper[4897]: I0228 15:05:40.456010 4897 scope.go:117] "RemoveContainer" containerID="c106c4981df1ab91355a675145a1485c568beae35eb691f1e94573b338233899" Feb 28 15:05:40 crc kubenswrapper[4897]: E0228 15:05:40.456251 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 15:05:40 crc kubenswrapper[4897]: I0228 15:05:40.713398 4897 generic.go:334] "Generic (PLEG): container finished" podID="cda27312-22dc-4b45-b4fe-31c4bdc5cf53" containerID="d55ce1e7d094b74cf3ab45675d5f867dc554d6c32f563e9230d91bddfabd000c" exitCode=0 Feb 28 15:05:40 crc kubenswrapper[4897]: I0228 15:05:40.713447 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sxspm/crc-debug-6865t" event={"ID":"cda27312-22dc-4b45-b4fe-31c4bdc5cf53","Type":"ContainerDied","Data":"d55ce1e7d094b74cf3ab45675d5f867dc554d6c32f563e9230d91bddfabd000c"} Feb 28 15:05:41 crc kubenswrapper[4897]: I0228 15:05:41.814056 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sxspm/crc-debug-6865t" Feb 28 15:05:41 crc kubenswrapper[4897]: I0228 15:05:41.944128 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cda27312-22dc-4b45-b4fe-31c4bdc5cf53-host\") pod \"cda27312-22dc-4b45-b4fe-31c4bdc5cf53\" (UID: \"cda27312-22dc-4b45-b4fe-31c4bdc5cf53\") " Feb 28 15:05:41 crc kubenswrapper[4897]: I0228 15:05:41.944241 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgvks\" (UniqueName: \"kubernetes.io/projected/cda27312-22dc-4b45-b4fe-31c4bdc5cf53-kube-api-access-fgvks\") pod \"cda27312-22dc-4b45-b4fe-31c4bdc5cf53\" (UID: \"cda27312-22dc-4b45-b4fe-31c4bdc5cf53\") " Feb 28 15:05:41 crc kubenswrapper[4897]: I0228 15:05:41.944364 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cda27312-22dc-4b45-b4fe-31c4bdc5cf53-host" (OuterVolumeSpecName: "host") pod "cda27312-22dc-4b45-b4fe-31c4bdc5cf53" (UID: "cda27312-22dc-4b45-b4fe-31c4bdc5cf53"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 15:05:41 crc kubenswrapper[4897]: I0228 15:05:41.944771 4897 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cda27312-22dc-4b45-b4fe-31c4bdc5cf53-host\") on node \"crc\" DevicePath \"\"" Feb 28 15:05:41 crc kubenswrapper[4897]: I0228 15:05:41.967687 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cda27312-22dc-4b45-b4fe-31c4bdc5cf53-kube-api-access-fgvks" (OuterVolumeSpecName: "kube-api-access-fgvks") pod "cda27312-22dc-4b45-b4fe-31c4bdc5cf53" (UID: "cda27312-22dc-4b45-b4fe-31c4bdc5cf53"). InnerVolumeSpecName "kube-api-access-fgvks". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 15:05:42 crc kubenswrapper[4897]: I0228 15:05:42.045800 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgvks\" (UniqueName: \"kubernetes.io/projected/cda27312-22dc-4b45-b4fe-31c4bdc5cf53-kube-api-access-fgvks\") on node \"crc\" DevicePath \"\"" Feb 28 15:05:42 crc kubenswrapper[4897]: I0228 15:05:42.231850 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-sxspm/crc-debug-6865t"] Feb 28 15:05:42 crc kubenswrapper[4897]: I0228 15:05:42.240222 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-sxspm/crc-debug-6865t"] Feb 28 15:05:42 crc kubenswrapper[4897]: I0228 15:05:42.477350 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cda27312-22dc-4b45-b4fe-31c4bdc5cf53" path="/var/lib/kubelet/pods/cda27312-22dc-4b45-b4fe-31c4bdc5cf53/volumes" Feb 28 15:05:42 crc kubenswrapper[4897]: I0228 15:05:42.738279 4897 scope.go:117] "RemoveContainer" containerID="d55ce1e7d094b74cf3ab45675d5f867dc554d6c32f563e9230d91bddfabd000c" Feb 28 15:05:42 crc kubenswrapper[4897]: I0228 15:05:42.738594 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sxspm/crc-debug-6865t" Feb 28 15:05:43 crc kubenswrapper[4897]: I0228 15:05:43.411578 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-sxspm/crc-debug-rspjh"] Feb 28 15:05:43 crc kubenswrapper[4897]: E0228 15:05:43.412251 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cda27312-22dc-4b45-b4fe-31c4bdc5cf53" containerName="container-00" Feb 28 15:05:43 crc kubenswrapper[4897]: I0228 15:05:43.412266 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="cda27312-22dc-4b45-b4fe-31c4bdc5cf53" containerName="container-00" Feb 28 15:05:43 crc kubenswrapper[4897]: I0228 15:05:43.412515 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="cda27312-22dc-4b45-b4fe-31c4bdc5cf53" containerName="container-00" Feb 28 15:05:43 crc kubenswrapper[4897]: I0228 15:05:43.413179 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sxspm/crc-debug-rspjh" Feb 28 15:05:43 crc kubenswrapper[4897]: I0228 15:05:43.580919 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7665289a-5cbc-42f7-9631-7d59cf9c1bfb-host\") pod \"crc-debug-rspjh\" (UID: \"7665289a-5cbc-42f7-9631-7d59cf9c1bfb\") " pod="openshift-must-gather-sxspm/crc-debug-rspjh" Feb 28 15:05:43 crc kubenswrapper[4897]: I0228 15:05:43.581165 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq4c7\" (UniqueName: \"kubernetes.io/projected/7665289a-5cbc-42f7-9631-7d59cf9c1bfb-kube-api-access-qq4c7\") pod \"crc-debug-rspjh\" (UID: \"7665289a-5cbc-42f7-9631-7d59cf9c1bfb\") " pod="openshift-must-gather-sxspm/crc-debug-rspjh" Feb 28 15:05:43 crc kubenswrapper[4897]: I0228 15:05:43.683592 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq4c7\" (UniqueName: 
\"kubernetes.io/projected/7665289a-5cbc-42f7-9631-7d59cf9c1bfb-kube-api-access-qq4c7\") pod \"crc-debug-rspjh\" (UID: \"7665289a-5cbc-42f7-9631-7d59cf9c1bfb\") " pod="openshift-must-gather-sxspm/crc-debug-rspjh" Feb 28 15:05:43 crc kubenswrapper[4897]: I0228 15:05:43.683793 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7665289a-5cbc-42f7-9631-7d59cf9c1bfb-host\") pod \"crc-debug-rspjh\" (UID: \"7665289a-5cbc-42f7-9631-7d59cf9c1bfb\") " pod="openshift-must-gather-sxspm/crc-debug-rspjh" Feb 28 15:05:43 crc kubenswrapper[4897]: I0228 15:05:43.683950 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7665289a-5cbc-42f7-9631-7d59cf9c1bfb-host\") pod \"crc-debug-rspjh\" (UID: \"7665289a-5cbc-42f7-9631-7d59cf9c1bfb\") " pod="openshift-must-gather-sxspm/crc-debug-rspjh" Feb 28 15:05:43 crc kubenswrapper[4897]: I0228 15:05:43.704040 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq4c7\" (UniqueName: \"kubernetes.io/projected/7665289a-5cbc-42f7-9631-7d59cf9c1bfb-kube-api-access-qq4c7\") pod \"crc-debug-rspjh\" (UID: \"7665289a-5cbc-42f7-9631-7d59cf9c1bfb\") " pod="openshift-must-gather-sxspm/crc-debug-rspjh" Feb 28 15:05:43 crc kubenswrapper[4897]: I0228 15:05:43.739029 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sxspm/crc-debug-rspjh" Feb 28 15:05:43 crc kubenswrapper[4897]: W0228 15:05:43.774162 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7665289a_5cbc_42f7_9631_7d59cf9c1bfb.slice/crio-62f9db5684ae456d66b5e7ad49bbab2b7ee3ac059f45ab7f6ad1e4b92163bf9c WatchSource:0}: Error finding container 62f9db5684ae456d66b5e7ad49bbab2b7ee3ac059f45ab7f6ad1e4b92163bf9c: Status 404 returned error can't find the container with id 62f9db5684ae456d66b5e7ad49bbab2b7ee3ac059f45ab7f6ad1e4b92163bf9c Feb 28 15:05:44 crc kubenswrapper[4897]: I0228 15:05:44.765689 4897 generic.go:334] "Generic (PLEG): container finished" podID="7665289a-5cbc-42f7-9631-7d59cf9c1bfb" containerID="b225ab124e01b7159386c37a8e3eccffb25d0fdf335f1310cfd4d106a8bacf95" exitCode=0 Feb 28 15:05:44 crc kubenswrapper[4897]: I0228 15:05:44.765782 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sxspm/crc-debug-rspjh" event={"ID":"7665289a-5cbc-42f7-9631-7d59cf9c1bfb","Type":"ContainerDied","Data":"b225ab124e01b7159386c37a8e3eccffb25d0fdf335f1310cfd4d106a8bacf95"} Feb 28 15:05:44 crc kubenswrapper[4897]: I0228 15:05:44.766257 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sxspm/crc-debug-rspjh" event={"ID":"7665289a-5cbc-42f7-9631-7d59cf9c1bfb","Type":"ContainerStarted","Data":"62f9db5684ae456d66b5e7ad49bbab2b7ee3ac059f45ab7f6ad1e4b92163bf9c"} Feb 28 15:05:44 crc kubenswrapper[4897]: I0228 15:05:44.828467 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-sxspm/crc-debug-rspjh"] Feb 28 15:05:44 crc kubenswrapper[4897]: I0228 15:05:44.839760 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-sxspm/crc-debug-rspjh"] Feb 28 15:05:45 crc kubenswrapper[4897]: I0228 15:05:45.909999 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sxspm/crc-debug-rspjh" Feb 28 15:05:46 crc kubenswrapper[4897]: I0228 15:05:46.032643 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7665289a-5cbc-42f7-9631-7d59cf9c1bfb-host\") pod \"7665289a-5cbc-42f7-9631-7d59cf9c1bfb\" (UID: \"7665289a-5cbc-42f7-9631-7d59cf9c1bfb\") " Feb 28 15:05:46 crc kubenswrapper[4897]: I0228 15:05:46.032789 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qq4c7\" (UniqueName: \"kubernetes.io/projected/7665289a-5cbc-42f7-9631-7d59cf9c1bfb-kube-api-access-qq4c7\") pod \"7665289a-5cbc-42f7-9631-7d59cf9c1bfb\" (UID: \"7665289a-5cbc-42f7-9631-7d59cf9c1bfb\") " Feb 28 15:05:46 crc kubenswrapper[4897]: I0228 15:05:46.033082 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7665289a-5cbc-42f7-9631-7d59cf9c1bfb-host" (OuterVolumeSpecName: "host") pod "7665289a-5cbc-42f7-9631-7d59cf9c1bfb" (UID: "7665289a-5cbc-42f7-9631-7d59cf9c1bfb"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 15:05:46 crc kubenswrapper[4897]: I0228 15:05:46.033897 4897 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7665289a-5cbc-42f7-9631-7d59cf9c1bfb-host\") on node \"crc\" DevicePath \"\"" Feb 28 15:05:46 crc kubenswrapper[4897]: I0228 15:05:46.038145 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7665289a-5cbc-42f7-9631-7d59cf9c1bfb-kube-api-access-qq4c7" (OuterVolumeSpecName: "kube-api-access-qq4c7") pod "7665289a-5cbc-42f7-9631-7d59cf9c1bfb" (UID: "7665289a-5cbc-42f7-9631-7d59cf9c1bfb"). InnerVolumeSpecName "kube-api-access-qq4c7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 15:05:46 crc kubenswrapper[4897]: I0228 15:05:46.136141 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qq4c7\" (UniqueName: \"kubernetes.io/projected/7665289a-5cbc-42f7-9631-7d59cf9c1bfb-kube-api-access-qq4c7\") on node \"crc\" DevicePath \"\"" Feb 28 15:05:46 crc kubenswrapper[4897]: I0228 15:05:46.470928 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7665289a-5cbc-42f7-9631-7d59cf9c1bfb" path="/var/lib/kubelet/pods/7665289a-5cbc-42f7-9631-7d59cf9c1bfb/volumes" Feb 28 15:05:46 crc kubenswrapper[4897]: I0228 15:05:46.807521 4897 scope.go:117] "RemoveContainer" containerID="b225ab124e01b7159386c37a8e3eccffb25d0fdf335f1310cfd4d106a8bacf95" Feb 28 15:05:46 crc kubenswrapper[4897]: I0228 15:05:46.807685 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sxspm/crc-debug-rspjh" Feb 28 15:05:55 crc kubenswrapper[4897]: I0228 15:05:55.456524 4897 scope.go:117] "RemoveContainer" containerID="c106c4981df1ab91355a675145a1485c568beae35eb691f1e94573b338233899" Feb 28 15:05:55 crc kubenswrapper[4897]: E0228 15:05:55.457414 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 15:06:00 crc kubenswrapper[4897]: I0228 15:06:00.136331 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538186-hjpq4"] Feb 28 15:06:00 crc kubenswrapper[4897]: E0228 15:06:00.137111 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7665289a-5cbc-42f7-9631-7d59cf9c1bfb" 
containerName="container-00" Feb 28 15:06:00 crc kubenswrapper[4897]: I0228 15:06:00.137123 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="7665289a-5cbc-42f7-9631-7d59cf9c1bfb" containerName="container-00" Feb 28 15:06:00 crc kubenswrapper[4897]: I0228 15:06:00.137319 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="7665289a-5cbc-42f7-9631-7d59cf9c1bfb" containerName="container-00" Feb 28 15:06:00 crc kubenswrapper[4897]: I0228 15:06:00.137969 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538186-hjpq4" Feb 28 15:06:00 crc kubenswrapper[4897]: I0228 15:06:00.143245 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 15:06:00 crc kubenswrapper[4897]: I0228 15:06:00.144377 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 15:06:00 crc kubenswrapper[4897]: I0228 15:06:00.148549 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 15:06:00 crc kubenswrapper[4897]: I0228 15:06:00.157430 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538186-hjpq4"] Feb 28 15:06:00 crc kubenswrapper[4897]: I0228 15:06:00.280488 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkqs5\" (UniqueName: \"kubernetes.io/projected/e46b8174-fe65-446e-965a-6786bbefd8ba-kube-api-access-mkqs5\") pod \"auto-csr-approver-29538186-hjpq4\" (UID: \"e46b8174-fe65-446e-965a-6786bbefd8ba\") " pod="openshift-infra/auto-csr-approver-29538186-hjpq4" Feb 28 15:06:00 crc kubenswrapper[4897]: I0228 15:06:00.382448 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkqs5\" (UniqueName: 
\"kubernetes.io/projected/e46b8174-fe65-446e-965a-6786bbefd8ba-kube-api-access-mkqs5\") pod \"auto-csr-approver-29538186-hjpq4\" (UID: \"e46b8174-fe65-446e-965a-6786bbefd8ba\") " pod="openshift-infra/auto-csr-approver-29538186-hjpq4" Feb 28 15:06:00 crc kubenswrapper[4897]: I0228 15:06:00.400669 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkqs5\" (UniqueName: \"kubernetes.io/projected/e46b8174-fe65-446e-965a-6786bbefd8ba-kube-api-access-mkqs5\") pod \"auto-csr-approver-29538186-hjpq4\" (UID: \"e46b8174-fe65-446e-965a-6786bbefd8ba\") " pod="openshift-infra/auto-csr-approver-29538186-hjpq4" Feb 28 15:06:00 crc kubenswrapper[4897]: I0228 15:06:00.458599 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538186-hjpq4" Feb 28 15:06:00 crc kubenswrapper[4897]: I0228 15:06:00.961507 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538186-hjpq4"] Feb 28 15:06:00 crc kubenswrapper[4897]: I0228 15:06:00.969158 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538186-hjpq4" event={"ID":"e46b8174-fe65-446e-965a-6786bbefd8ba","Type":"ContainerStarted","Data":"fb5c5dbd8588afde7bc0bb8ef22034127327ece3f3c70bd5e9a70dda30271e64"} Feb 28 15:06:02 crc kubenswrapper[4897]: I0228 15:06:02.986876 4897 generic.go:334] "Generic (PLEG): container finished" podID="e46b8174-fe65-446e-965a-6786bbefd8ba" containerID="ed2d3e3fa1853287ebf283ce316efc4f4895270bf54e63700ec9cb7a51e8f3bd" exitCode=0 Feb 28 15:06:02 crc kubenswrapper[4897]: I0228 15:06:02.986981 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538186-hjpq4" event={"ID":"e46b8174-fe65-446e-965a-6786bbefd8ba","Type":"ContainerDied","Data":"ed2d3e3fa1853287ebf283ce316efc4f4895270bf54e63700ec9cb7a51e8f3bd"} Feb 28 15:06:04 crc kubenswrapper[4897]: I0228 15:06:04.381820 4897 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538186-hjpq4" Feb 28 15:06:04 crc kubenswrapper[4897]: I0228 15:06:04.460091 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkqs5\" (UniqueName: \"kubernetes.io/projected/e46b8174-fe65-446e-965a-6786bbefd8ba-kube-api-access-mkqs5\") pod \"e46b8174-fe65-446e-965a-6786bbefd8ba\" (UID: \"e46b8174-fe65-446e-965a-6786bbefd8ba\") " Feb 28 15:06:04 crc kubenswrapper[4897]: I0228 15:06:04.476676 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e46b8174-fe65-446e-965a-6786bbefd8ba-kube-api-access-mkqs5" (OuterVolumeSpecName: "kube-api-access-mkqs5") pod "e46b8174-fe65-446e-965a-6786bbefd8ba" (UID: "e46b8174-fe65-446e-965a-6786bbefd8ba"). InnerVolumeSpecName "kube-api-access-mkqs5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 15:06:04 crc kubenswrapper[4897]: I0228 15:06:04.562937 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkqs5\" (UniqueName: \"kubernetes.io/projected/e46b8174-fe65-446e-965a-6786bbefd8ba-kube-api-access-mkqs5\") on node \"crc\" DevicePath \"\"" Feb 28 15:06:05 crc kubenswrapper[4897]: I0228 15:06:05.017152 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538186-hjpq4" event={"ID":"e46b8174-fe65-446e-965a-6786bbefd8ba","Type":"ContainerDied","Data":"fb5c5dbd8588afde7bc0bb8ef22034127327ece3f3c70bd5e9a70dda30271e64"} Feb 28 15:06:05 crc kubenswrapper[4897]: I0228 15:06:05.017207 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb5c5dbd8588afde7bc0bb8ef22034127327ece3f3c70bd5e9a70dda30271e64" Feb 28 15:06:05 crc kubenswrapper[4897]: I0228 15:06:05.017276 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538186-hjpq4" Feb 28 15:06:05 crc kubenswrapper[4897]: I0228 15:06:05.467282 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538180-vspx4"] Feb 28 15:06:05 crc kubenswrapper[4897]: I0228 15:06:05.487669 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538180-vspx4"] Feb 28 15:06:06 crc kubenswrapper[4897]: I0228 15:06:06.474501 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfef2bf2-3a6c-4119-8fd2-159efa5e45d1" path="/var/lib/kubelet/pods/dfef2bf2-3a6c-4119-8fd2-159efa5e45d1/volumes" Feb 28 15:06:07 crc kubenswrapper[4897]: I0228 15:06:07.457395 4897 scope.go:117] "RemoveContainer" containerID="c106c4981df1ab91355a675145a1485c568beae35eb691f1e94573b338233899" Feb 28 15:06:07 crc kubenswrapper[4897]: E0228 15:06:07.457957 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 15:06:18 crc kubenswrapper[4897]: I0228 15:06:18.003961 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6cc5d7cb8-nws5v_d2375f60-8d95-4855-ace5-ecbfadb87114/barbican-api/0.log" Feb 28 15:06:18 crc kubenswrapper[4897]: I0228 15:06:18.187410 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6cc5d7cb8-nws5v_d2375f60-8d95-4855-ace5-ecbfadb87114/barbican-api-log/0.log" Feb 28 15:06:18 crc kubenswrapper[4897]: I0228 15:06:18.300174 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-7566789bf4-gcgqv_ae3d152c-8c19-456d-82a4-184138ae3541/barbican-keystone-listener/0.log" Feb 28 15:06:18 crc kubenswrapper[4897]: I0228 15:06:18.377092 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-755f78ff99-pb5jr_8315bc28-3362-4d67-9561-f2b8fa3e69b7/barbican-worker/0.log" Feb 28 15:06:18 crc kubenswrapper[4897]: I0228 15:06:18.384764 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7566789bf4-gcgqv_ae3d152c-8c19-456d-82a4-184138ae3541/barbican-keystone-listener-log/0.log" Feb 28 15:06:18 crc kubenswrapper[4897]: I0228 15:06:18.526863 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-755f78ff99-pb5jr_8315bc28-3362-4d67-9561-f2b8fa3e69b7/barbican-worker-log/0.log" Feb 28 15:06:18 crc kubenswrapper[4897]: I0228 15:06:18.565591 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-62dsb_efd25e11-574a-4504-94fc-509e4f367939/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 15:06:18 crc kubenswrapper[4897]: I0228 15:06:18.781850 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_49ad0c65-4304-477c-8cfa-c344fcf2ab9b/proxy-httpd/0.log" Feb 28 15:06:18 crc kubenswrapper[4897]: I0228 15:06:18.820166 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_49ad0c65-4304-477c-8cfa-c344fcf2ab9b/sg-core/0.log" Feb 28 15:06:18 crc kubenswrapper[4897]: I0228 15:06:18.829016 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_49ad0c65-4304-477c-8cfa-c344fcf2ab9b/ceilometer-central-agent/0.log" Feb 28 15:06:18 crc kubenswrapper[4897]: I0228 15:06:18.835774 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_49ad0c65-4304-477c-8cfa-c344fcf2ab9b/ceilometer-notification-agent/0.log" Feb 
28 15:06:19 crc kubenswrapper[4897]: I0228 15:06:19.023941 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_500bdde3-9ae3-4829-8cee-5e85a7c218a9/cinder-api-log/0.log" Feb 28 15:06:19 crc kubenswrapper[4897]: I0228 15:06:19.365648 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_500bdde3-9ae3-4829-8cee-5e85a7c218a9/cinder-api/0.log" Feb 28 15:06:19 crc kubenswrapper[4897]: I0228 15:06:19.570116 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_35d2e345-c465-43d1-a9e2-0592960bc377/probe/0.log" Feb 28 15:06:19 crc kubenswrapper[4897]: I0228 15:06:19.616130 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_b0bef6c5-aed5-464c-8518-9be02ba3cb86/cinder-scheduler/0.log" Feb 28 15:06:19 crc kubenswrapper[4897]: I0228 15:06:19.695894 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_35d2e345-c465-43d1-a9e2-0592960bc377/cinder-backup/0.log" Feb 28 15:06:19 crc kubenswrapper[4897]: I0228 15:06:19.790448 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_b0bef6c5-aed5-464c-8518-9be02ba3cb86/probe/0.log" Feb 28 15:06:19 crc kubenswrapper[4897]: I0228 15:06:19.943868 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_622d265c-1cb2-47ac-b31e-5d226545d4de/probe/0.log" Feb 28 15:06:19 crc kubenswrapper[4897]: I0228 15:06:19.949599 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_622d265c-1cb2-47ac-b31e-5d226545d4de/cinder-volume/0.log" Feb 28 15:06:20 crc kubenswrapper[4897]: I0228 15:06:20.180371 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_6236a51d-66cb-4285-bc2b-767cf39c989a/probe/0.log" Feb 28 15:06:20 crc kubenswrapper[4897]: I0228 15:06:20.265842 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-volume-nfs-2-0_6236a51d-66cb-4285-bc2b-767cf39c989a/cinder-volume/0.log" Feb 28 15:06:20 crc kubenswrapper[4897]: I0228 15:06:20.321273 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-9245w_474a32f3-7317-40c6-80cb-6e36415a2d5d/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 15:06:20 crc kubenswrapper[4897]: I0228 15:06:20.400175 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-xss7j_f3a5c5ba-fd5c-468e-b881-4f8cbc47ff21/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 15:06:20 crc kubenswrapper[4897]: I0228 15:06:20.456801 4897 scope.go:117] "RemoveContainer" containerID="c106c4981df1ab91355a675145a1485c568beae35eb691f1e94573b338233899" Feb 28 15:06:20 crc kubenswrapper[4897]: E0228 15:06:20.457198 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 15:06:20 crc kubenswrapper[4897]: I0228 15:06:20.533614 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-557fbb6cc7-qchzg_9045e426-bdc0-4327-8c53-1f3e64d1e3a2/init/0.log" Feb 28 15:06:20 crc kubenswrapper[4897]: I0228 15:06:20.726358 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-557fbb6cc7-qchzg_9045e426-bdc0-4327-8c53-1f3e64d1e3a2/init/0.log" Feb 28 15:06:20 crc kubenswrapper[4897]: I0228 15:06:20.753073 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-x2blf_9b80329c-9e50-4a7f-9e98-e1dc25c4d6fa/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 15:06:20 crc kubenswrapper[4897]: I0228 15:06:20.892965 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-557fbb6cc7-qchzg_9045e426-bdc0-4327-8c53-1f3e64d1e3a2/dnsmasq-dns/0.log" Feb 28 15:06:20 crc kubenswrapper[4897]: I0228 15:06:20.957172 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_891bad69-3c9e-4c8a-b5fb-526b4ce79ec5/glance-httpd/0.log" Feb 28 15:06:20 crc kubenswrapper[4897]: I0228 15:06:20.987119 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_891bad69-3c9e-4c8a-b5fb-526b4ce79ec5/glance-log/0.log" Feb 28 15:06:21 crc kubenswrapper[4897]: I0228 15:06:21.152832 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_5c9c2403-d54a-4278-b29c-e0533e360579/glance-log/0.log" Feb 28 15:06:21 crc kubenswrapper[4897]: I0228 15:06:21.163469 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_5c9c2403-d54a-4278-b29c-e0533e360579/glance-httpd/0.log" Feb 28 15:06:21 crc kubenswrapper[4897]: I0228 15:06:21.418881 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7df779db98-ljwk8_e0db6a4f-19e4-488c-bc45-9619565bdf57/horizon/0.log" Feb 28 15:06:21 crc kubenswrapper[4897]: I0228 15:06:21.515118 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-9d2hz_fdc8cc43-763f-4d3e-8630-a811a93a4157/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 15:06:21 crc kubenswrapper[4897]: I0228 15:06:21.752689 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-d4ng9_8651da53-e976-4395-964b-a5c077d64a26/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 15:06:21 crc kubenswrapper[4897]: I0228 15:06:21.761215 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29538121-psnhm_24ea6562-040d-4eb4-865b-692acf8b2a46/keystone-cron/0.log" Feb 28 15:06:21 crc kubenswrapper[4897]: I0228 15:06:21.966067 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29538181-mbx9p_212f33db-61b0-45a1-ac8e-a925bf9eced2/keystone-cron/0.log" Feb 28 15:06:21 crc kubenswrapper[4897]: I0228 15:06:21.985035 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7df779db98-ljwk8_e0db6a4f-19e4-488c-bc45-9619565bdf57/horizon-log/0.log" Feb 28 15:06:22 crc kubenswrapper[4897]: I0228 15:06:22.188839 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_d9c5123c-5d3c-47f6-b0d5-20e731e7ebaf/kube-state-metrics/0.log" Feb 28 15:06:22 crc kubenswrapper[4897]: I0228 15:06:22.411974 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-497mc_ff698979-3e20-4b13-9cae-2b0d353cae40/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 15:06:22 crc kubenswrapper[4897]: I0228 15:06:22.626390 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-d5c8f94c5-9sc2w_b7c377e3-d32d-49da-801c-155853ae1d70/keystone-api/0.log" Feb 28 15:06:22 crc kubenswrapper[4897]: I0228 15:06:22.875163 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-59b7cd74f9-xphhh_cfe88e43-2315-4773-85fa-459dab7fb23d/neutron-api/0.log" Feb 28 15:06:22 crc kubenswrapper[4897]: I0228 15:06:22.883034 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-k49nm_e41a407d-96e5-4c5d-8890-fe4cb2f59a0f/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 15:06:22 crc kubenswrapper[4897]: I0228 15:06:22.897794 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-59b7cd74f9-xphhh_cfe88e43-2315-4773-85fa-459dab7fb23d/neutron-httpd/0.log" Feb 28 15:06:23 crc kubenswrapper[4897]: I0228 15:06:23.140665 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_48885530-3df1-42cf-9c7f-2f86a21026a9/setup-container/0.log" Feb 28 15:06:23 crc kubenswrapper[4897]: I0228 15:06:23.290288 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_48885530-3df1-42cf-9c7f-2f86a21026a9/rabbitmq/0.log" Feb 28 15:06:23 crc kubenswrapper[4897]: I0228 15:06:23.301555 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_48885530-3df1-42cf-9c7f-2f86a21026a9/setup-container/0.log" Feb 28 15:06:23 crc kubenswrapper[4897]: I0228 15:06:23.891757 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_bf3d7f16-bcfc-4fa4-92d4-9b03f42375de/nova-cell0-conductor-conductor/0.log" Feb 28 15:06:24 crc kubenswrapper[4897]: I0228 15:06:24.140821 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_6f3fc432-044c-4be6-b1b3-049e2d2842d5/nova-cell1-conductor-conductor/0.log" Feb 28 15:06:24 crc kubenswrapper[4897]: I0228 15:06:24.598789 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_b200a830-20fd-475c-bf9f-7c17ae963355/nova-cell1-novncproxy-novncproxy/0.log" Feb 28 15:06:24 crc kubenswrapper[4897]: I0228 15:06:24.615425 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-ls724_1fc98763-e64a-41e1-a4ff-0c72faa961fe/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 15:06:24 crc kubenswrapper[4897]: I0228 15:06:24.636722 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_fede0b0b-b487-4e63-9622-4863d3575d89/nova-api-log/0.log" Feb 28 15:06:24 crc kubenswrapper[4897]: I0228 15:06:24.938148 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_151132bb-bcf9-4d40-a72b-5f6b80c23fb1/nova-metadata-log/0.log" Feb 28 15:06:25 crc kubenswrapper[4897]: I0228 15:06:25.297473 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_fede0b0b-b487-4e63-9622-4863d3575d89/nova-api-api/0.log" Feb 28 15:06:25 crc kubenswrapper[4897]: I0228 15:06:25.492448 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d7f297ea-652d-47ae-9831-fad10c6127ad/mysql-bootstrap/0.log" Feb 28 15:06:25 crc kubenswrapper[4897]: I0228 15:06:25.492550 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_f5c10f14-b08c-4267-8436-22d028c4db66/nova-scheduler-scheduler/0.log" Feb 28 15:06:25 crc kubenswrapper[4897]: I0228 15:06:25.701613 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d7f297ea-652d-47ae-9831-fad10c6127ad/mysql-bootstrap/0.log" Feb 28 15:06:25 crc kubenswrapper[4897]: I0228 15:06:25.750174 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d7f297ea-652d-47ae-9831-fad10c6127ad/galera/0.log" Feb 28 15:06:25 crc kubenswrapper[4897]: I0228 15:06:25.901618 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_db99e06f-c263-4aef-b5c2-330eaed29fd4/mysql-bootstrap/0.log" Feb 28 15:06:26 crc kubenswrapper[4897]: I0228 15:06:26.065354 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_db99e06f-c263-4aef-b5c2-330eaed29fd4/galera/0.log" Feb 28 15:06:26 crc kubenswrapper[4897]: I0228 15:06:26.080107 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_db99e06f-c263-4aef-b5c2-330eaed29fd4/mysql-bootstrap/0.log" Feb 28 15:06:26 crc kubenswrapper[4897]: I0228 15:06:26.357464 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_768007b3-82d1-4b63-b96f-4d8797b46acc/openstackclient/0.log" Feb 28 15:06:26 crc kubenswrapper[4897]: I0228 15:06:26.401972 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-jsdwb_cd2fa5a5-caab-4d3d-8324-f6107d50f59f/ovn-controller/0.log" Feb 28 15:06:26 crc kubenswrapper[4897]: I0228 15:06:26.579851 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-598bf_5ab588f4-9fad-44d6-a7e2-2e99b19ef285/openstack-network-exporter/0.log" Feb 28 15:06:26 crc kubenswrapper[4897]: I0228 15:06:26.763352 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ch9bl_995bc563-52dc-4755-b43f-96a2746d8bce/ovsdb-server-init/0.log" Feb 28 15:06:26 crc kubenswrapper[4897]: I0228 15:06:26.900165 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ch9bl_995bc563-52dc-4755-b43f-96a2746d8bce/ovsdb-server-init/0.log" Feb 28 15:06:26 crc kubenswrapper[4897]: I0228 15:06:26.949463 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ch9bl_995bc563-52dc-4755-b43f-96a2746d8bce/ovsdb-server/0.log" Feb 28 15:06:27 crc kubenswrapper[4897]: I0228 15:06:27.175037 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-vskr8_ccec52af-4ae3-42de-bead-6b28a6e8c739/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 15:06:27 crc kubenswrapper[4897]: I0228 15:06:27.350047 4897 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ch9bl_995bc563-52dc-4755-b43f-96a2746d8bce/ovs-vswitchd/0.log" Feb 28 15:06:27 crc kubenswrapper[4897]: I0228 15:06:27.369913 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f3afe36e-988c-4fca-8ca8-c24353046ea7/openstack-network-exporter/0.log" Feb 28 15:06:27 crc kubenswrapper[4897]: I0228 15:06:27.463127 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_151132bb-bcf9-4d40-a72b-5f6b80c23fb1/nova-metadata-metadata/0.log" Feb 28 15:06:27 crc kubenswrapper[4897]: I0228 15:06:27.561198 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f3afe36e-988c-4fca-8ca8-c24353046ea7/ovn-northd/0.log" Feb 28 15:06:27 crc kubenswrapper[4897]: I0228 15:06:27.608283 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_03ffdd06-e63d-4a43-96f0-92e2d0e3a89d/openstack-network-exporter/0.log" Feb 28 15:06:27 crc kubenswrapper[4897]: I0228 15:06:27.674953 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_03ffdd06-e63d-4a43-96f0-92e2d0e3a89d/ovsdbserver-nb/0.log" Feb 28 15:06:27 crc kubenswrapper[4897]: I0228 15:06:27.806362 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_48d78132-b30d-4c29-8137-7af1597f8cc6/openstack-network-exporter/0.log" Feb 28 15:06:27 crc kubenswrapper[4897]: I0228 15:06:27.818503 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_48d78132-b30d-4c29-8137-7af1597f8cc6/ovsdbserver-sb/0.log" Feb 28 15:06:28 crc kubenswrapper[4897]: I0228 15:06:28.138759 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6b56bf6f-f92e-4b96-a449-597cee08338d/init-config-reloader/0.log" Feb 28 15:06:28 crc kubenswrapper[4897]: I0228 15:06:28.279096 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_placement-778b749bdb-bmqwf_a2f1a9fc-a42b-488a-a7a6-207157fd1205/placement-api/0.log" Feb 28 15:06:28 crc kubenswrapper[4897]: I0228 15:06:28.318673 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6b56bf6f-f92e-4b96-a449-597cee08338d/init-config-reloader/0.log" Feb 28 15:06:28 crc kubenswrapper[4897]: I0228 15:06:28.337009 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-778b749bdb-bmqwf_a2f1a9fc-a42b-488a-a7a6-207157fd1205/placement-log/0.log" Feb 28 15:06:28 crc kubenswrapper[4897]: I0228 15:06:28.369582 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6b56bf6f-f92e-4b96-a449-597cee08338d/config-reloader/0.log" Feb 28 15:06:28 crc kubenswrapper[4897]: I0228 15:06:28.493761 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6b56bf6f-f92e-4b96-a449-597cee08338d/prometheus/0.log" Feb 28 15:06:28 crc kubenswrapper[4897]: I0228 15:06:28.509337 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6b56bf6f-f92e-4b96-a449-597cee08338d/thanos-sidecar/0.log" Feb 28 15:06:28 crc kubenswrapper[4897]: I0228 15:06:28.534866 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_59883b9c-0fbf-4d9e-84ee-f9456a6f13aa/setup-container/0.log" Feb 28 15:06:28 crc kubenswrapper[4897]: I0228 15:06:28.771592 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_59883b9c-0fbf-4d9e-84ee-f9456a6f13aa/rabbitmq/0.log" Feb 28 15:06:28 crc kubenswrapper[4897]: I0228 15:06:28.814542 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_59883b9c-0fbf-4d9e-84ee-f9456a6f13aa/setup-container/0.log" Feb 28 15:06:28 crc kubenswrapper[4897]: I0228 15:06:28.825642 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-0_0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd/setup-container/0.log" Feb 28 15:06:29 crc kubenswrapper[4897]: I0228 15:06:29.037224 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd/rabbitmq/0.log" Feb 28 15:06:29 crc kubenswrapper[4897]: I0228 15:06:29.037774 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_0c0bcff4-e976-48c4-9ad8-5e79ebe1d2bd/setup-container/0.log" Feb 28 15:06:29 crc kubenswrapper[4897]: I0228 15:06:29.105434 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-vxmr9_0b6d041b-3a22-45fa-bd9e-33dea9dc98aa/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 15:06:29 crc kubenswrapper[4897]: I0228 15:06:29.225259 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-t9clw_368bd0f8-b828-44ed-a605-3aabab81c9c1/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 15:06:29 crc kubenswrapper[4897]: I0228 15:06:29.412813 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-lr95h_3ec9b581-f18e-4ae6-b520-c19ecfc75ab3/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 15:06:29 crc kubenswrapper[4897]: I0228 15:06:29.527761 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-ccmbs_bfdbf8bc-0180-406e-884b-cfd88b6ae1a3/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 15:06:29 crc kubenswrapper[4897]: I0228 15:06:29.642651 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-vrbrt_0a198568-b27e-4e65-bc3f-6b70f3184b6b/ssh-known-hosts-edpm-deployment/0.log" Feb 28 15:06:29 crc kubenswrapper[4897]: I0228 15:06:29.792570 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-proxy-7765f74f9-bjr4m_2ea92bb0-3068-4ffe-b85c-ce041cc1911e/proxy-server/0.log" Feb 28 15:06:29 crc kubenswrapper[4897]: I0228 15:06:29.986540 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-gpcgs_41910cc3-f0b4-4e6d-9c2e-562794444c84/swift-ring-rebalance/0.log" Feb 28 15:06:30 crc kubenswrapper[4897]: I0228 15:06:30.037892 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7765f74f9-bjr4m_2ea92bb0-3068-4ffe-b85c-ce041cc1911e/proxy-httpd/0.log" Feb 28 15:06:30 crc kubenswrapper[4897]: I0228 15:06:30.072701 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e07793a7-3e98-4a8d-bfb6-3c630f07d391/account-auditor/0.log" Feb 28 15:06:30 crc kubenswrapper[4897]: I0228 15:06:30.192152 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e07793a7-3e98-4a8d-bfb6-3c630f07d391/account-reaper/0.log" Feb 28 15:06:30 crc kubenswrapper[4897]: I0228 15:06:30.273568 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e07793a7-3e98-4a8d-bfb6-3c630f07d391/account-replicator/0.log" Feb 28 15:06:30 crc kubenswrapper[4897]: I0228 15:06:30.312863 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e07793a7-3e98-4a8d-bfb6-3c630f07d391/container-auditor/0.log" Feb 28 15:06:30 crc kubenswrapper[4897]: I0228 15:06:30.315137 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e07793a7-3e98-4a8d-bfb6-3c630f07d391/account-server/0.log" Feb 28 15:06:30 crc kubenswrapper[4897]: I0228 15:06:30.388138 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e07793a7-3e98-4a8d-bfb6-3c630f07d391/container-replicator/0.log" Feb 28 15:06:30 crc kubenswrapper[4897]: I0228 15:06:30.478149 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_e07793a7-3e98-4a8d-bfb6-3c630f07d391/container-updater/0.log" Feb 28 15:06:30 crc kubenswrapper[4897]: I0228 15:06:30.499273 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e07793a7-3e98-4a8d-bfb6-3c630f07d391/container-server/0.log" Feb 28 15:06:30 crc kubenswrapper[4897]: I0228 15:06:30.518746 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e07793a7-3e98-4a8d-bfb6-3c630f07d391/object-auditor/0.log" Feb 28 15:06:30 crc kubenswrapper[4897]: I0228 15:06:30.624229 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e07793a7-3e98-4a8d-bfb6-3c630f07d391/object-expirer/0.log" Feb 28 15:06:30 crc kubenswrapper[4897]: I0228 15:06:30.669124 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e07793a7-3e98-4a8d-bfb6-3c630f07d391/object-replicator/0.log" Feb 28 15:06:30 crc kubenswrapper[4897]: I0228 15:06:30.697602 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e07793a7-3e98-4a8d-bfb6-3c630f07d391/object-server/0.log" Feb 28 15:06:30 crc kubenswrapper[4897]: I0228 15:06:30.725959 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e07793a7-3e98-4a8d-bfb6-3c630f07d391/object-updater/0.log" Feb 28 15:06:30 crc kubenswrapper[4897]: I0228 15:06:30.826910 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e07793a7-3e98-4a8d-bfb6-3c630f07d391/rsync/0.log" Feb 28 15:06:30 crc kubenswrapper[4897]: I0228 15:06:30.841011 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e07793a7-3e98-4a8d-bfb6-3c630f07d391/swift-recon-cron/0.log" Feb 28 15:06:30 crc kubenswrapper[4897]: I0228 15:06:30.998620 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-b5zbb_8356fe56-9405-43be-8d6e-3d71c9906864/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 15:06:31 crc kubenswrapper[4897]: I0228 15:06:31.064815 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_49f3154b-02e1-4da4-a498-58e7280a8a64/tempest-tests-tempest-tests-runner/0.log" Feb 28 15:06:31 crc kubenswrapper[4897]: I0228 15:06:31.211840 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_9b32d426-3313-4f78-9baa-92b8717b8d8e/test-operator-logs-container/0.log" Feb 28 15:06:31 crc kubenswrapper[4897]: I0228 15:06:31.330695 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-9j6ds_81fd26ee-0f11-49a1-863c-86aefccd7f6d/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 15:06:32 crc kubenswrapper[4897]: I0228 15:06:32.488036 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_9f2ebd5f-fa7e-4ca3-9bd9-4b54c05f8060/watcher-applier/0.log" Feb 28 15:06:32 crc kubenswrapper[4897]: I0228 15:06:32.874449 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_f7a66d06-fda4-4801-8a7e-24acf64224ac/watcher-api-log/0.log" Feb 28 15:06:34 crc kubenswrapper[4897]: I0228 15:06:34.457156 4897 scope.go:117] "RemoveContainer" containerID="c106c4981df1ab91355a675145a1485c568beae35eb691f1e94573b338233899" Feb 28 15:06:34 crc kubenswrapper[4897]: E0228 15:06:34.457949 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 15:06:36 crc kubenswrapper[4897]: I0228 15:06:36.037394 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_f31b98f7-e894-4ba1-99d0-c9f4dfe066a9/watcher-decision-engine/0.log" Feb 28 15:06:37 crc kubenswrapper[4897]: I0228 15:06:37.099713 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_f7a66d06-fda4-4801-8a7e-24acf64224ac/watcher-api/0.log" Feb 28 15:06:38 crc kubenswrapper[4897]: I0228 15:06:38.430340 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_f032f5e9-4992-4586-bd47-0c3da76ecf40/memcached/0.log" Feb 28 15:06:46 crc kubenswrapper[4897]: I0228 15:06:46.463596 4897 scope.go:117] "RemoveContainer" containerID="c106c4981df1ab91355a675145a1485c568beae35eb691f1e94573b338233899" Feb 28 15:06:46 crc kubenswrapper[4897]: E0228 15:06:46.464388 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 15:06:54 crc kubenswrapper[4897]: I0228 15:06:54.806082 4897 scope.go:117] "RemoveContainer" containerID="59909f934cebc2d05f1aa1faa656a9f3796d48e62cd28f6f1f3953068f3f8f65" Feb 28 15:07:00 crc kubenswrapper[4897]: I0228 15:07:00.272471 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8_18614093-3dcd-426c-8821-d04f854a475c/util/0.log" Feb 28 15:07:00 crc kubenswrapper[4897]: I0228 15:07:00.546746 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8_18614093-3dcd-426c-8821-d04f854a475c/util/0.log" Feb 28 15:07:00 crc kubenswrapper[4897]: I0228 15:07:00.575450 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8_18614093-3dcd-426c-8821-d04f854a475c/pull/0.log" Feb 28 15:07:00 crc kubenswrapper[4897]: I0228 15:07:00.648022 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8_18614093-3dcd-426c-8821-d04f854a475c/pull/0.log" Feb 28 15:07:00 crc kubenswrapper[4897]: I0228 15:07:00.843196 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8_18614093-3dcd-426c-8821-d04f854a475c/pull/0.log" Feb 28 15:07:00 crc kubenswrapper[4897]: I0228 15:07:00.880634 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8_18614093-3dcd-426c-8821-d04f854a475c/extract/0.log" Feb 28 15:07:00 crc kubenswrapper[4897]: I0228 15:07:00.907872 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c54673912e4d03f04ee1c9df25895fcc11c87218fcae2b49712dc18b0dtt7h8_18614093-3dcd-426c-8821-d04f854a475c/util/0.log" Feb 28 15:07:01 crc kubenswrapper[4897]: I0228 15:07:01.456866 4897 scope.go:117] "RemoveContainer" containerID="c106c4981df1ab91355a675145a1485c568beae35eb691f1e94573b338233899" Feb 28 15:07:01 crc kubenswrapper[4897]: E0228 15:07:01.457111 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 15:07:01 crc kubenswrapper[4897]: I0228 15:07:01.548026 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-5d87c9d997-hgfm4_a78107ef-804f-476a-98f4-195f52927c3d/manager/0.log" Feb 28 15:07:01 crc kubenswrapper[4897]: I0228 15:07:01.849098 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-64db6967f8-4tvzl_5863afa6-053e-4d6c-899e-c31dcc30dcf3/manager/0.log" Feb 28 15:07:02 crc kubenswrapper[4897]: I0228 15:07:02.072899 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-cf99c678f-pjmt7_e7498ffc-cb24-44e8-b0cb-4ada46db9e4c/manager/0.log" Feb 28 15:07:02 crc kubenswrapper[4897]: I0228 15:07:02.376357 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-78bc7f9bd9-qjg9q_cf8aae65-a739-4ab3-8208-ae8ac4ed0671/manager/0.log" Feb 28 15:07:02 crc kubenswrapper[4897]: I0228 15:07:02.799278 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-545456dc4-cfsb9_507c84e1-3826-47ad-93f4-c2d6d726f8b7/manager/0.log" Feb 28 15:07:03 crc kubenswrapper[4897]: I0228 15:07:03.284525 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-f7fcc58b9-bb7d9_3bfb71f8-fd2c-4730-af54-601ec4daebaf/manager/0.log" Feb 28 15:07:03 crc kubenswrapper[4897]: I0228 15:07:03.310209 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7c789f89c6-fm9lk_30b14df1-8f3e-427c-b6d9-eb8aeb192213/manager/0.log" Feb 28 15:07:03 crc 
kubenswrapper[4897]: I0228 15:07:03.513132 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-67d996989d-h65l6_c90ec355-3eb2-43e5-9a39-eed72bb46d1b/manager/0.log" Feb 28 15:07:03 crc kubenswrapper[4897]: I0228 15:07:03.744390 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-7b6bfb6475-6xfvp_30810ec7-8325-4bde-aa9d-ff905addb474/manager/0.log" Feb 28 15:07:03 crc kubenswrapper[4897]: I0228 15:07:03.783918 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-55d77d7b5c-d8psr_5ef2847d-3e11-419b-b34c-3f4cb5643af9/manager/0.log" Feb 28 15:07:03 crc kubenswrapper[4897]: I0228 15:07:03.984870 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-54688575f-7lr7s_3664d59e-945d-4eb5-9443-296e206a1081/manager/0.log" Feb 28 15:07:04 crc kubenswrapper[4897]: I0228 15:07:04.224958 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-74b6b5dc96-wqgwr_a9935d62-a205-4294-a124-313a8437c1ab/manager/0.log" Feb 28 15:07:04 crc kubenswrapper[4897]: I0228 15:07:04.233378 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5d86c7ddb7-wrf59_b237a99b-2fe2-4804-880b-03494df684d2/manager/0.log" Feb 28 15:07:04 crc kubenswrapper[4897]: I0228 15:07:04.489755 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9c5bszd_fe6be473-8403-4c9d-abf6-a7a0251326f9/manager/0.log" Feb 28 15:07:04 crc kubenswrapper[4897]: I0228 15:07:04.634656 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-controller-init-58b8f68975-4gtm4_f3e65b5d-7974-4323-92f1-50f5dbc0fe11/operator/0.log" Feb 28 15:07:04 crc kubenswrapper[4897]: I0228 15:07:04.719798 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-9tgxh_e5918346-7c71-4d39-985f-c8893e107670/registry-server/0.log" Feb 28 15:07:04 crc kubenswrapper[4897]: I0228 15:07:04.964470 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-75684d597f-p6pbb_3d635198-c21d-4d2e-9393-ad9b6cdf462f/manager/0.log" Feb 28 15:07:05 crc kubenswrapper[4897]: I0228 15:07:05.071909 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-648564c9fc-hkkvm_c434bb35-55df-45b5-9eeb-ab9913f3fd5e/manager/0.log" Feb 28 15:07:05 crc kubenswrapper[4897]: I0228 15:07:05.276756 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-w6zhg_216a4a66-0783-4b6c-9884-370bd3a001a4/operator/0.log" Feb 28 15:07:05 crc kubenswrapper[4897]: I0228 15:07:05.397245 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9b9ff9f4d-dnpdj_8c8044b8-c803-4b5b-916f-34c0c03ab619/manager/0.log" Feb 28 15:07:05 crc kubenswrapper[4897]: I0228 15:07:05.656621 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-55b5ff4dbb-v7qbm_37408ab3-7514-42a0-92e8-6c2a2710b9f0/manager/0.log" Feb 28 15:07:05 crc kubenswrapper[4897]: I0228 15:07:05.766013 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5fdb694969-6r8pc_f5e3f361-0ca8-4a8f-8625-8ea90c292ac2/manager/0.log" Feb 28 15:07:06 crc kubenswrapper[4897]: I0228 15:07:06.031522 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-69dbd6f547-4ng5q_c25839b5-c34e-4865-a5ad-4e10355f1953/manager/0.log" Feb 28 15:07:06 crc kubenswrapper[4897]: I0228 15:07:06.538409 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6d8778d855-4x57f_6532860c-c344-4a74-9189-4382f4865b58/manager/0.log" Feb 28 15:07:10 crc kubenswrapper[4897]: I0228 15:07:10.790007 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6db6876945-96lzs_1d330dac-b70b-4af0-bfa0-1fba21022fb1/manager/0.log" Feb 28 15:07:16 crc kubenswrapper[4897]: I0228 15:07:16.471751 4897 scope.go:117] "RemoveContainer" containerID="c106c4981df1ab91355a675145a1485c568beae35eb691f1e94573b338233899" Feb 28 15:07:16 crc kubenswrapper[4897]: E0228 15:07:16.472643 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 15:07:27 crc kubenswrapper[4897]: I0228 15:07:27.157015 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-glzrp_49308413-0bd0-4aef-8d1b-451b077e6996/control-plane-machine-set-operator/0.log" Feb 28 15:07:27 crc kubenswrapper[4897]: I0228 15:07:27.310745 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-zkvs9_df2319dd-b85c-4542-bf25-8233ecda9d78/kube-rbac-proxy/0.log" Feb 28 15:07:27 crc kubenswrapper[4897]: I0228 15:07:27.363874 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-zkvs9_df2319dd-b85c-4542-bf25-8233ecda9d78/machine-api-operator/0.log" Feb 28 15:07:29 crc kubenswrapper[4897]: I0228 15:07:29.457282 4897 scope.go:117] "RemoveContainer" containerID="c106c4981df1ab91355a675145a1485c568beae35eb691f1e94573b338233899" Feb 28 15:07:29 crc kubenswrapper[4897]: E0228 15:07:29.458282 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 15:07:41 crc kubenswrapper[4897]: I0228 15:07:41.676660 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-ld6gk_575a7b09-2bc9-458a-bdbc-169241a67869/cert-manager-controller/0.log" Feb 28 15:07:41 crc kubenswrapper[4897]: I0228 15:07:41.872939 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-f4grq_b868e69f-c259-4f0e-9f12-7b0be2e26d03/cert-manager-cainjector/0.log" Feb 28 15:07:41 crc kubenswrapper[4897]: I0228 15:07:41.929145 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-5vvcp_80a798fa-b6e2-4063-95a5-56c55dec24b0/cert-manager-webhook/0.log" Feb 28 15:07:42 crc kubenswrapper[4897]: I0228 15:07:42.456964 4897 scope.go:117] "RemoveContainer" containerID="c106c4981df1ab91355a675145a1485c568beae35eb691f1e94573b338233899" Feb 28 15:07:42 crc kubenswrapper[4897]: E0228 15:07:42.457784 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 15:07:55 crc kubenswrapper[4897]: I0228 15:07:55.457153 4897 scope.go:117] "RemoveContainer" containerID="c106c4981df1ab91355a675145a1485c568beae35eb691f1e94573b338233899" Feb 28 15:07:55 crc kubenswrapper[4897]: E0228 15:07:55.458188 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 15:07:57 crc kubenswrapper[4897]: I0228 15:07:57.179352 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5dcbbd79cf-dmxhv_0b30e3b3-0280-45c0-ad26-00ab9dff49ce/nmstate-console-plugin/0.log" Feb 28 15:07:57 crc kubenswrapper[4897]: I0228 15:07:57.426118 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-64kn2_a16c5c73-6515-4d5b-898e-aa6d3940f0b1/nmstate-metrics/0.log" Feb 28 15:07:57 crc kubenswrapper[4897]: I0228 15:07:57.434337 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-w8lgm_b1e7c059-1db9-417a-8bd9-b5157303f3af/nmstate-handler/0.log" Feb 28 15:07:57 crc kubenswrapper[4897]: I0228 15:07:57.480412 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-64kn2_a16c5c73-6515-4d5b-898e-aa6d3940f0b1/kube-rbac-proxy/0.log" Feb 28 15:07:57 crc kubenswrapper[4897]: I0228 15:07:57.587010 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-operator-75c5dccd6c-468lb_ce9efcef-4478-4127-a41e-9e9960084a46/nmstate-operator/0.log" Feb 28 15:07:57 crc kubenswrapper[4897]: I0228 15:07:57.712957 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-786f45cff4-qtkdc_5ae61471-c126-4bb0-b7c5-1b56f1686ecc/nmstate-webhook/0.log" Feb 28 15:08:00 crc kubenswrapper[4897]: I0228 15:08:00.161368 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538188-xxktt"] Feb 28 15:08:00 crc kubenswrapper[4897]: E0228 15:08:00.162406 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e46b8174-fe65-446e-965a-6786bbefd8ba" containerName="oc" Feb 28 15:08:00 crc kubenswrapper[4897]: I0228 15:08:00.162434 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="e46b8174-fe65-446e-965a-6786bbefd8ba" containerName="oc" Feb 28 15:08:00 crc kubenswrapper[4897]: I0228 15:08:00.162823 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="e46b8174-fe65-446e-965a-6786bbefd8ba" containerName="oc" Feb 28 15:08:00 crc kubenswrapper[4897]: I0228 15:08:00.163913 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538188-xxktt" Feb 28 15:08:00 crc kubenswrapper[4897]: I0228 15:08:00.166506 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 15:08:00 crc kubenswrapper[4897]: I0228 15:08:00.166615 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 15:08:00 crc kubenswrapper[4897]: I0228 15:08:00.169020 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 15:08:00 crc kubenswrapper[4897]: I0228 15:08:00.174348 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538188-xxktt"] Feb 28 15:08:00 crc kubenswrapper[4897]: I0228 15:08:00.318239 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nhbt\" (UniqueName: \"kubernetes.io/projected/b18cea1a-5cd9-4e95-b89a-6345e4b812f2-kube-api-access-6nhbt\") pod \"auto-csr-approver-29538188-xxktt\" (UID: \"b18cea1a-5cd9-4e95-b89a-6345e4b812f2\") " pod="openshift-infra/auto-csr-approver-29538188-xxktt" Feb 28 15:08:00 crc kubenswrapper[4897]: I0228 15:08:00.420593 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nhbt\" (UniqueName: \"kubernetes.io/projected/b18cea1a-5cd9-4e95-b89a-6345e4b812f2-kube-api-access-6nhbt\") pod \"auto-csr-approver-29538188-xxktt\" (UID: \"b18cea1a-5cd9-4e95-b89a-6345e4b812f2\") " pod="openshift-infra/auto-csr-approver-29538188-xxktt" Feb 28 15:08:00 crc kubenswrapper[4897]: I0228 15:08:00.440473 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nhbt\" (UniqueName: \"kubernetes.io/projected/b18cea1a-5cd9-4e95-b89a-6345e4b812f2-kube-api-access-6nhbt\") pod \"auto-csr-approver-29538188-xxktt\" (UID: \"b18cea1a-5cd9-4e95-b89a-6345e4b812f2\") " 
pod="openshift-infra/auto-csr-approver-29538188-xxktt" Feb 28 15:08:00 crc kubenswrapper[4897]: I0228 15:08:00.495038 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538188-xxktt" Feb 28 15:08:00 crc kubenswrapper[4897]: I0228 15:08:00.979942 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538188-xxktt"] Feb 28 15:08:00 crc kubenswrapper[4897]: W0228 15:08:00.988492 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb18cea1a_5cd9_4e95_b89a_6345e4b812f2.slice/crio-09edcc57a12da1a95e98d1b14e363713f55a5f1b0e030343ed65fd91bf6fe727 WatchSource:0}: Error finding container 09edcc57a12da1a95e98d1b14e363713f55a5f1b0e030343ed65fd91bf6fe727: Status 404 returned error can't find the container with id 09edcc57a12da1a95e98d1b14e363713f55a5f1b0e030343ed65fd91bf6fe727 Feb 28 15:08:00 crc kubenswrapper[4897]: I0228 15:08:00.991046 4897 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 15:08:01 crc kubenswrapper[4897]: I0228 15:08:01.252077 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538188-xxktt" event={"ID":"b18cea1a-5cd9-4e95-b89a-6345e4b812f2","Type":"ContainerStarted","Data":"09edcc57a12da1a95e98d1b14e363713f55a5f1b0e030343ed65fd91bf6fe727"} Feb 28 15:08:03 crc kubenswrapper[4897]: I0228 15:08:03.271053 4897 generic.go:334] "Generic (PLEG): container finished" podID="b18cea1a-5cd9-4e95-b89a-6345e4b812f2" containerID="b3f2005529d52556768067b4f3313c44a544b9eb4ab0fe78abcfd0511c25f66d" exitCode=0 Feb 28 15:08:03 crc kubenswrapper[4897]: I0228 15:08:03.271234 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538188-xxktt" 
event={"ID":"b18cea1a-5cd9-4e95-b89a-6345e4b812f2","Type":"ContainerDied","Data":"b3f2005529d52556768067b4f3313c44a544b9eb4ab0fe78abcfd0511c25f66d"} Feb 28 15:08:04 crc kubenswrapper[4897]: I0228 15:08:04.665289 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538188-xxktt" Feb 28 15:08:04 crc kubenswrapper[4897]: I0228 15:08:04.821722 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nhbt\" (UniqueName: \"kubernetes.io/projected/b18cea1a-5cd9-4e95-b89a-6345e4b812f2-kube-api-access-6nhbt\") pod \"b18cea1a-5cd9-4e95-b89a-6345e4b812f2\" (UID: \"b18cea1a-5cd9-4e95-b89a-6345e4b812f2\") " Feb 28 15:08:04 crc kubenswrapper[4897]: I0228 15:08:04.827560 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b18cea1a-5cd9-4e95-b89a-6345e4b812f2-kube-api-access-6nhbt" (OuterVolumeSpecName: "kube-api-access-6nhbt") pod "b18cea1a-5cd9-4e95-b89a-6345e4b812f2" (UID: "b18cea1a-5cd9-4e95-b89a-6345e4b812f2"). InnerVolumeSpecName "kube-api-access-6nhbt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 15:08:04 crc kubenswrapper[4897]: I0228 15:08:04.925113 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nhbt\" (UniqueName: \"kubernetes.io/projected/b18cea1a-5cd9-4e95-b89a-6345e4b812f2-kube-api-access-6nhbt\") on node \"crc\" DevicePath \"\"" Feb 28 15:08:05 crc kubenswrapper[4897]: I0228 15:08:05.292805 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538188-xxktt" event={"ID":"b18cea1a-5cd9-4e95-b89a-6345e4b812f2","Type":"ContainerDied","Data":"09edcc57a12da1a95e98d1b14e363713f55a5f1b0e030343ed65fd91bf6fe727"} Feb 28 15:08:05 crc kubenswrapper[4897]: I0228 15:08:05.292878 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09edcc57a12da1a95e98d1b14e363713f55a5f1b0e030343ed65fd91bf6fe727" Feb 28 15:08:05 crc kubenswrapper[4897]: I0228 15:08:05.292904 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538188-xxktt" Feb 28 15:08:05 crc kubenswrapper[4897]: I0228 15:08:05.748965 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538182-v7drs"] Feb 28 15:08:05 crc kubenswrapper[4897]: I0228 15:08:05.759153 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538182-v7drs"] Feb 28 15:08:06 crc kubenswrapper[4897]: I0228 15:08:06.474672 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8127f512-c9ca-4dd4-83a8-ecc16e229187" path="/var/lib/kubelet/pods/8127f512-c9ca-4dd4-83a8-ecc16e229187/volumes" Feb 28 15:08:10 crc kubenswrapper[4897]: I0228 15:08:10.456860 4897 scope.go:117] "RemoveContainer" containerID="c106c4981df1ab91355a675145a1485c568beae35eb691f1e94573b338233899" Feb 28 15:08:10 crc kubenswrapper[4897]: E0228 15:08:10.457700 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 15:08:13 crc kubenswrapper[4897]: I0228 15:08:13.367018 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-w78wk_13c34d90-e126-4392-9f0d-31436773d681/prometheus-operator/0.log" Feb 28 15:08:13 crc kubenswrapper[4897]: I0228 15:08:13.484143 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-767759c544-pwwvk_77de0da5-c400-4927-bd0f-15d2ba642291/prometheus-operator-admission-webhook/0.log" Feb 28 15:08:13 crc kubenswrapper[4897]: I0228 15:08:13.541859 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-767759c544-sq8hd_c28960c4-dba8-4bc2-8695-13bc86523823/prometheus-operator-admission-webhook/0.log" Feb 28 15:08:13 crc kubenswrapper[4897]: I0228 15:08:13.711970 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-qkkz2_b1a1168a-8c63-4e9c-aefc-732c90395b55/operator/0.log" Feb 28 15:08:13 crc kubenswrapper[4897]: I0228 15:08:13.740230 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-tr862_799fd3ea-6ae8-4568-a69b-3e8c2a706b76/perses-operator/0.log" Feb 28 15:08:25 crc kubenswrapper[4897]: I0228 15:08:25.456436 4897 scope.go:117] "RemoveContainer" containerID="c106c4981df1ab91355a675145a1485c568beae35eb691f1e94573b338233899" Feb 28 15:08:25 crc kubenswrapper[4897]: E0228 15:08:25.457406 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brq22_openshift-machine-config-operator(6c4091e4-3a55-4913-81f3-026a1a97c57c)\"" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" Feb 28 15:08:29 crc kubenswrapper[4897]: I0228 15:08:29.466145 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-jz56q_5a9e2956-4bb5-4986-a8f0-a1a5bfd230ce/controller/0.log" Feb 28 15:08:29 crc kubenswrapper[4897]: I0228 15:08:29.473429 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-jz56q_5a9e2956-4bb5-4986-a8f0-a1a5bfd230ce/kube-rbac-proxy/0.log" Feb 28 15:08:29 crc kubenswrapper[4897]: I0228 15:08:29.667694 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mct2w_02f6fadd-b5a9-4d44-aba2-303ab05f15c6/cp-frr-files/0.log" Feb 28 15:08:29 crc kubenswrapper[4897]: I0228 15:08:29.824726 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mct2w_02f6fadd-b5a9-4d44-aba2-303ab05f15c6/cp-reloader/0.log" Feb 28 15:08:29 crc kubenswrapper[4897]: I0228 15:08:29.832657 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mct2w_02f6fadd-b5a9-4d44-aba2-303ab05f15c6/cp-frr-files/0.log" Feb 28 15:08:29 crc kubenswrapper[4897]: I0228 15:08:29.855304 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mct2w_02f6fadd-b5a9-4d44-aba2-303ab05f15c6/cp-reloader/0.log" Feb 28 15:08:29 crc kubenswrapper[4897]: I0228 15:08:29.885782 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mct2w_02f6fadd-b5a9-4d44-aba2-303ab05f15c6/cp-metrics/0.log" Feb 28 15:08:30 crc kubenswrapper[4897]: I0228 15:08:30.029097 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-mct2w_02f6fadd-b5a9-4d44-aba2-303ab05f15c6/cp-frr-files/0.log" Feb 28 15:08:30 crc kubenswrapper[4897]: I0228 15:08:30.052002 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mct2w_02f6fadd-b5a9-4d44-aba2-303ab05f15c6/cp-metrics/0.log" Feb 28 15:08:30 crc kubenswrapper[4897]: I0228 15:08:30.079364 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mct2w_02f6fadd-b5a9-4d44-aba2-303ab05f15c6/cp-reloader/0.log" Feb 28 15:08:30 crc kubenswrapper[4897]: I0228 15:08:30.091412 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mct2w_02f6fadd-b5a9-4d44-aba2-303ab05f15c6/cp-metrics/0.log" Feb 28 15:08:30 crc kubenswrapper[4897]: I0228 15:08:30.327865 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mct2w_02f6fadd-b5a9-4d44-aba2-303ab05f15c6/cp-reloader/0.log" Feb 28 15:08:30 crc kubenswrapper[4897]: I0228 15:08:30.331582 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mct2w_02f6fadd-b5a9-4d44-aba2-303ab05f15c6/cp-metrics/0.log" Feb 28 15:08:30 crc kubenswrapper[4897]: I0228 15:08:30.355140 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mct2w_02f6fadd-b5a9-4d44-aba2-303ab05f15c6/controller/0.log" Feb 28 15:08:30 crc kubenswrapper[4897]: I0228 15:08:30.371536 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mct2w_02f6fadd-b5a9-4d44-aba2-303ab05f15c6/cp-frr-files/0.log" Feb 28 15:08:30 crc kubenswrapper[4897]: I0228 15:08:30.594188 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mct2w_02f6fadd-b5a9-4d44-aba2-303ab05f15c6/kube-rbac-proxy-frr/0.log" Feb 28 15:08:30 crc kubenswrapper[4897]: I0228 15:08:30.657393 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-mct2w_02f6fadd-b5a9-4d44-aba2-303ab05f15c6/frr-metrics/0.log" Feb 28 15:08:30 crc kubenswrapper[4897]: I0228 15:08:30.667474 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mct2w_02f6fadd-b5a9-4d44-aba2-303ab05f15c6/kube-rbac-proxy/0.log" Feb 28 15:08:30 crc kubenswrapper[4897]: I0228 15:08:30.845057 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mct2w_02f6fadd-b5a9-4d44-aba2-303ab05f15c6/reloader/0.log" Feb 28 15:08:30 crc kubenswrapper[4897]: I0228 15:08:30.979750 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7f989f654f-m4dnz_6019677c-387b-4cb8-9c0f-4607f2b5971c/frr-k8s-webhook-server/0.log" Feb 28 15:08:31 crc kubenswrapper[4897]: I0228 15:08:31.131667 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7996b9d6bf-xmdxr_1c3404c1-8c8b-4cf9-89dd-8f370ad776e2/manager/0.log" Feb 28 15:08:31 crc kubenswrapper[4897]: I0228 15:08:31.285565 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-cc84c5f94-tk95x_3efe124f-7df2-4c2b-ad84-f8674f4d4fb8/webhook-server/0.log" Feb 28 15:08:31 crc kubenswrapper[4897]: I0228 15:08:31.549598 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-xqdlt_f599a5af-52e7-429e-9159-2959003096c7/kube-rbac-proxy/0.log" Feb 28 15:08:32 crc kubenswrapper[4897]: I0228 15:08:32.057531 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-xqdlt_f599a5af-52e7-429e-9159-2959003096c7/speaker/0.log" Feb 28 15:08:32 crc kubenswrapper[4897]: I0228 15:08:32.454048 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mct2w_02f6fadd-b5a9-4d44-aba2-303ab05f15c6/frr/0.log" Feb 28 15:08:38 crc kubenswrapper[4897]: I0228 15:08:38.456195 4897 scope.go:117] 
"RemoveContainer" containerID="c106c4981df1ab91355a675145a1485c568beae35eb691f1e94573b338233899" Feb 28 15:08:39 crc kubenswrapper[4897]: I0228 15:08:39.677597 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerStarted","Data":"683c189040de62491242f0b16208dca83b8659111e6285e55466bae109b218e5"} Feb 28 15:08:47 crc kubenswrapper[4897]: I0228 15:08:47.375954 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt_fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641/util/0.log" Feb 28 15:08:47 crc kubenswrapper[4897]: I0228 15:08:47.767018 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt_fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641/util/0.log" Feb 28 15:08:47 crc kubenswrapper[4897]: I0228 15:08:47.779086 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt_fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641/pull/0.log" Feb 28 15:08:47 crc kubenswrapper[4897]: I0228 15:08:47.795896 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt_fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641/pull/0.log" Feb 28 15:08:47 crc kubenswrapper[4897]: I0228 15:08:47.973808 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt_fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641/pull/0.log" Feb 28 15:08:47 crc kubenswrapper[4897]: I0228 15:08:47.989675 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt_fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641/util/0.log" Feb 28 15:08:48 crc kubenswrapper[4897]: I0228 15:08:48.009669 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274wtt_fb9ce3a3-2d08-4df3-b0c5-246ef0bfc641/extract/0.log" Feb 28 15:08:48 crc kubenswrapper[4897]: I0228 15:08:48.147835 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf_10011d40-3da8-4a8f-b650-17d5bcbd7f8a/util/0.log" Feb 28 15:08:48 crc kubenswrapper[4897]: I0228 15:08:48.356364 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf_10011d40-3da8-4a8f-b650-17d5bcbd7f8a/util/0.log" Feb 28 15:08:48 crc kubenswrapper[4897]: I0228 15:08:48.379222 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf_10011d40-3da8-4a8f-b650-17d5bcbd7f8a/pull/0.log" Feb 28 15:08:48 crc kubenswrapper[4897]: I0228 15:08:48.433022 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf_10011d40-3da8-4a8f-b650-17d5bcbd7f8a/pull/0.log" Feb 28 15:08:48 crc kubenswrapper[4897]: I0228 15:08:48.568239 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf_10011d40-3da8-4a8f-b650-17d5bcbd7f8a/util/0.log" Feb 28 15:08:48 crc kubenswrapper[4897]: I0228 15:08:48.613106 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf_10011d40-3da8-4a8f-b650-17d5bcbd7f8a/extract/0.log" Feb 
28 15:08:48 crc kubenswrapper[4897]: I0228 15:08:48.630102 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088rmgf_10011d40-3da8-4a8f-b650-17d5bcbd7f8a/pull/0.log" Feb 28 15:08:48 crc kubenswrapper[4897]: I0228 15:08:48.762929 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wjmzz_f60bfd3b-75e8-49ec-bc18-32660c88045d/extract-utilities/0.log" Feb 28 15:08:48 crc kubenswrapper[4897]: I0228 15:08:48.921816 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wjmzz_f60bfd3b-75e8-49ec-bc18-32660c88045d/extract-utilities/0.log" Feb 28 15:08:48 crc kubenswrapper[4897]: I0228 15:08:48.921969 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wjmzz_f60bfd3b-75e8-49ec-bc18-32660c88045d/extract-content/0.log" Feb 28 15:08:48 crc kubenswrapper[4897]: I0228 15:08:48.942817 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wjmzz_f60bfd3b-75e8-49ec-bc18-32660c88045d/extract-content/0.log" Feb 28 15:08:49 crc kubenswrapper[4897]: I0228 15:08:49.166548 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wjmzz_f60bfd3b-75e8-49ec-bc18-32660c88045d/extract-content/0.log" Feb 28 15:08:49 crc kubenswrapper[4897]: I0228 15:08:49.206274 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wjmzz_f60bfd3b-75e8-49ec-bc18-32660c88045d/extract-utilities/0.log" Feb 28 15:08:49 crc kubenswrapper[4897]: I0228 15:08:49.419713 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vrtf6_35856bb5-8436-497d-a4c1-2dac4df4a552/extract-utilities/0.log" Feb 28 15:08:49 crc kubenswrapper[4897]: I0228 15:08:49.606589 4897 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vrtf6_35856bb5-8436-497d-a4c1-2dac4df4a552/extract-content/0.log" Feb 28 15:08:49 crc kubenswrapper[4897]: I0228 15:08:49.620296 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vrtf6_35856bb5-8436-497d-a4c1-2dac4df4a552/extract-utilities/0.log" Feb 28 15:08:49 crc kubenswrapper[4897]: I0228 15:08:49.748252 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vrtf6_35856bb5-8436-497d-a4c1-2dac4df4a552/extract-content/0.log" Feb 28 15:08:49 crc kubenswrapper[4897]: I0228 15:08:49.908156 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vrtf6_35856bb5-8436-497d-a4c1-2dac4df4a552/extract-content/0.log" Feb 28 15:08:49 crc kubenswrapper[4897]: I0228 15:08:49.916796 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vrtf6_35856bb5-8436-497d-a4c1-2dac4df4a552/extract-utilities/0.log" Feb 28 15:08:49 crc kubenswrapper[4897]: I0228 15:08:49.998670 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wjmzz_f60bfd3b-75e8-49ec-bc18-32660c88045d/registry-server/0.log" Feb 28 15:08:50 crc kubenswrapper[4897]: I0228 15:08:50.171490 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5_c76ed8b8-228d-4263-addb-9571183ab82d/util/0.log" Feb 28 15:08:50 crc kubenswrapper[4897]: I0228 15:08:50.413468 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5_c76ed8b8-228d-4263-addb-9571183ab82d/pull/0.log" Feb 28 15:08:50 crc kubenswrapper[4897]: I0228 15:08:50.422015 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5_c76ed8b8-228d-4263-addb-9571183ab82d/util/0.log" Feb 28 15:08:50 crc kubenswrapper[4897]: I0228 15:08:50.423916 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5_c76ed8b8-228d-4263-addb-9571183ab82d/pull/0.log" Feb 28 15:08:50 crc kubenswrapper[4897]: I0228 15:08:50.735097 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5_c76ed8b8-228d-4263-addb-9571183ab82d/util/0.log" Feb 28 15:08:50 crc kubenswrapper[4897]: I0228 15:08:50.779073 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5_c76ed8b8-228d-4263-addb-9571183ab82d/extract/0.log" Feb 28 15:08:50 crc kubenswrapper[4897]: I0228 15:08:50.781650 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4n72v5_c76ed8b8-228d-4263-addb-9571183ab82d/pull/0.log" Feb 28 15:08:50 crc kubenswrapper[4897]: I0228 15:08:50.788589 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vrtf6_35856bb5-8436-497d-a4c1-2dac4df4a552/registry-server/0.log" Feb 28 15:08:50 crc kubenswrapper[4897]: I0228 15:08:50.961339 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fhsw4_f72e233a-6e31-4ca5-b12e-3c4213a80ad6/extract-utilities/0.log" Feb 28 15:08:50 crc kubenswrapper[4897]: I0228 15:08:50.996680 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-b4nxz_b38ea4e8-edc9-4c30-8189-dbcc29bc677e/marketplace-operator/0.log" Feb 28 15:08:51 crc kubenswrapper[4897]: I0228 15:08:51.211580 4897 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fhsw4_f72e233a-6e31-4ca5-b12e-3c4213a80ad6/extract-utilities/0.log" Feb 28 15:08:51 crc kubenswrapper[4897]: I0228 15:08:51.228247 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fhsw4_f72e233a-6e31-4ca5-b12e-3c4213a80ad6/extract-content/0.log" Feb 28 15:08:51 crc kubenswrapper[4897]: I0228 15:08:51.233409 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fhsw4_f72e233a-6e31-4ca5-b12e-3c4213a80ad6/extract-content/0.log" Feb 28 15:08:51 crc kubenswrapper[4897]: I0228 15:08:51.413921 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fhsw4_f72e233a-6e31-4ca5-b12e-3c4213a80ad6/extract-content/0.log" Feb 28 15:08:51 crc kubenswrapper[4897]: I0228 15:08:51.434648 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fhsw4_f72e233a-6e31-4ca5-b12e-3c4213a80ad6/extract-utilities/0.log" Feb 28 15:08:51 crc kubenswrapper[4897]: I0228 15:08:51.663795 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tr46p_54378322-d915-43c1-a3d9-837fd5b9121d/extract-utilities/0.log" Feb 28 15:08:51 crc kubenswrapper[4897]: I0228 15:08:51.668354 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fhsw4_f72e233a-6e31-4ca5-b12e-3c4213a80ad6/registry-server/0.log" Feb 28 15:08:51 crc kubenswrapper[4897]: I0228 15:08:51.850945 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tr46p_54378322-d915-43c1-a3d9-837fd5b9121d/extract-content/0.log" Feb 28 15:08:51 crc kubenswrapper[4897]: I0228 15:08:51.856080 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-tr46p_54378322-d915-43c1-a3d9-837fd5b9121d/extract-utilities/0.log" Feb 28 15:08:51 crc kubenswrapper[4897]: I0228 15:08:51.858845 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tr46p_54378322-d915-43c1-a3d9-837fd5b9121d/extract-content/0.log" Feb 28 15:08:52 crc kubenswrapper[4897]: I0228 15:08:52.044124 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tr46p_54378322-d915-43c1-a3d9-837fd5b9121d/extract-content/0.log" Feb 28 15:08:52 crc kubenswrapper[4897]: I0228 15:08:52.086814 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tr46p_54378322-d915-43c1-a3d9-837fd5b9121d/extract-utilities/0.log" Feb 28 15:08:54 crc kubenswrapper[4897]: I0228 15:08:54.464383 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tr46p_54378322-d915-43c1-a3d9-837fd5b9121d/registry-server/0.log" Feb 28 15:08:54 crc kubenswrapper[4897]: I0228 15:08:54.921324 4897 scope.go:117] "RemoveContainer" containerID="15cf924822b4b196d28d1b6eeaf02690a8ceee4a21b5190aa6e349a22bcd5a00" Feb 28 15:09:08 crc kubenswrapper[4897]: I0228 15:09:08.258403 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-w78wk_13c34d90-e126-4392-9f0d-31436773d681/prometheus-operator/0.log" Feb 28 15:09:08 crc kubenswrapper[4897]: I0228 15:09:08.269590 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-767759c544-pwwvk_77de0da5-c400-4927-bd0f-15d2ba642291/prometheus-operator-admission-webhook/0.log" Feb 28 15:09:08 crc kubenswrapper[4897]: I0228 15:09:08.330398 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-767759c544-sq8hd_c28960c4-dba8-4bc2-8695-13bc86523823/prometheus-operator-admission-webhook/0.log" Feb 28 15:09:08 crc kubenswrapper[4897]: I0228 15:09:08.437004 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-tr862_799fd3ea-6ae8-4568-a69b-3e8c2a706b76/perses-operator/0.log" Feb 28 15:09:08 crc kubenswrapper[4897]: I0228 15:09:08.487330 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-qkkz2_b1a1168a-8c63-4e9c-aefc-732c90395b55/operator/0.log" Feb 28 15:10:00 crc kubenswrapper[4897]: I0228 15:10:00.227402 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538190-drtwg"] Feb 28 15:10:00 crc kubenswrapper[4897]: E0228 15:10:00.228582 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b18cea1a-5cd9-4e95-b89a-6345e4b812f2" containerName="oc" Feb 28 15:10:00 crc kubenswrapper[4897]: I0228 15:10:00.228600 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="b18cea1a-5cd9-4e95-b89a-6345e4b812f2" containerName="oc" Feb 28 15:10:00 crc kubenswrapper[4897]: I0228 15:10:00.228925 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="b18cea1a-5cd9-4e95-b89a-6345e4b812f2" containerName="oc" Feb 28 15:10:00 crc kubenswrapper[4897]: I0228 15:10:00.229807 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538190-drtwg" Feb 28 15:10:00 crc kubenswrapper[4897]: I0228 15:10:00.234768 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 15:10:00 crc kubenswrapper[4897]: I0228 15:10:00.234977 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 15:10:00 crc kubenswrapper[4897]: I0228 15:10:00.235202 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 15:10:00 crc kubenswrapper[4897]: I0228 15:10:00.242724 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538190-drtwg"] Feb 28 15:10:00 crc kubenswrapper[4897]: I0228 15:10:00.401895 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntsjv\" (UniqueName: \"kubernetes.io/projected/d0e53367-1221-4294-8433-0aa02b6b3271-kube-api-access-ntsjv\") pod \"auto-csr-approver-29538190-drtwg\" (UID: \"d0e53367-1221-4294-8433-0aa02b6b3271\") " pod="openshift-infra/auto-csr-approver-29538190-drtwg" Feb 28 15:10:00 crc kubenswrapper[4897]: I0228 15:10:00.503635 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntsjv\" (UniqueName: \"kubernetes.io/projected/d0e53367-1221-4294-8433-0aa02b6b3271-kube-api-access-ntsjv\") pod \"auto-csr-approver-29538190-drtwg\" (UID: \"d0e53367-1221-4294-8433-0aa02b6b3271\") " pod="openshift-infra/auto-csr-approver-29538190-drtwg" Feb 28 15:10:00 crc kubenswrapper[4897]: I0228 15:10:00.534178 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntsjv\" (UniqueName: \"kubernetes.io/projected/d0e53367-1221-4294-8433-0aa02b6b3271-kube-api-access-ntsjv\") pod \"auto-csr-approver-29538190-drtwg\" (UID: \"d0e53367-1221-4294-8433-0aa02b6b3271\") " 
pod="openshift-infra/auto-csr-approver-29538190-drtwg" Feb 28 15:10:00 crc kubenswrapper[4897]: I0228 15:10:00.557006 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538190-drtwg" Feb 28 15:10:01 crc kubenswrapper[4897]: I0228 15:10:01.053542 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538190-drtwg"] Feb 28 15:10:01 crc kubenswrapper[4897]: W0228 15:10:01.064246 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0e53367_1221_4294_8433_0aa02b6b3271.slice/crio-8571af72da707966373503979018759cb99d47dd0910b636d0cc6141c2bfa771 WatchSource:0}: Error finding container 8571af72da707966373503979018759cb99d47dd0910b636d0cc6141c2bfa771: Status 404 returned error can't find the container with id 8571af72da707966373503979018759cb99d47dd0910b636d0cc6141c2bfa771 Feb 28 15:10:01 crc kubenswrapper[4897]: I0228 15:10:01.490711 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538190-drtwg" event={"ID":"d0e53367-1221-4294-8433-0aa02b6b3271","Type":"ContainerStarted","Data":"8571af72da707966373503979018759cb99d47dd0910b636d0cc6141c2bfa771"} Feb 28 15:10:03 crc kubenswrapper[4897]: I0228 15:10:03.518817 4897 generic.go:334] "Generic (PLEG): container finished" podID="d0e53367-1221-4294-8433-0aa02b6b3271" containerID="127beace58b1be37b7fcbecd5eccfcf8faec868e2f09e2a85ad49b4b7b6ed267" exitCode=0 Feb 28 15:10:03 crc kubenswrapper[4897]: I0228 15:10:03.519121 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538190-drtwg" event={"ID":"d0e53367-1221-4294-8433-0aa02b6b3271","Type":"ContainerDied","Data":"127beace58b1be37b7fcbecd5eccfcf8faec868e2f09e2a85ad49b4b7b6ed267"} Feb 28 15:10:04 crc kubenswrapper[4897]: I0228 15:10:04.906131 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538190-drtwg" Feb 28 15:10:05 crc kubenswrapper[4897]: I0228 15:10:05.013424 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntsjv\" (UniqueName: \"kubernetes.io/projected/d0e53367-1221-4294-8433-0aa02b6b3271-kube-api-access-ntsjv\") pod \"d0e53367-1221-4294-8433-0aa02b6b3271\" (UID: \"d0e53367-1221-4294-8433-0aa02b6b3271\") " Feb 28 15:10:05 crc kubenswrapper[4897]: I0228 15:10:05.023853 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0e53367-1221-4294-8433-0aa02b6b3271-kube-api-access-ntsjv" (OuterVolumeSpecName: "kube-api-access-ntsjv") pod "d0e53367-1221-4294-8433-0aa02b6b3271" (UID: "d0e53367-1221-4294-8433-0aa02b6b3271"). InnerVolumeSpecName "kube-api-access-ntsjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 15:10:05 crc kubenswrapper[4897]: I0228 15:10:05.115588 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntsjv\" (UniqueName: \"kubernetes.io/projected/d0e53367-1221-4294-8433-0aa02b6b3271-kube-api-access-ntsjv\") on node \"crc\" DevicePath \"\"" Feb 28 15:10:05 crc kubenswrapper[4897]: I0228 15:10:05.546444 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538190-drtwg" event={"ID":"d0e53367-1221-4294-8433-0aa02b6b3271","Type":"ContainerDied","Data":"8571af72da707966373503979018759cb99d47dd0910b636d0cc6141c2bfa771"} Feb 28 15:10:05 crc kubenswrapper[4897]: I0228 15:10:05.546502 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8571af72da707966373503979018759cb99d47dd0910b636d0cc6141c2bfa771" Feb 28 15:10:05 crc kubenswrapper[4897]: I0228 15:10:05.546579 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538190-drtwg" Feb 28 15:10:06 crc kubenswrapper[4897]: I0228 15:10:06.021412 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538184-drddn"] Feb 28 15:10:06 crc kubenswrapper[4897]: I0228 15:10:06.045018 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538184-drddn"] Feb 28 15:10:06 crc kubenswrapper[4897]: I0228 15:10:06.479091 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f5dec02-18c9-4d9c-8815-b97f620307ac" path="/var/lib/kubelet/pods/1f5dec02-18c9-4d9c-8815-b97f620307ac/volumes" Feb 28 15:10:55 crc kubenswrapper[4897]: I0228 15:10:55.067909 4897 scope.go:117] "RemoveContainer" containerID="4ae22418f23c48012f4ccd1a552a27201d802cd96319582d1bd7575cd4de6c6b" Feb 28 15:11:02 crc kubenswrapper[4897]: I0228 15:11:02.938474 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kg25s"] Feb 28 15:11:02 crc kubenswrapper[4897]: E0228 15:11:02.939620 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0e53367-1221-4294-8433-0aa02b6b3271" containerName="oc" Feb 28 15:11:02 crc kubenswrapper[4897]: I0228 15:11:02.939638 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0e53367-1221-4294-8433-0aa02b6b3271" containerName="oc" Feb 28 15:11:02 crc kubenswrapper[4897]: I0228 15:11:02.939910 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0e53367-1221-4294-8433-0aa02b6b3271" containerName="oc" Feb 28 15:11:02 crc kubenswrapper[4897]: I0228 15:11:02.941683 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kg25s" Feb 28 15:11:03 crc kubenswrapper[4897]: I0228 15:11:03.014237 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kg25s"] Feb 28 15:11:03 crc kubenswrapper[4897]: I0228 15:11:03.045566 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b4a5e-702d-41b6-9191-9583320f565a-utilities\") pod \"community-operators-kg25s\" (UID: \"149b4a5e-702d-41b6-9191-9583320f565a\") " pod="openshift-marketplace/community-operators-kg25s" Feb 28 15:11:03 crc kubenswrapper[4897]: I0228 15:11:03.046070 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9rsm\" (UniqueName: \"kubernetes.io/projected/149b4a5e-702d-41b6-9191-9583320f565a-kube-api-access-v9rsm\") pod \"community-operators-kg25s\" (UID: \"149b4a5e-702d-41b6-9191-9583320f565a\") " pod="openshift-marketplace/community-operators-kg25s" Feb 28 15:11:03 crc kubenswrapper[4897]: I0228 15:11:03.046172 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b4a5e-702d-41b6-9191-9583320f565a-catalog-content\") pod \"community-operators-kg25s\" (UID: \"149b4a5e-702d-41b6-9191-9583320f565a\") " pod="openshift-marketplace/community-operators-kg25s" Feb 28 15:11:03 crc kubenswrapper[4897]: I0228 15:11:03.147736 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9rsm\" (UniqueName: \"kubernetes.io/projected/149b4a5e-702d-41b6-9191-9583320f565a-kube-api-access-v9rsm\") pod \"community-operators-kg25s\" (UID: \"149b4a5e-702d-41b6-9191-9583320f565a\") " pod="openshift-marketplace/community-operators-kg25s" Feb 28 15:11:03 crc kubenswrapper[4897]: I0228 15:11:03.147820 4897 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b4a5e-702d-41b6-9191-9583320f565a-catalog-content\") pod \"community-operators-kg25s\" (UID: \"149b4a5e-702d-41b6-9191-9583320f565a\") " pod="openshift-marketplace/community-operators-kg25s" Feb 28 15:11:03 crc kubenswrapper[4897]: I0228 15:11:03.147921 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b4a5e-702d-41b6-9191-9583320f565a-utilities\") pod \"community-operators-kg25s\" (UID: \"149b4a5e-702d-41b6-9191-9583320f565a\") " pod="openshift-marketplace/community-operators-kg25s" Feb 28 15:11:03 crc kubenswrapper[4897]: I0228 15:11:03.148491 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b4a5e-702d-41b6-9191-9583320f565a-utilities\") pod \"community-operators-kg25s\" (UID: \"149b4a5e-702d-41b6-9191-9583320f565a\") " pod="openshift-marketplace/community-operators-kg25s" Feb 28 15:11:03 crc kubenswrapper[4897]: I0228 15:11:03.149054 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b4a5e-702d-41b6-9191-9583320f565a-catalog-content\") pod \"community-operators-kg25s\" (UID: \"149b4a5e-702d-41b6-9191-9583320f565a\") " pod="openshift-marketplace/community-operators-kg25s" Feb 28 15:11:03 crc kubenswrapper[4897]: I0228 15:11:03.170767 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9rsm\" (UniqueName: \"kubernetes.io/projected/149b4a5e-702d-41b6-9191-9583320f565a-kube-api-access-v9rsm\") pod \"community-operators-kg25s\" (UID: \"149b4a5e-702d-41b6-9191-9583320f565a\") " pod="openshift-marketplace/community-operators-kg25s" Feb 28 15:11:03 crc kubenswrapper[4897]: I0228 15:11:03.326338 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kg25s" Feb 28 15:11:03 crc kubenswrapper[4897]: I0228 15:11:03.371673 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 15:11:03 crc kubenswrapper[4897]: I0228 15:11:03.371719 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 15:11:03 crc kubenswrapper[4897]: I0228 15:11:03.859720 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kg25s"] Feb 28 15:11:04 crc kubenswrapper[4897]: I0228 15:11:04.552883 4897 generic.go:334] "Generic (PLEG): container finished" podID="149b4a5e-702d-41b6-9191-9583320f565a" containerID="d6d6ca51bca915608cf040a5ae6dd20836ba15727d68deb885280daa2a18458c" exitCode=0 Feb 28 15:11:04 crc kubenswrapper[4897]: I0228 15:11:04.553284 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kg25s" event={"ID":"149b4a5e-702d-41b6-9191-9583320f565a","Type":"ContainerDied","Data":"d6d6ca51bca915608cf040a5ae6dd20836ba15727d68deb885280daa2a18458c"} Feb 28 15:11:04 crc kubenswrapper[4897]: I0228 15:11:04.553358 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kg25s" event={"ID":"149b4a5e-702d-41b6-9191-9583320f565a","Type":"ContainerStarted","Data":"dff0526ae50d20c4356149196ee5c190855f4a2d6c8f6ee40ec9028d82b056cc"} Feb 28 15:11:06 crc kubenswrapper[4897]: I0228 15:11:06.594166 4897 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-kg25s" event={"ID":"149b4a5e-702d-41b6-9191-9583320f565a","Type":"ContainerStarted","Data":"8f45b417bce8920e4bf312eb8c64f3e1be7e460f4df78559be70c7d6b6c39f53"} Feb 28 15:11:07 crc kubenswrapper[4897]: I0228 15:11:07.619386 4897 generic.go:334] "Generic (PLEG): container finished" podID="149b4a5e-702d-41b6-9191-9583320f565a" containerID="8f45b417bce8920e4bf312eb8c64f3e1be7e460f4df78559be70c7d6b6c39f53" exitCode=0 Feb 28 15:11:07 crc kubenswrapper[4897]: I0228 15:11:07.619456 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kg25s" event={"ID":"149b4a5e-702d-41b6-9191-9583320f565a","Type":"ContainerDied","Data":"8f45b417bce8920e4bf312eb8c64f3e1be7e460f4df78559be70c7d6b6c39f53"} Feb 28 15:11:08 crc kubenswrapper[4897]: I0228 15:11:08.636204 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kg25s" event={"ID":"149b4a5e-702d-41b6-9191-9583320f565a","Type":"ContainerStarted","Data":"aa11f913e89f465aa1f73f0c2f832cdf18f0fc37f12e12b317f0b69b42a1030a"} Feb 28 15:11:08 crc kubenswrapper[4897]: I0228 15:11:08.670783 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kg25s" podStartSLOduration=3.158971001 podStartE2EDuration="6.670759129s" podCreationTimestamp="2026-02-28 15:11:02 +0000 UTC" firstStartedPulling="2026-02-28 15:11:04.556301302 +0000 UTC m=+6878.798621989" lastFinishedPulling="2026-02-28 15:11:08.06808942 +0000 UTC m=+6882.310410117" observedRunningTime="2026-02-28 15:11:08.663844083 +0000 UTC m=+6882.906164750" watchObservedRunningTime="2026-02-28 15:11:08.670759129 +0000 UTC m=+6882.913079806" Feb 28 15:11:10 crc kubenswrapper[4897]: I0228 15:11:10.664619 4897 generic.go:334] "Generic (PLEG): container finished" podID="029ca5c7-ae36-4e20-922c-c77b9b423ab9" 
containerID="c5438a75b0fba51ed17c4fd48ca7625bc9000944b07fa5faef9b9e2d847bb3b5" exitCode=0 Feb 28 15:11:10 crc kubenswrapper[4897]: I0228 15:11:10.664662 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sxspm/must-gather-gcj7h" event={"ID":"029ca5c7-ae36-4e20-922c-c77b9b423ab9","Type":"ContainerDied","Data":"c5438a75b0fba51ed17c4fd48ca7625bc9000944b07fa5faef9b9e2d847bb3b5"} Feb 28 15:11:10 crc kubenswrapper[4897]: I0228 15:11:10.665223 4897 scope.go:117] "RemoveContainer" containerID="c5438a75b0fba51ed17c4fd48ca7625bc9000944b07fa5faef9b9e2d847bb3b5" Feb 28 15:11:11 crc kubenswrapper[4897]: I0228 15:11:11.527789 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-sxspm_must-gather-gcj7h_029ca5c7-ae36-4e20-922c-c77b9b423ab9/gather/0.log" Feb 28 15:11:13 crc kubenswrapper[4897]: I0228 15:11:13.327400 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kg25s" Feb 28 15:11:13 crc kubenswrapper[4897]: I0228 15:11:13.328535 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kg25s" Feb 28 15:11:13 crc kubenswrapper[4897]: I0228 15:11:13.377568 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kg25s" Feb 28 15:11:13 crc kubenswrapper[4897]: I0228 15:11:13.774994 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kg25s" Feb 28 15:11:13 crc kubenswrapper[4897]: I0228 15:11:13.842778 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kg25s"] Feb 28 15:11:15 crc kubenswrapper[4897]: I0228 15:11:15.730097 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kg25s" podUID="149b4a5e-702d-41b6-9191-9583320f565a" 
containerName="registry-server" containerID="cri-o://aa11f913e89f465aa1f73f0c2f832cdf18f0fc37f12e12b317f0b69b42a1030a" gracePeriod=2 Feb 28 15:11:16 crc kubenswrapper[4897]: I0228 15:11:16.194856 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kg25s" Feb 28 15:11:16 crc kubenswrapper[4897]: I0228 15:11:16.287579 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b4a5e-702d-41b6-9191-9583320f565a-catalog-content\") pod \"149b4a5e-702d-41b6-9191-9583320f565a\" (UID: \"149b4a5e-702d-41b6-9191-9583320f565a\") " Feb 28 15:11:16 crc kubenswrapper[4897]: I0228 15:11:16.287717 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9rsm\" (UniqueName: \"kubernetes.io/projected/149b4a5e-702d-41b6-9191-9583320f565a-kube-api-access-v9rsm\") pod \"149b4a5e-702d-41b6-9191-9583320f565a\" (UID: \"149b4a5e-702d-41b6-9191-9583320f565a\") " Feb 28 15:11:16 crc kubenswrapper[4897]: I0228 15:11:16.287798 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b4a5e-702d-41b6-9191-9583320f565a-utilities\") pod \"149b4a5e-702d-41b6-9191-9583320f565a\" (UID: \"149b4a5e-702d-41b6-9191-9583320f565a\") " Feb 28 15:11:16 crc kubenswrapper[4897]: I0228 15:11:16.290002 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b4a5e-702d-41b6-9191-9583320f565a-utilities" (OuterVolumeSpecName: "utilities") pod "149b4a5e-702d-41b6-9191-9583320f565a" (UID: "149b4a5e-702d-41b6-9191-9583320f565a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 15:11:16 crc kubenswrapper[4897]: I0228 15:11:16.301141 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b4a5e-702d-41b6-9191-9583320f565a-kube-api-access-v9rsm" (OuterVolumeSpecName: "kube-api-access-v9rsm") pod "149b4a5e-702d-41b6-9191-9583320f565a" (UID: "149b4a5e-702d-41b6-9191-9583320f565a"). InnerVolumeSpecName "kube-api-access-v9rsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 15:11:16 crc kubenswrapper[4897]: I0228 15:11:16.390600 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9rsm\" (UniqueName: \"kubernetes.io/projected/149b4a5e-702d-41b6-9191-9583320f565a-kube-api-access-v9rsm\") on node \"crc\" DevicePath \"\"" Feb 28 15:11:16 crc kubenswrapper[4897]: I0228 15:11:16.390926 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b4a5e-702d-41b6-9191-9583320f565a-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 15:11:16 crc kubenswrapper[4897]: I0228 15:11:16.392781 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b4a5e-702d-41b6-9191-9583320f565a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b4a5e-702d-41b6-9191-9583320f565a" (UID: "149b4a5e-702d-41b6-9191-9583320f565a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 15:11:16 crc kubenswrapper[4897]: I0228 15:11:16.493117 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b4a5e-702d-41b6-9191-9583320f565a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 15:11:16 crc kubenswrapper[4897]: I0228 15:11:16.743346 4897 generic.go:334] "Generic (PLEG): container finished" podID="149b4a5e-702d-41b6-9191-9583320f565a" containerID="aa11f913e89f465aa1f73f0c2f832cdf18f0fc37f12e12b317f0b69b42a1030a" exitCode=0 Feb 28 15:11:16 crc kubenswrapper[4897]: I0228 15:11:16.743397 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kg25s" Feb 28 15:11:16 crc kubenswrapper[4897]: I0228 15:11:16.743411 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kg25s" event={"ID":"149b4a5e-702d-41b6-9191-9583320f565a","Type":"ContainerDied","Data":"aa11f913e89f465aa1f73f0c2f832cdf18f0fc37f12e12b317f0b69b42a1030a"} Feb 28 15:11:16 crc kubenswrapper[4897]: I0228 15:11:16.743452 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kg25s" event={"ID":"149b4a5e-702d-41b6-9191-9583320f565a","Type":"ContainerDied","Data":"dff0526ae50d20c4356149196ee5c190855f4a2d6c8f6ee40ec9028d82b056cc"} Feb 28 15:11:16 crc kubenswrapper[4897]: I0228 15:11:16.743489 4897 scope.go:117] "RemoveContainer" containerID="aa11f913e89f465aa1f73f0c2f832cdf18f0fc37f12e12b317f0b69b42a1030a" Feb 28 15:11:16 crc kubenswrapper[4897]: I0228 15:11:16.773496 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kg25s"] Feb 28 15:11:16 crc kubenswrapper[4897]: I0228 15:11:16.781618 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kg25s"] Feb 28 15:11:16 crc kubenswrapper[4897]: I0228 
15:11:16.785672 4897 scope.go:117] "RemoveContainer" containerID="8f45b417bce8920e4bf312eb8c64f3e1be7e460f4df78559be70c7d6b6c39f53" Feb 28 15:11:16 crc kubenswrapper[4897]: I0228 15:11:16.818909 4897 scope.go:117] "RemoveContainer" containerID="d6d6ca51bca915608cf040a5ae6dd20836ba15727d68deb885280daa2a18458c" Feb 28 15:11:16 crc kubenswrapper[4897]: I0228 15:11:16.859344 4897 scope.go:117] "RemoveContainer" containerID="aa11f913e89f465aa1f73f0c2f832cdf18f0fc37f12e12b317f0b69b42a1030a" Feb 28 15:11:16 crc kubenswrapper[4897]: E0228 15:11:16.859884 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa11f913e89f465aa1f73f0c2f832cdf18f0fc37f12e12b317f0b69b42a1030a\": container with ID starting with aa11f913e89f465aa1f73f0c2f832cdf18f0fc37f12e12b317f0b69b42a1030a not found: ID does not exist" containerID="aa11f913e89f465aa1f73f0c2f832cdf18f0fc37f12e12b317f0b69b42a1030a" Feb 28 15:11:16 crc kubenswrapper[4897]: I0228 15:11:16.859934 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa11f913e89f465aa1f73f0c2f832cdf18f0fc37f12e12b317f0b69b42a1030a"} err="failed to get container status \"aa11f913e89f465aa1f73f0c2f832cdf18f0fc37f12e12b317f0b69b42a1030a\": rpc error: code = NotFound desc = could not find container \"aa11f913e89f465aa1f73f0c2f832cdf18f0fc37f12e12b317f0b69b42a1030a\": container with ID starting with aa11f913e89f465aa1f73f0c2f832cdf18f0fc37f12e12b317f0b69b42a1030a not found: ID does not exist" Feb 28 15:11:16 crc kubenswrapper[4897]: I0228 15:11:16.859974 4897 scope.go:117] "RemoveContainer" containerID="8f45b417bce8920e4bf312eb8c64f3e1be7e460f4df78559be70c7d6b6c39f53" Feb 28 15:11:16 crc kubenswrapper[4897]: E0228 15:11:16.860346 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f45b417bce8920e4bf312eb8c64f3e1be7e460f4df78559be70c7d6b6c39f53\": container 
with ID starting with 8f45b417bce8920e4bf312eb8c64f3e1be7e460f4df78559be70c7d6b6c39f53 not found: ID does not exist" containerID="8f45b417bce8920e4bf312eb8c64f3e1be7e460f4df78559be70c7d6b6c39f53" Feb 28 15:11:16 crc kubenswrapper[4897]: I0228 15:11:16.860389 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f45b417bce8920e4bf312eb8c64f3e1be7e460f4df78559be70c7d6b6c39f53"} err="failed to get container status \"8f45b417bce8920e4bf312eb8c64f3e1be7e460f4df78559be70c7d6b6c39f53\": rpc error: code = NotFound desc = could not find container \"8f45b417bce8920e4bf312eb8c64f3e1be7e460f4df78559be70c7d6b6c39f53\": container with ID starting with 8f45b417bce8920e4bf312eb8c64f3e1be7e460f4df78559be70c7d6b6c39f53 not found: ID does not exist" Feb 28 15:11:16 crc kubenswrapper[4897]: I0228 15:11:16.860414 4897 scope.go:117] "RemoveContainer" containerID="d6d6ca51bca915608cf040a5ae6dd20836ba15727d68deb885280daa2a18458c" Feb 28 15:11:16 crc kubenswrapper[4897]: E0228 15:11:16.860708 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6d6ca51bca915608cf040a5ae6dd20836ba15727d68deb885280daa2a18458c\": container with ID starting with d6d6ca51bca915608cf040a5ae6dd20836ba15727d68deb885280daa2a18458c not found: ID does not exist" containerID="d6d6ca51bca915608cf040a5ae6dd20836ba15727d68deb885280daa2a18458c" Feb 28 15:11:16 crc kubenswrapper[4897]: I0228 15:11:16.860746 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6d6ca51bca915608cf040a5ae6dd20836ba15727d68deb885280daa2a18458c"} err="failed to get container status \"d6d6ca51bca915608cf040a5ae6dd20836ba15727d68deb885280daa2a18458c\": rpc error: code = NotFound desc = could not find container \"d6d6ca51bca915608cf040a5ae6dd20836ba15727d68deb885280daa2a18458c\": container with ID starting with d6d6ca51bca915608cf040a5ae6dd20836ba15727d68deb885280daa2a18458c not 
found: ID does not exist" Feb 28 15:11:18 crc kubenswrapper[4897]: I0228 15:11:18.480148 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b4a5e-702d-41b6-9191-9583320f565a" path="/var/lib/kubelet/pods/149b4a5e-702d-41b6-9191-9583320f565a/volumes" Feb 28 15:11:23 crc kubenswrapper[4897]: I0228 15:11:23.879628 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-sxspm/must-gather-gcj7h"] Feb 28 15:11:23 crc kubenswrapper[4897]: I0228 15:11:23.880614 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-sxspm/must-gather-gcj7h" podUID="029ca5c7-ae36-4e20-922c-c77b9b423ab9" containerName="copy" containerID="cri-o://9e3c85fe98291eb7503a81ae8fb532f7a44894e725611a54dfe6fbe01d970755" gracePeriod=2 Feb 28 15:11:23 crc kubenswrapper[4897]: I0228 15:11:23.895512 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-sxspm/must-gather-gcj7h"] Feb 28 15:11:24 crc kubenswrapper[4897]: I0228 15:11:24.362596 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-sxspm_must-gather-gcj7h_029ca5c7-ae36-4e20-922c-c77b9b423ab9/copy/0.log" Feb 28 15:11:24 crc kubenswrapper[4897]: I0228 15:11:24.363325 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sxspm/must-gather-gcj7h" Feb 28 15:11:24 crc kubenswrapper[4897]: I0228 15:11:24.383157 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzl89\" (UniqueName: \"kubernetes.io/projected/029ca5c7-ae36-4e20-922c-c77b9b423ab9-kube-api-access-tzl89\") pod \"029ca5c7-ae36-4e20-922c-c77b9b423ab9\" (UID: \"029ca5c7-ae36-4e20-922c-c77b9b423ab9\") " Feb 28 15:11:24 crc kubenswrapper[4897]: I0228 15:11:24.383608 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/029ca5c7-ae36-4e20-922c-c77b9b423ab9-must-gather-output\") pod \"029ca5c7-ae36-4e20-922c-c77b9b423ab9\" (UID: \"029ca5c7-ae36-4e20-922c-c77b9b423ab9\") " Feb 28 15:11:24 crc kubenswrapper[4897]: I0228 15:11:24.391614 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/029ca5c7-ae36-4e20-922c-c77b9b423ab9-kube-api-access-tzl89" (OuterVolumeSpecName: "kube-api-access-tzl89") pod "029ca5c7-ae36-4e20-922c-c77b9b423ab9" (UID: "029ca5c7-ae36-4e20-922c-c77b9b423ab9"). InnerVolumeSpecName "kube-api-access-tzl89". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 15:11:24 crc kubenswrapper[4897]: I0228 15:11:24.485478 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzl89\" (UniqueName: \"kubernetes.io/projected/029ca5c7-ae36-4e20-922c-c77b9b423ab9-kube-api-access-tzl89\") on node \"crc\" DevicePath \"\"" Feb 28 15:11:24 crc kubenswrapper[4897]: I0228 15:11:24.637846 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/029ca5c7-ae36-4e20-922c-c77b9b423ab9-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "029ca5c7-ae36-4e20-922c-c77b9b423ab9" (UID: "029ca5c7-ae36-4e20-922c-c77b9b423ab9"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 15:11:24 crc kubenswrapper[4897]: I0228 15:11:24.689616 4897 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/029ca5c7-ae36-4e20-922c-c77b9b423ab9-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 28 15:11:24 crc kubenswrapper[4897]: I0228 15:11:24.844238 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-sxspm_must-gather-gcj7h_029ca5c7-ae36-4e20-922c-c77b9b423ab9/copy/0.log" Feb 28 15:11:24 crc kubenswrapper[4897]: I0228 15:11:24.845262 4897 generic.go:334] "Generic (PLEG): container finished" podID="029ca5c7-ae36-4e20-922c-c77b9b423ab9" containerID="9e3c85fe98291eb7503a81ae8fb532f7a44894e725611a54dfe6fbe01d970755" exitCode=143 Feb 28 15:11:24 crc kubenswrapper[4897]: I0228 15:11:24.845335 4897 scope.go:117] "RemoveContainer" containerID="9e3c85fe98291eb7503a81ae8fb532f7a44894e725611a54dfe6fbe01d970755" Feb 28 15:11:24 crc kubenswrapper[4897]: I0228 15:11:24.845346 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sxspm/must-gather-gcj7h" Feb 28 15:11:24 crc kubenswrapper[4897]: I0228 15:11:24.869999 4897 scope.go:117] "RemoveContainer" containerID="c5438a75b0fba51ed17c4fd48ca7625bc9000944b07fa5faef9b9e2d847bb3b5" Feb 28 15:11:24 crc kubenswrapper[4897]: I0228 15:11:24.924224 4897 scope.go:117] "RemoveContainer" containerID="9e3c85fe98291eb7503a81ae8fb532f7a44894e725611a54dfe6fbe01d970755" Feb 28 15:11:24 crc kubenswrapper[4897]: E0228 15:11:24.924734 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e3c85fe98291eb7503a81ae8fb532f7a44894e725611a54dfe6fbe01d970755\": container with ID starting with 9e3c85fe98291eb7503a81ae8fb532f7a44894e725611a54dfe6fbe01d970755 not found: ID does not exist" containerID="9e3c85fe98291eb7503a81ae8fb532f7a44894e725611a54dfe6fbe01d970755" Feb 28 15:11:24 crc kubenswrapper[4897]: I0228 15:11:24.924765 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e3c85fe98291eb7503a81ae8fb532f7a44894e725611a54dfe6fbe01d970755"} err="failed to get container status \"9e3c85fe98291eb7503a81ae8fb532f7a44894e725611a54dfe6fbe01d970755\": rpc error: code = NotFound desc = could not find container \"9e3c85fe98291eb7503a81ae8fb532f7a44894e725611a54dfe6fbe01d970755\": container with ID starting with 9e3c85fe98291eb7503a81ae8fb532f7a44894e725611a54dfe6fbe01d970755 not found: ID does not exist" Feb 28 15:11:24 crc kubenswrapper[4897]: I0228 15:11:24.924790 4897 scope.go:117] "RemoveContainer" containerID="c5438a75b0fba51ed17c4fd48ca7625bc9000944b07fa5faef9b9e2d847bb3b5" Feb 28 15:11:24 crc kubenswrapper[4897]: E0228 15:11:24.925124 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5438a75b0fba51ed17c4fd48ca7625bc9000944b07fa5faef9b9e2d847bb3b5\": container with ID starting with 
c5438a75b0fba51ed17c4fd48ca7625bc9000944b07fa5faef9b9e2d847bb3b5 not found: ID does not exist" containerID="c5438a75b0fba51ed17c4fd48ca7625bc9000944b07fa5faef9b9e2d847bb3b5" Feb 28 15:11:24 crc kubenswrapper[4897]: I0228 15:11:24.925165 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5438a75b0fba51ed17c4fd48ca7625bc9000944b07fa5faef9b9e2d847bb3b5"} err="failed to get container status \"c5438a75b0fba51ed17c4fd48ca7625bc9000944b07fa5faef9b9e2d847bb3b5\": rpc error: code = NotFound desc = could not find container \"c5438a75b0fba51ed17c4fd48ca7625bc9000944b07fa5faef9b9e2d847bb3b5\": container with ID starting with c5438a75b0fba51ed17c4fd48ca7625bc9000944b07fa5faef9b9e2d847bb3b5 not found: ID does not exist" Feb 28 15:11:26 crc kubenswrapper[4897]: I0228 15:11:26.502845 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="029ca5c7-ae36-4e20-922c-c77b9b423ab9" path="/var/lib/kubelet/pods/029ca5c7-ae36-4e20-922c-c77b9b423ab9/volumes" Feb 28 15:11:33 crc kubenswrapper[4897]: I0228 15:11:33.371413 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 15:11:33 crc kubenswrapper[4897]: I0228 15:11:33.372145 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 15:12:00 crc kubenswrapper[4897]: I0228 15:12:00.201786 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538192-fm9xp"] Feb 28 15:12:00 crc kubenswrapper[4897]: E0228 15:12:00.202959 
4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="029ca5c7-ae36-4e20-922c-c77b9b423ab9" containerName="copy" Feb 28 15:12:00 crc kubenswrapper[4897]: I0228 15:12:00.202974 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="029ca5c7-ae36-4e20-922c-c77b9b423ab9" containerName="copy" Feb 28 15:12:00 crc kubenswrapper[4897]: E0228 15:12:00.202995 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="149b4a5e-702d-41b6-9191-9583320f565a" containerName="registry-server" Feb 28 15:12:00 crc kubenswrapper[4897]: I0228 15:12:00.203003 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="149b4a5e-702d-41b6-9191-9583320f565a" containerName="registry-server" Feb 28 15:12:00 crc kubenswrapper[4897]: E0228 15:12:00.203013 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="029ca5c7-ae36-4e20-922c-c77b9b423ab9" containerName="gather" Feb 28 15:12:00 crc kubenswrapper[4897]: I0228 15:12:00.203020 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="029ca5c7-ae36-4e20-922c-c77b9b423ab9" containerName="gather" Feb 28 15:12:00 crc kubenswrapper[4897]: E0228 15:12:00.203052 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="149b4a5e-702d-41b6-9191-9583320f565a" containerName="extract-utilities" Feb 28 15:12:00 crc kubenswrapper[4897]: I0228 15:12:00.203060 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="149b4a5e-702d-41b6-9191-9583320f565a" containerName="extract-utilities" Feb 28 15:12:00 crc kubenswrapper[4897]: E0228 15:12:00.203082 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="149b4a5e-702d-41b6-9191-9583320f565a" containerName="extract-content" Feb 28 15:12:00 crc kubenswrapper[4897]: I0228 15:12:00.203089 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="149b4a5e-702d-41b6-9191-9583320f565a" containerName="extract-content" Feb 28 15:12:00 crc kubenswrapper[4897]: I0228 15:12:00.203320 4897 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="029ca5c7-ae36-4e20-922c-c77b9b423ab9" containerName="gather" Feb 28 15:12:00 crc kubenswrapper[4897]: I0228 15:12:00.203356 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="029ca5c7-ae36-4e20-922c-c77b9b423ab9" containerName="copy" Feb 28 15:12:00 crc kubenswrapper[4897]: I0228 15:12:00.203371 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="149b4a5e-702d-41b6-9191-9583320f565a" containerName="registry-server" Feb 28 15:12:00 crc kubenswrapper[4897]: I0228 15:12:00.205807 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538192-fm9xp" Feb 28 15:12:00 crc kubenswrapper[4897]: I0228 15:12:00.210896 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 15:12:00 crc kubenswrapper[4897]: I0228 15:12:00.210964 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 15:12:00 crc kubenswrapper[4897]: I0228 15:12:00.212113 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 15:12:00 crc kubenswrapper[4897]: I0228 15:12:00.230360 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538192-fm9xp"] Feb 28 15:12:00 crc kubenswrapper[4897]: I0228 15:12:00.301407 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpv27\" (UniqueName: \"kubernetes.io/projected/87c926f7-85dc-4df2-8cc8-a9f702b3771f-kube-api-access-vpv27\") pod \"auto-csr-approver-29538192-fm9xp\" (UID: \"87c926f7-85dc-4df2-8cc8-a9f702b3771f\") " pod="openshift-infra/auto-csr-approver-29538192-fm9xp" Feb 28 15:12:00 crc kubenswrapper[4897]: I0228 15:12:00.403982 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpv27\" (UniqueName: 
\"kubernetes.io/projected/87c926f7-85dc-4df2-8cc8-a9f702b3771f-kube-api-access-vpv27\") pod \"auto-csr-approver-29538192-fm9xp\" (UID: \"87c926f7-85dc-4df2-8cc8-a9f702b3771f\") " pod="openshift-infra/auto-csr-approver-29538192-fm9xp" Feb 28 15:12:00 crc kubenswrapper[4897]: I0228 15:12:00.446038 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpv27\" (UniqueName: \"kubernetes.io/projected/87c926f7-85dc-4df2-8cc8-a9f702b3771f-kube-api-access-vpv27\") pod \"auto-csr-approver-29538192-fm9xp\" (UID: \"87c926f7-85dc-4df2-8cc8-a9f702b3771f\") " pod="openshift-infra/auto-csr-approver-29538192-fm9xp" Feb 28 15:12:00 crc kubenswrapper[4897]: I0228 15:12:00.539462 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538192-fm9xp" Feb 28 15:12:00 crc kubenswrapper[4897]: I0228 15:12:00.978015 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538192-fm9xp"] Feb 28 15:12:01 crc kubenswrapper[4897]: I0228 15:12:01.306336 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538192-fm9xp" event={"ID":"87c926f7-85dc-4df2-8cc8-a9f702b3771f","Type":"ContainerStarted","Data":"c5ef93897976c4c97973258daec555e0a8262c1047ee1a1b936ce0a109f45d4f"} Feb 28 15:12:02 crc kubenswrapper[4897]: I0228 15:12:02.327941 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538192-fm9xp" event={"ID":"87c926f7-85dc-4df2-8cc8-a9f702b3771f","Type":"ContainerStarted","Data":"8fc582038ea878fa22c5101d89c95ef5ab8a3d034388bbc62b18103e26290389"} Feb 28 15:12:02 crc kubenswrapper[4897]: I0228 15:12:02.342404 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29538192-fm9xp" podStartSLOduration=1.5455887659999998 podStartE2EDuration="2.342387095s" podCreationTimestamp="2026-02-28 15:12:00 +0000 UTC" 
firstStartedPulling="2026-02-28 15:12:00.970729115 +0000 UTC m=+6935.213049802" lastFinishedPulling="2026-02-28 15:12:01.767527454 +0000 UTC m=+6936.009848131" observedRunningTime="2026-02-28 15:12:02.34010701 +0000 UTC m=+6936.582427667" watchObservedRunningTime="2026-02-28 15:12:02.342387095 +0000 UTC m=+6936.584707752" Feb 28 15:12:02 crc kubenswrapper[4897]: E0228 15:12:02.559252 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87c926f7_85dc_4df2_8cc8_a9f702b3771f.slice/crio-conmon-8fc582038ea878fa22c5101d89c95ef5ab8a3d034388bbc62b18103e26290389.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87c926f7_85dc_4df2_8cc8_a9f702b3771f.slice/crio-8fc582038ea878fa22c5101d89c95ef5ab8a3d034388bbc62b18103e26290389.scope\": RecentStats: unable to find data in memory cache]" Feb 28 15:12:03 crc kubenswrapper[4897]: I0228 15:12:03.344228 4897 generic.go:334] "Generic (PLEG): container finished" podID="87c926f7-85dc-4df2-8cc8-a9f702b3771f" containerID="8fc582038ea878fa22c5101d89c95ef5ab8a3d034388bbc62b18103e26290389" exitCode=0 Feb 28 15:12:03 crc kubenswrapper[4897]: I0228 15:12:03.344679 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538192-fm9xp" event={"ID":"87c926f7-85dc-4df2-8cc8-a9f702b3771f","Type":"ContainerDied","Data":"8fc582038ea878fa22c5101d89c95ef5ab8a3d034388bbc62b18103e26290389"} Feb 28 15:12:03 crc kubenswrapper[4897]: I0228 15:12:03.371266 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 15:12:03 crc kubenswrapper[4897]: I0228 15:12:03.371338 4897 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 15:12:03 crc kubenswrapper[4897]: I0228 15:12:03.371379 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brq22" Feb 28 15:12:03 crc kubenswrapper[4897]: I0228 15:12:03.372194 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"683c189040de62491242f0b16208dca83b8659111e6285e55466bae109b218e5"} pod="openshift-machine-config-operator/machine-config-daemon-brq22" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 15:12:03 crc kubenswrapper[4897]: I0228 15:12:03.372258 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" containerID="cri-o://683c189040de62491242f0b16208dca83b8659111e6285e55466bae109b218e5" gracePeriod=600 Feb 28 15:12:04 crc kubenswrapper[4897]: I0228 15:12:04.368819 4897 generic.go:334] "Generic (PLEG): container finished" podID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerID="683c189040de62491242f0b16208dca83b8659111e6285e55466bae109b218e5" exitCode=0 Feb 28 15:12:04 crc kubenswrapper[4897]: I0228 15:12:04.369077 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerDied","Data":"683c189040de62491242f0b16208dca83b8659111e6285e55466bae109b218e5"} Feb 28 15:12:04 crc kubenswrapper[4897]: I0228 
15:12:04.369679 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brq22" event={"ID":"6c4091e4-3a55-4913-81f3-026a1a97c57c","Type":"ContainerStarted","Data":"db18403668018a266b5db5001fafa7efd1a2359bddbe81987e5d7bf811dd70b6"} Feb 28 15:12:04 crc kubenswrapper[4897]: I0228 15:12:04.369702 4897 scope.go:117] "RemoveContainer" containerID="c106c4981df1ab91355a675145a1485c568beae35eb691f1e94573b338233899" Feb 28 15:12:04 crc kubenswrapper[4897]: I0228 15:12:04.773627 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538192-fm9xp" Feb 28 15:12:04 crc kubenswrapper[4897]: I0228 15:12:04.913935 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpv27\" (UniqueName: \"kubernetes.io/projected/87c926f7-85dc-4df2-8cc8-a9f702b3771f-kube-api-access-vpv27\") pod \"87c926f7-85dc-4df2-8cc8-a9f702b3771f\" (UID: \"87c926f7-85dc-4df2-8cc8-a9f702b3771f\") " Feb 28 15:12:04 crc kubenswrapper[4897]: I0228 15:12:04.923680 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87c926f7-85dc-4df2-8cc8-a9f702b3771f-kube-api-access-vpv27" (OuterVolumeSpecName: "kube-api-access-vpv27") pod "87c926f7-85dc-4df2-8cc8-a9f702b3771f" (UID: "87c926f7-85dc-4df2-8cc8-a9f702b3771f"). InnerVolumeSpecName "kube-api-access-vpv27". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 15:12:05 crc kubenswrapper[4897]: I0228 15:12:05.016753 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpv27\" (UniqueName: \"kubernetes.io/projected/87c926f7-85dc-4df2-8cc8-a9f702b3771f-kube-api-access-vpv27\") on node \"crc\" DevicePath \"\"" Feb 28 15:12:05 crc kubenswrapper[4897]: I0228 15:12:05.393466 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538192-fm9xp" event={"ID":"87c926f7-85dc-4df2-8cc8-a9f702b3771f","Type":"ContainerDied","Data":"c5ef93897976c4c97973258daec555e0a8262c1047ee1a1b936ce0a109f45d4f"} Feb 28 15:12:05 crc kubenswrapper[4897]: I0228 15:12:05.393527 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5ef93897976c4c97973258daec555e0a8262c1047ee1a1b936ce0a109f45d4f" Feb 28 15:12:05 crc kubenswrapper[4897]: I0228 15:12:05.393559 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538192-fm9xp" Feb 28 15:12:05 crc kubenswrapper[4897]: I0228 15:12:05.450488 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538186-hjpq4"] Feb 28 15:12:05 crc kubenswrapper[4897]: I0228 15:12:05.464543 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538186-hjpq4"] Feb 28 15:12:06 crc kubenswrapper[4897]: I0228 15:12:06.479010 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e46b8174-fe65-446e-965a-6786bbefd8ba" path="/var/lib/kubelet/pods/e46b8174-fe65-446e-965a-6786bbefd8ba/volumes" Feb 28 15:12:55 crc kubenswrapper[4897]: I0228 15:12:55.239003 4897 scope.go:117] "RemoveContainer" containerID="ed2d3e3fa1853287ebf283ce316efc4f4895270bf54e63700ec9cb7a51e8f3bd" Feb 28 15:13:35 crc kubenswrapper[4897]: I0228 15:13:35.723855 4897 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-njrcs"] Feb 28 15:13:35 crc kubenswrapper[4897]: E0228 15:13:35.725427 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87c926f7-85dc-4df2-8cc8-a9f702b3771f" containerName="oc" Feb 28 15:13:35 crc kubenswrapper[4897]: I0228 15:13:35.725454 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="87c926f7-85dc-4df2-8cc8-a9f702b3771f" containerName="oc" Feb 28 15:13:35 crc kubenswrapper[4897]: I0228 15:13:35.725872 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="87c926f7-85dc-4df2-8cc8-a9f702b3771f" containerName="oc" Feb 28 15:13:35 crc kubenswrapper[4897]: I0228 15:13:35.729347 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-njrcs" Feb 28 15:13:35 crc kubenswrapper[4897]: I0228 15:13:35.741395 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-njrcs"] Feb 28 15:13:35 crc kubenswrapper[4897]: I0228 15:13:35.792015 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1eb13fb0-3ef8-49b4-927d-1cab5b128ee5-utilities\") pod \"certified-operators-njrcs\" (UID: \"1eb13fb0-3ef8-49b4-927d-1cab5b128ee5\") " pod="openshift-marketplace/certified-operators-njrcs" Feb 28 15:13:35 crc kubenswrapper[4897]: I0228 15:13:35.792140 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1eb13fb0-3ef8-49b4-927d-1cab5b128ee5-catalog-content\") pod \"certified-operators-njrcs\" (UID: \"1eb13fb0-3ef8-49b4-927d-1cab5b128ee5\") " pod="openshift-marketplace/certified-operators-njrcs" Feb 28 15:13:35 crc kubenswrapper[4897]: I0228 15:13:35.792166 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdxrd\" 
(UniqueName: \"kubernetes.io/projected/1eb13fb0-3ef8-49b4-927d-1cab5b128ee5-kube-api-access-cdxrd\") pod \"certified-operators-njrcs\" (UID: \"1eb13fb0-3ef8-49b4-927d-1cab5b128ee5\") " pod="openshift-marketplace/certified-operators-njrcs" Feb 28 15:13:35 crc kubenswrapper[4897]: I0228 15:13:35.894133 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1eb13fb0-3ef8-49b4-927d-1cab5b128ee5-utilities\") pod \"certified-operators-njrcs\" (UID: \"1eb13fb0-3ef8-49b4-927d-1cab5b128ee5\") " pod="openshift-marketplace/certified-operators-njrcs" Feb 28 15:13:35 crc kubenswrapper[4897]: I0228 15:13:35.894284 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1eb13fb0-3ef8-49b4-927d-1cab5b128ee5-catalog-content\") pod \"certified-operators-njrcs\" (UID: \"1eb13fb0-3ef8-49b4-927d-1cab5b128ee5\") " pod="openshift-marketplace/certified-operators-njrcs" Feb 28 15:13:35 crc kubenswrapper[4897]: I0228 15:13:35.894342 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdxrd\" (UniqueName: \"kubernetes.io/projected/1eb13fb0-3ef8-49b4-927d-1cab5b128ee5-kube-api-access-cdxrd\") pod \"certified-operators-njrcs\" (UID: \"1eb13fb0-3ef8-49b4-927d-1cab5b128ee5\") " pod="openshift-marketplace/certified-operators-njrcs" Feb 28 15:13:35 crc kubenswrapper[4897]: I0228 15:13:35.894783 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1eb13fb0-3ef8-49b4-927d-1cab5b128ee5-utilities\") pod \"certified-operators-njrcs\" (UID: \"1eb13fb0-3ef8-49b4-927d-1cab5b128ee5\") " pod="openshift-marketplace/certified-operators-njrcs" Feb 28 15:13:35 crc kubenswrapper[4897]: I0228 15:13:35.894904 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1eb13fb0-3ef8-49b4-927d-1cab5b128ee5-catalog-content\") pod \"certified-operators-njrcs\" (UID: \"1eb13fb0-3ef8-49b4-927d-1cab5b128ee5\") " pod="openshift-marketplace/certified-operators-njrcs" Feb 28 15:13:35 crc kubenswrapper[4897]: I0228 15:13:35.915015 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdxrd\" (UniqueName: \"kubernetes.io/projected/1eb13fb0-3ef8-49b4-927d-1cab5b128ee5-kube-api-access-cdxrd\") pod \"certified-operators-njrcs\" (UID: \"1eb13fb0-3ef8-49b4-927d-1cab5b128ee5\") " pod="openshift-marketplace/certified-operators-njrcs" Feb 28 15:13:36 crc kubenswrapper[4897]: I0228 15:13:36.084702 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-njrcs" Feb 28 15:13:36 crc kubenswrapper[4897]: I0228 15:13:36.581988 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-njrcs"] Feb 28 15:13:37 crc kubenswrapper[4897]: I0228 15:13:37.542736 4897 generic.go:334] "Generic (PLEG): container finished" podID="1eb13fb0-3ef8-49b4-927d-1cab5b128ee5" containerID="1c8dddaab0ba81d634c0ae2fc8d3beff5757d34a39fe26ce0c9fb752c34ce615" exitCode=0 Feb 28 15:13:37 crc kubenswrapper[4897]: I0228 15:13:37.543060 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njrcs" event={"ID":"1eb13fb0-3ef8-49b4-927d-1cab5b128ee5","Type":"ContainerDied","Data":"1c8dddaab0ba81d634c0ae2fc8d3beff5757d34a39fe26ce0c9fb752c34ce615"} Feb 28 15:13:37 crc kubenswrapper[4897]: I0228 15:13:37.543099 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njrcs" event={"ID":"1eb13fb0-3ef8-49b4-927d-1cab5b128ee5","Type":"ContainerStarted","Data":"bdf282ce7da397e2a07630b07ce650008a3d7d761d114fe2fadf2bad0c98c8a5"} Feb 28 15:13:37 crc kubenswrapper[4897]: I0228 15:13:37.546221 4897 provider.go:102] Refreshing cache for 
provider: *credentialprovider.defaultDockerConfigProvider Feb 28 15:13:39 crc kubenswrapper[4897]: I0228 15:13:39.569917 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njrcs" event={"ID":"1eb13fb0-3ef8-49b4-927d-1cab5b128ee5","Type":"ContainerStarted","Data":"af0bb2fa4dd9719597b35606dc669f71c4366c21cee02287179f595304746015"} Feb 28 15:13:40 crc kubenswrapper[4897]: I0228 15:13:40.594619 4897 generic.go:334] "Generic (PLEG): container finished" podID="1eb13fb0-3ef8-49b4-927d-1cab5b128ee5" containerID="af0bb2fa4dd9719597b35606dc669f71c4366c21cee02287179f595304746015" exitCode=0 Feb 28 15:13:40 crc kubenswrapper[4897]: I0228 15:13:40.595602 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njrcs" event={"ID":"1eb13fb0-3ef8-49b4-927d-1cab5b128ee5","Type":"ContainerDied","Data":"af0bb2fa4dd9719597b35606dc669f71c4366c21cee02287179f595304746015"} Feb 28 15:13:41 crc kubenswrapper[4897]: I0228 15:13:41.608634 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njrcs" event={"ID":"1eb13fb0-3ef8-49b4-927d-1cab5b128ee5","Type":"ContainerStarted","Data":"dfb0c97d27e277756d2f99fcab42f6e967c7b532c945771aa31b89a35e385c47"} Feb 28 15:13:41 crc kubenswrapper[4897]: I0228 15:13:41.639803 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-njrcs" podStartSLOduration=3.1982610989999998 podStartE2EDuration="6.639779736s" podCreationTimestamp="2026-02-28 15:13:35 +0000 UTC" firstStartedPulling="2026-02-28 15:13:37.545951244 +0000 UTC m=+7031.788271911" lastFinishedPulling="2026-02-28 15:13:40.987469861 +0000 UTC m=+7035.229790548" observedRunningTime="2026-02-28 15:13:41.629372261 +0000 UTC m=+7035.871692918" watchObservedRunningTime="2026-02-28 15:13:41.639779736 +0000 UTC m=+7035.882100393" Feb 28 15:13:46 crc kubenswrapper[4897]: I0228 15:13:46.085494 4897 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-njrcs" Feb 28 15:13:46 crc kubenswrapper[4897]: I0228 15:13:46.086294 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-njrcs" Feb 28 15:13:46 crc kubenswrapper[4897]: I0228 15:13:46.159528 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-njrcs" Feb 28 15:13:46 crc kubenswrapper[4897]: I0228 15:13:46.724291 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-njrcs" Feb 28 15:13:46 crc kubenswrapper[4897]: I0228 15:13:46.801284 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-njrcs"] Feb 28 15:13:48 crc kubenswrapper[4897]: I0228 15:13:48.686132 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-njrcs" podUID="1eb13fb0-3ef8-49b4-927d-1cab5b128ee5" containerName="registry-server" containerID="cri-o://dfb0c97d27e277756d2f99fcab42f6e967c7b532c945771aa31b89a35e385c47" gracePeriod=2 Feb 28 15:13:48 crc kubenswrapper[4897]: I0228 15:13:48.836899 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rgdq5"] Feb 28 15:13:48 crc kubenswrapper[4897]: I0228 15:13:48.839802 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgdq5" Feb 28 15:13:48 crc kubenswrapper[4897]: I0228 15:13:48.871015 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgdq5"] Feb 28 15:13:48 crc kubenswrapper[4897]: I0228 15:13:48.913343 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dbe69cc-7089-47e4-9594-729690989192-utilities\") pod \"redhat-marketplace-rgdq5\" (UID: \"6dbe69cc-7089-47e4-9594-729690989192\") " pod="openshift-marketplace/redhat-marketplace-rgdq5" Feb 28 15:13:48 crc kubenswrapper[4897]: I0228 15:13:48.913459 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dbe69cc-7089-47e4-9594-729690989192-catalog-content\") pod \"redhat-marketplace-rgdq5\" (UID: \"6dbe69cc-7089-47e4-9594-729690989192\") " pod="openshift-marketplace/redhat-marketplace-rgdq5" Feb 28 15:13:48 crc kubenswrapper[4897]: I0228 15:13:48.913529 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp5r5\" (UniqueName: \"kubernetes.io/projected/6dbe69cc-7089-47e4-9594-729690989192-kube-api-access-vp5r5\") pod \"redhat-marketplace-rgdq5\" (UID: \"6dbe69cc-7089-47e4-9594-729690989192\") " pod="openshift-marketplace/redhat-marketplace-rgdq5" Feb 28 15:13:49 crc kubenswrapper[4897]: I0228 15:13:49.015112 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dbe69cc-7089-47e4-9594-729690989192-catalog-content\") pod \"redhat-marketplace-rgdq5\" (UID: \"6dbe69cc-7089-47e4-9594-729690989192\") " pod="openshift-marketplace/redhat-marketplace-rgdq5" Feb 28 15:13:49 crc kubenswrapper[4897]: I0228 15:13:49.015179 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-vp5r5\" (UniqueName: \"kubernetes.io/projected/6dbe69cc-7089-47e4-9594-729690989192-kube-api-access-vp5r5\") pod \"redhat-marketplace-rgdq5\" (UID: \"6dbe69cc-7089-47e4-9594-729690989192\") " pod="openshift-marketplace/redhat-marketplace-rgdq5" Feb 28 15:13:49 crc kubenswrapper[4897]: I0228 15:13:49.015293 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dbe69cc-7089-47e4-9594-729690989192-utilities\") pod \"redhat-marketplace-rgdq5\" (UID: \"6dbe69cc-7089-47e4-9594-729690989192\") " pod="openshift-marketplace/redhat-marketplace-rgdq5" Feb 28 15:13:49 crc kubenswrapper[4897]: I0228 15:13:49.015675 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dbe69cc-7089-47e4-9594-729690989192-catalog-content\") pod \"redhat-marketplace-rgdq5\" (UID: \"6dbe69cc-7089-47e4-9594-729690989192\") " pod="openshift-marketplace/redhat-marketplace-rgdq5" Feb 28 15:13:49 crc kubenswrapper[4897]: I0228 15:13:49.015723 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dbe69cc-7089-47e4-9594-729690989192-utilities\") pod \"redhat-marketplace-rgdq5\" (UID: \"6dbe69cc-7089-47e4-9594-729690989192\") " pod="openshift-marketplace/redhat-marketplace-rgdq5" Feb 28 15:13:49 crc kubenswrapper[4897]: I0228 15:13:49.035155 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp5r5\" (UniqueName: \"kubernetes.io/projected/6dbe69cc-7089-47e4-9594-729690989192-kube-api-access-vp5r5\") pod \"redhat-marketplace-rgdq5\" (UID: \"6dbe69cc-7089-47e4-9594-729690989192\") " pod="openshift-marketplace/redhat-marketplace-rgdq5" Feb 28 15:13:49 crc kubenswrapper[4897]: I0228 15:13:49.192745 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgdq5" Feb 28 15:13:49 crc kubenswrapper[4897]: I0228 15:13:49.317005 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-njrcs" Feb 28 15:13:49 crc kubenswrapper[4897]: I0228 15:13:49.439260 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1eb13fb0-3ef8-49b4-927d-1cab5b128ee5-catalog-content\") pod \"1eb13fb0-3ef8-49b4-927d-1cab5b128ee5\" (UID: \"1eb13fb0-3ef8-49b4-927d-1cab5b128ee5\") " Feb 28 15:13:49 crc kubenswrapper[4897]: I0228 15:13:49.439410 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdxrd\" (UniqueName: \"kubernetes.io/projected/1eb13fb0-3ef8-49b4-927d-1cab5b128ee5-kube-api-access-cdxrd\") pod \"1eb13fb0-3ef8-49b4-927d-1cab5b128ee5\" (UID: \"1eb13fb0-3ef8-49b4-927d-1cab5b128ee5\") " Feb 28 15:13:49 crc kubenswrapper[4897]: I0228 15:13:49.439519 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1eb13fb0-3ef8-49b4-927d-1cab5b128ee5-utilities\") pod \"1eb13fb0-3ef8-49b4-927d-1cab5b128ee5\" (UID: \"1eb13fb0-3ef8-49b4-927d-1cab5b128ee5\") " Feb 28 15:13:49 crc kubenswrapper[4897]: I0228 15:13:49.442476 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1eb13fb0-3ef8-49b4-927d-1cab5b128ee5-utilities" (OuterVolumeSpecName: "utilities") pod "1eb13fb0-3ef8-49b4-927d-1cab5b128ee5" (UID: "1eb13fb0-3ef8-49b4-927d-1cab5b128ee5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 15:13:49 crc kubenswrapper[4897]: I0228 15:13:49.445531 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1eb13fb0-3ef8-49b4-927d-1cab5b128ee5-kube-api-access-cdxrd" (OuterVolumeSpecName: "kube-api-access-cdxrd") pod "1eb13fb0-3ef8-49b4-927d-1cab5b128ee5" (UID: "1eb13fb0-3ef8-49b4-927d-1cab5b128ee5"). InnerVolumeSpecName "kube-api-access-cdxrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 15:13:49 crc kubenswrapper[4897]: I0228 15:13:49.522431 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1eb13fb0-3ef8-49b4-927d-1cab5b128ee5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1eb13fb0-3ef8-49b4-927d-1cab5b128ee5" (UID: "1eb13fb0-3ef8-49b4-927d-1cab5b128ee5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 15:13:49 crc kubenswrapper[4897]: I0228 15:13:49.541905 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdxrd\" (UniqueName: \"kubernetes.io/projected/1eb13fb0-3ef8-49b4-927d-1cab5b128ee5-kube-api-access-cdxrd\") on node \"crc\" DevicePath \"\"" Feb 28 15:13:49 crc kubenswrapper[4897]: I0228 15:13:49.541932 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1eb13fb0-3ef8-49b4-927d-1cab5b128ee5-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 15:13:49 crc kubenswrapper[4897]: I0228 15:13:49.541940 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1eb13fb0-3ef8-49b4-927d-1cab5b128ee5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 15:13:49 crc kubenswrapper[4897]: I0228 15:13:49.705596 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgdq5"] Feb 28 15:13:49 crc kubenswrapper[4897]: I0228 
15:13:49.717066 4897 generic.go:334] "Generic (PLEG): container finished" podID="1eb13fb0-3ef8-49b4-927d-1cab5b128ee5" containerID="dfb0c97d27e277756d2f99fcab42f6e967c7b532c945771aa31b89a35e385c47" exitCode=0 Feb 28 15:13:49 crc kubenswrapper[4897]: I0228 15:13:49.717127 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njrcs" event={"ID":"1eb13fb0-3ef8-49b4-927d-1cab5b128ee5","Type":"ContainerDied","Data":"dfb0c97d27e277756d2f99fcab42f6e967c7b532c945771aa31b89a35e385c47"} Feb 28 15:13:49 crc kubenswrapper[4897]: I0228 15:13:49.717153 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njrcs" event={"ID":"1eb13fb0-3ef8-49b4-927d-1cab5b128ee5","Type":"ContainerDied","Data":"bdf282ce7da397e2a07630b07ce650008a3d7d761d114fe2fadf2bad0c98c8a5"} Feb 28 15:13:49 crc kubenswrapper[4897]: I0228 15:13:49.717159 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-njrcs" Feb 28 15:13:49 crc kubenswrapper[4897]: I0228 15:13:49.717169 4897 scope.go:117] "RemoveContainer" containerID="dfb0c97d27e277756d2f99fcab42f6e967c7b532c945771aa31b89a35e385c47" Feb 28 15:13:49 crc kubenswrapper[4897]: I0228 15:13:49.721427 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgdq5" event={"ID":"6dbe69cc-7089-47e4-9594-729690989192","Type":"ContainerStarted","Data":"4a0cc614b80201651b09ab226619744ba1528d14f91d320d0e6bdada0e14881f"} Feb 28 15:13:49 crc kubenswrapper[4897]: I0228 15:13:49.750635 4897 scope.go:117] "RemoveContainer" containerID="af0bb2fa4dd9719597b35606dc669f71c4366c21cee02287179f595304746015" Feb 28 15:13:49 crc kubenswrapper[4897]: I0228 15:13:49.762526 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-njrcs"] Feb 28 15:13:49 crc kubenswrapper[4897]: I0228 15:13:49.771333 4897 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openshift-marketplace/certified-operators-njrcs"] Feb 28 15:13:49 crc kubenswrapper[4897]: I0228 15:13:49.773120 4897 scope.go:117] "RemoveContainer" containerID="1c8dddaab0ba81d634c0ae2fc8d3beff5757d34a39fe26ce0c9fb752c34ce615" Feb 28 15:13:49 crc kubenswrapper[4897]: I0228 15:13:49.802623 4897 scope.go:117] "RemoveContainer" containerID="dfb0c97d27e277756d2f99fcab42f6e967c7b532c945771aa31b89a35e385c47" Feb 28 15:13:49 crc kubenswrapper[4897]: E0228 15:13:49.803123 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfb0c97d27e277756d2f99fcab42f6e967c7b532c945771aa31b89a35e385c47\": container with ID starting with dfb0c97d27e277756d2f99fcab42f6e967c7b532c945771aa31b89a35e385c47 not found: ID does not exist" containerID="dfb0c97d27e277756d2f99fcab42f6e967c7b532c945771aa31b89a35e385c47" Feb 28 15:13:49 crc kubenswrapper[4897]: I0228 15:13:49.803163 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfb0c97d27e277756d2f99fcab42f6e967c7b532c945771aa31b89a35e385c47"} err="failed to get container status \"dfb0c97d27e277756d2f99fcab42f6e967c7b532c945771aa31b89a35e385c47\": rpc error: code = NotFound desc = could not find container \"dfb0c97d27e277756d2f99fcab42f6e967c7b532c945771aa31b89a35e385c47\": container with ID starting with dfb0c97d27e277756d2f99fcab42f6e967c7b532c945771aa31b89a35e385c47 not found: ID does not exist" Feb 28 15:13:49 crc kubenswrapper[4897]: I0228 15:13:49.803194 4897 scope.go:117] "RemoveContainer" containerID="af0bb2fa4dd9719597b35606dc669f71c4366c21cee02287179f595304746015" Feb 28 15:13:49 crc kubenswrapper[4897]: E0228 15:13:49.804072 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af0bb2fa4dd9719597b35606dc669f71c4366c21cee02287179f595304746015\": container with ID starting with 
af0bb2fa4dd9719597b35606dc669f71c4366c21cee02287179f595304746015 not found: ID does not exist" containerID="af0bb2fa4dd9719597b35606dc669f71c4366c21cee02287179f595304746015" Feb 28 15:13:49 crc kubenswrapper[4897]: I0228 15:13:49.804125 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af0bb2fa4dd9719597b35606dc669f71c4366c21cee02287179f595304746015"} err="failed to get container status \"af0bb2fa4dd9719597b35606dc669f71c4366c21cee02287179f595304746015\": rpc error: code = NotFound desc = could not find container \"af0bb2fa4dd9719597b35606dc669f71c4366c21cee02287179f595304746015\": container with ID starting with af0bb2fa4dd9719597b35606dc669f71c4366c21cee02287179f595304746015 not found: ID does not exist" Feb 28 15:13:49 crc kubenswrapper[4897]: I0228 15:13:49.804182 4897 scope.go:117] "RemoveContainer" containerID="1c8dddaab0ba81d634c0ae2fc8d3beff5757d34a39fe26ce0c9fb752c34ce615" Feb 28 15:13:49 crc kubenswrapper[4897]: E0228 15:13:49.804608 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c8dddaab0ba81d634c0ae2fc8d3beff5757d34a39fe26ce0c9fb752c34ce615\": container with ID starting with 1c8dddaab0ba81d634c0ae2fc8d3beff5757d34a39fe26ce0c9fb752c34ce615 not found: ID does not exist" containerID="1c8dddaab0ba81d634c0ae2fc8d3beff5757d34a39fe26ce0c9fb752c34ce615" Feb 28 15:13:49 crc kubenswrapper[4897]: I0228 15:13:49.804638 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c8dddaab0ba81d634c0ae2fc8d3beff5757d34a39fe26ce0c9fb752c34ce615"} err="failed to get container status \"1c8dddaab0ba81d634c0ae2fc8d3beff5757d34a39fe26ce0c9fb752c34ce615\": rpc error: code = NotFound desc = could not find container \"1c8dddaab0ba81d634c0ae2fc8d3beff5757d34a39fe26ce0c9fb752c34ce615\": container with ID starting with 1c8dddaab0ba81d634c0ae2fc8d3beff5757d34a39fe26ce0c9fb752c34ce615 not found: ID does not 
exist" Feb 28 15:13:50 crc kubenswrapper[4897]: I0228 15:13:50.478821 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1eb13fb0-3ef8-49b4-927d-1cab5b128ee5" path="/var/lib/kubelet/pods/1eb13fb0-3ef8-49b4-927d-1cab5b128ee5/volumes" Feb 28 15:13:50 crc kubenswrapper[4897]: I0228 15:13:50.735637 4897 generic.go:334] "Generic (PLEG): container finished" podID="6dbe69cc-7089-47e4-9594-729690989192" containerID="dd8fba7e2d770f3b07d2765c656bb13b59fc432b255f0f6035401c036a4b7d96" exitCode=0 Feb 28 15:13:50 crc kubenswrapper[4897]: I0228 15:13:50.735751 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgdq5" event={"ID":"6dbe69cc-7089-47e4-9594-729690989192","Type":"ContainerDied","Data":"dd8fba7e2d770f3b07d2765c656bb13b59fc432b255f0f6035401c036a4b7d96"} Feb 28 15:13:52 crc kubenswrapper[4897]: I0228 15:13:52.759445 4897 generic.go:334] "Generic (PLEG): container finished" podID="6dbe69cc-7089-47e4-9594-729690989192" containerID="ae62a00bce26ed644d95f405be6ece3c08ba636c6370d40482a61fc692dc6671" exitCode=0 Feb 28 15:13:52 crc kubenswrapper[4897]: I0228 15:13:52.759578 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgdq5" event={"ID":"6dbe69cc-7089-47e4-9594-729690989192","Type":"ContainerDied","Data":"ae62a00bce26ed644d95f405be6ece3c08ba636c6370d40482a61fc692dc6671"} Feb 28 15:13:53 crc kubenswrapper[4897]: I0228 15:13:53.774649 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgdq5" event={"ID":"6dbe69cc-7089-47e4-9594-729690989192","Type":"ContainerStarted","Data":"18f79df626d66f06fc3108cc58b86107bdb742e569463f93127dbd9719caba92"} Feb 28 15:13:53 crc kubenswrapper[4897]: I0228 15:13:53.815693 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rgdq5" podStartSLOduration=3.412471357 podStartE2EDuration="5.815667259s" 
podCreationTimestamp="2026-02-28 15:13:48 +0000 UTC" firstStartedPulling="2026-02-28 15:13:50.738208649 +0000 UTC m=+7044.980529336" lastFinishedPulling="2026-02-28 15:13:53.141404551 +0000 UTC m=+7047.383725238" observedRunningTime="2026-02-28 15:13:53.797672009 +0000 UTC m=+7048.039992706" watchObservedRunningTime="2026-02-28 15:13:53.815667259 +0000 UTC m=+7048.057987956" Feb 28 15:13:59 crc kubenswrapper[4897]: I0228 15:13:59.193019 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rgdq5" Feb 28 15:13:59 crc kubenswrapper[4897]: I0228 15:13:59.193676 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rgdq5" Feb 28 15:13:59 crc kubenswrapper[4897]: I0228 15:13:59.264461 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rgdq5" Feb 28 15:13:59 crc kubenswrapper[4897]: I0228 15:13:59.942164 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rgdq5" Feb 28 15:13:59 crc kubenswrapper[4897]: I0228 15:13:59.999162 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgdq5"] Feb 28 15:14:00 crc kubenswrapper[4897]: I0228 15:14:00.159774 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29538194-w5xr9"] Feb 28 15:14:00 crc kubenswrapper[4897]: E0228 15:14:00.160353 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eb13fb0-3ef8-49b4-927d-1cab5b128ee5" containerName="extract-utilities" Feb 28 15:14:00 crc kubenswrapper[4897]: I0228 15:14:00.160380 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eb13fb0-3ef8-49b4-927d-1cab5b128ee5" containerName="extract-utilities" Feb 28 15:14:00 crc kubenswrapper[4897]: E0228 15:14:00.160398 4897 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="1eb13fb0-3ef8-49b4-927d-1cab5b128ee5" containerName="registry-server" Feb 28 15:14:00 crc kubenswrapper[4897]: I0228 15:14:00.160408 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eb13fb0-3ef8-49b4-927d-1cab5b128ee5" containerName="registry-server" Feb 28 15:14:00 crc kubenswrapper[4897]: E0228 15:14:00.160458 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eb13fb0-3ef8-49b4-927d-1cab5b128ee5" containerName="extract-content" Feb 28 15:14:00 crc kubenswrapper[4897]: I0228 15:14:00.160469 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eb13fb0-3ef8-49b4-927d-1cab5b128ee5" containerName="extract-content" Feb 28 15:14:00 crc kubenswrapper[4897]: I0228 15:14:00.161016 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="1eb13fb0-3ef8-49b4-927d-1cab5b128ee5" containerName="registry-server" Feb 28 15:14:00 crc kubenswrapper[4897]: I0228 15:14:00.161956 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538194-w5xr9" Feb 28 15:14:00 crc kubenswrapper[4897]: I0228 15:14:00.166433 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 15:14:00 crc kubenswrapper[4897]: I0228 15:14:00.166624 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-g6bdw" Feb 28 15:14:00 crc kubenswrapper[4897]: I0228 15:14:00.169800 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 15:14:00 crc kubenswrapper[4897]: I0228 15:14:00.191944 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538194-w5xr9"] Feb 28 15:14:00 crc kubenswrapper[4897]: I0228 15:14:00.305824 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nggn\" (UniqueName: 
\"kubernetes.io/projected/d9d79072-eec3-4137-bf86-5adacaac8ab8-kube-api-access-8nggn\") pod \"auto-csr-approver-29538194-w5xr9\" (UID: \"d9d79072-eec3-4137-bf86-5adacaac8ab8\") " pod="openshift-infra/auto-csr-approver-29538194-w5xr9" Feb 28 15:14:00 crc kubenswrapper[4897]: I0228 15:14:00.409627 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nggn\" (UniqueName: \"kubernetes.io/projected/d9d79072-eec3-4137-bf86-5adacaac8ab8-kube-api-access-8nggn\") pod \"auto-csr-approver-29538194-w5xr9\" (UID: \"d9d79072-eec3-4137-bf86-5adacaac8ab8\") " pod="openshift-infra/auto-csr-approver-29538194-w5xr9" Feb 28 15:14:00 crc kubenswrapper[4897]: I0228 15:14:00.435657 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nggn\" (UniqueName: \"kubernetes.io/projected/d9d79072-eec3-4137-bf86-5adacaac8ab8-kube-api-access-8nggn\") pod \"auto-csr-approver-29538194-w5xr9\" (UID: \"d9d79072-eec3-4137-bf86-5adacaac8ab8\") " pod="openshift-infra/auto-csr-approver-29538194-w5xr9" Feb 28 15:14:00 crc kubenswrapper[4897]: I0228 15:14:00.498930 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29538194-w5xr9" Feb 28 15:14:01 crc kubenswrapper[4897]: I0228 15:14:01.009499 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29538194-w5xr9"] Feb 28 15:14:01 crc kubenswrapper[4897]: I0228 15:14:01.889816 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538194-w5xr9" event={"ID":"d9d79072-eec3-4137-bf86-5adacaac8ab8","Type":"ContainerStarted","Data":"ffb45608fc47b04456f9f127ca96bdb837f29aac8243c91af624855e4eea6a95"} Feb 28 15:14:01 crc kubenswrapper[4897]: I0228 15:14:01.889984 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rgdq5" podUID="6dbe69cc-7089-47e4-9594-729690989192" containerName="registry-server" containerID="cri-o://18f79df626d66f06fc3108cc58b86107bdb742e569463f93127dbd9719caba92" gracePeriod=2 Feb 28 15:14:02 crc kubenswrapper[4897]: I0228 15:14:02.552030 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgdq5" Feb 28 15:14:02 crc kubenswrapper[4897]: I0228 15:14:02.662916 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vp5r5\" (UniqueName: \"kubernetes.io/projected/6dbe69cc-7089-47e4-9594-729690989192-kube-api-access-vp5r5\") pod \"6dbe69cc-7089-47e4-9594-729690989192\" (UID: \"6dbe69cc-7089-47e4-9594-729690989192\") " Feb 28 15:14:02 crc kubenswrapper[4897]: I0228 15:14:02.663390 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dbe69cc-7089-47e4-9594-729690989192-utilities\") pod \"6dbe69cc-7089-47e4-9594-729690989192\" (UID: \"6dbe69cc-7089-47e4-9594-729690989192\") " Feb 28 15:14:02 crc kubenswrapper[4897]: I0228 15:14:02.663592 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dbe69cc-7089-47e4-9594-729690989192-catalog-content\") pod \"6dbe69cc-7089-47e4-9594-729690989192\" (UID: \"6dbe69cc-7089-47e4-9594-729690989192\") " Feb 28 15:14:02 crc kubenswrapper[4897]: I0228 15:14:02.664541 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dbe69cc-7089-47e4-9594-729690989192-utilities" (OuterVolumeSpecName: "utilities") pod "6dbe69cc-7089-47e4-9594-729690989192" (UID: "6dbe69cc-7089-47e4-9594-729690989192"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 15:14:02 crc kubenswrapper[4897]: I0228 15:14:02.666466 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dbe69cc-7089-47e4-9594-729690989192-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6dbe69cc-7089-47e4-9594-729690989192" (UID: "6dbe69cc-7089-47e4-9594-729690989192"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 15:14:02 crc kubenswrapper[4897]: I0228 15:14:02.698413 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dbe69cc-7089-47e4-9594-729690989192-kube-api-access-vp5r5" (OuterVolumeSpecName: "kube-api-access-vp5r5") pod "6dbe69cc-7089-47e4-9594-729690989192" (UID: "6dbe69cc-7089-47e4-9594-729690989192"). InnerVolumeSpecName "kube-api-access-vp5r5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 15:14:02 crc kubenswrapper[4897]: I0228 15:14:02.765595 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dbe69cc-7089-47e4-9594-729690989192-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 15:14:02 crc kubenswrapper[4897]: I0228 15:14:02.765634 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dbe69cc-7089-47e4-9594-729690989192-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 15:14:02 crc kubenswrapper[4897]: I0228 15:14:02.765652 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vp5r5\" (UniqueName: \"kubernetes.io/projected/6dbe69cc-7089-47e4-9594-729690989192-kube-api-access-vp5r5\") on node \"crc\" DevicePath \"\"" Feb 28 15:14:02 crc kubenswrapper[4897]: I0228 15:14:02.909052 4897 generic.go:334] "Generic (PLEG): container finished" podID="6dbe69cc-7089-47e4-9594-729690989192" containerID="18f79df626d66f06fc3108cc58b86107bdb742e569463f93127dbd9719caba92" exitCode=0 Feb 28 15:14:02 crc kubenswrapper[4897]: I0228 15:14:02.909103 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgdq5" event={"ID":"6dbe69cc-7089-47e4-9594-729690989192","Type":"ContainerDied","Data":"18f79df626d66f06fc3108cc58b86107bdb742e569463f93127dbd9719caba92"} Feb 28 15:14:02 crc kubenswrapper[4897]: I0228 15:14:02.909164 4897 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgdq5" Feb 28 15:14:02 crc kubenswrapper[4897]: I0228 15:14:02.909196 4897 scope.go:117] "RemoveContainer" containerID="18f79df626d66f06fc3108cc58b86107bdb742e569463f93127dbd9719caba92" Feb 28 15:14:02 crc kubenswrapper[4897]: I0228 15:14:02.909177 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgdq5" event={"ID":"6dbe69cc-7089-47e4-9594-729690989192","Type":"ContainerDied","Data":"4a0cc614b80201651b09ab226619744ba1528d14f91d320d0e6bdada0e14881f"} Feb 28 15:14:02 crc kubenswrapper[4897]: I0228 15:14:02.912020 4897 generic.go:334] "Generic (PLEG): container finished" podID="d9d79072-eec3-4137-bf86-5adacaac8ab8" containerID="9fa254715d069553111eda1c1def476d9f5f455e8d831f297f285fe00f312ab0" exitCode=0 Feb 28 15:14:02 crc kubenswrapper[4897]: I0228 15:14:02.912062 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538194-w5xr9" event={"ID":"d9d79072-eec3-4137-bf86-5adacaac8ab8","Type":"ContainerDied","Data":"9fa254715d069553111eda1c1def476d9f5f455e8d831f297f285fe00f312ab0"} Feb 28 15:14:02 crc kubenswrapper[4897]: I0228 15:14:02.943575 4897 scope.go:117] "RemoveContainer" containerID="ae62a00bce26ed644d95f405be6ece3c08ba636c6370d40482a61fc692dc6671" Feb 28 15:14:02 crc kubenswrapper[4897]: I0228 15:14:02.993253 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgdq5"] Feb 28 15:14:02 crc kubenswrapper[4897]: I0228 15:14:02.993522 4897 scope.go:117] "RemoveContainer" containerID="dd8fba7e2d770f3b07d2765c656bb13b59fc432b255f0f6035401c036a4b7d96" Feb 28 15:14:03 crc kubenswrapper[4897]: I0228 15:14:03.011185 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgdq5"] Feb 28 15:14:03 crc kubenswrapper[4897]: I0228 15:14:03.107677 4897 scope.go:117] "RemoveContainer" 
containerID="18f79df626d66f06fc3108cc58b86107bdb742e569463f93127dbd9719caba92" Feb 28 15:14:03 crc kubenswrapper[4897]: E0228 15:14:03.108397 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18f79df626d66f06fc3108cc58b86107bdb742e569463f93127dbd9719caba92\": container with ID starting with 18f79df626d66f06fc3108cc58b86107bdb742e569463f93127dbd9719caba92 not found: ID does not exist" containerID="18f79df626d66f06fc3108cc58b86107bdb742e569463f93127dbd9719caba92" Feb 28 15:14:03 crc kubenswrapper[4897]: I0228 15:14:03.108471 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18f79df626d66f06fc3108cc58b86107bdb742e569463f93127dbd9719caba92"} err="failed to get container status \"18f79df626d66f06fc3108cc58b86107bdb742e569463f93127dbd9719caba92\": rpc error: code = NotFound desc = could not find container \"18f79df626d66f06fc3108cc58b86107bdb742e569463f93127dbd9719caba92\": container with ID starting with 18f79df626d66f06fc3108cc58b86107bdb742e569463f93127dbd9719caba92 not found: ID does not exist" Feb 28 15:14:03 crc kubenswrapper[4897]: I0228 15:14:03.108522 4897 scope.go:117] "RemoveContainer" containerID="ae62a00bce26ed644d95f405be6ece3c08ba636c6370d40482a61fc692dc6671" Feb 28 15:14:03 crc kubenswrapper[4897]: E0228 15:14:03.109230 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae62a00bce26ed644d95f405be6ece3c08ba636c6370d40482a61fc692dc6671\": container with ID starting with ae62a00bce26ed644d95f405be6ece3c08ba636c6370d40482a61fc692dc6671 not found: ID does not exist" containerID="ae62a00bce26ed644d95f405be6ece3c08ba636c6370d40482a61fc692dc6671" Feb 28 15:14:03 crc kubenswrapper[4897]: I0228 15:14:03.109298 4897 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ae62a00bce26ed644d95f405be6ece3c08ba636c6370d40482a61fc692dc6671"} err="failed to get container status \"ae62a00bce26ed644d95f405be6ece3c08ba636c6370d40482a61fc692dc6671\": rpc error: code = NotFound desc = could not find container \"ae62a00bce26ed644d95f405be6ece3c08ba636c6370d40482a61fc692dc6671\": container with ID starting with ae62a00bce26ed644d95f405be6ece3c08ba636c6370d40482a61fc692dc6671 not found: ID does not exist" Feb 28 15:14:03 crc kubenswrapper[4897]: I0228 15:14:03.109404 4897 scope.go:117] "RemoveContainer" containerID="dd8fba7e2d770f3b07d2765c656bb13b59fc432b255f0f6035401c036a4b7d96" Feb 28 15:14:03 crc kubenswrapper[4897]: E0228 15:14:03.109963 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd8fba7e2d770f3b07d2765c656bb13b59fc432b255f0f6035401c036a4b7d96\": container with ID starting with dd8fba7e2d770f3b07d2765c656bb13b59fc432b255f0f6035401c036a4b7d96 not found: ID does not exist" containerID="dd8fba7e2d770f3b07d2765c656bb13b59fc432b255f0f6035401c036a4b7d96" Feb 28 15:14:03 crc kubenswrapper[4897]: I0228 15:14:03.110003 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd8fba7e2d770f3b07d2765c656bb13b59fc432b255f0f6035401c036a4b7d96"} err="failed to get container status \"dd8fba7e2d770f3b07d2765c656bb13b59fc432b255f0f6035401c036a4b7d96\": rpc error: code = NotFound desc = could not find container \"dd8fba7e2d770f3b07d2765c656bb13b59fc432b255f0f6035401c036a4b7d96\": container with ID starting with dd8fba7e2d770f3b07d2765c656bb13b59fc432b255f0f6035401c036a4b7d96 not found: ID does not exist" Feb 28 15:14:03 crc kubenswrapper[4897]: I0228 15:14:03.370997 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 15:14:03 crc kubenswrapper[4897]: I0228 15:14:03.371293 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 15:14:04 crc kubenswrapper[4897]: I0228 15:14:04.398026 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538194-w5xr9" Feb 28 15:14:04 crc kubenswrapper[4897]: I0228 15:14:04.477743 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dbe69cc-7089-47e4-9594-729690989192" path="/var/lib/kubelet/pods/6dbe69cc-7089-47e4-9594-729690989192/volumes" Feb 28 15:14:04 crc kubenswrapper[4897]: I0228 15:14:04.504868 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nggn\" (UniqueName: \"kubernetes.io/projected/d9d79072-eec3-4137-bf86-5adacaac8ab8-kube-api-access-8nggn\") pod \"d9d79072-eec3-4137-bf86-5adacaac8ab8\" (UID: \"d9d79072-eec3-4137-bf86-5adacaac8ab8\") " Feb 28 15:14:04 crc kubenswrapper[4897]: I0228 15:14:04.515641 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9d79072-eec3-4137-bf86-5adacaac8ab8-kube-api-access-8nggn" (OuterVolumeSpecName: "kube-api-access-8nggn") pod "d9d79072-eec3-4137-bf86-5adacaac8ab8" (UID: "d9d79072-eec3-4137-bf86-5adacaac8ab8"). InnerVolumeSpecName "kube-api-access-8nggn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 15:14:04 crc kubenswrapper[4897]: I0228 15:14:04.608499 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nggn\" (UniqueName: \"kubernetes.io/projected/d9d79072-eec3-4137-bf86-5adacaac8ab8-kube-api-access-8nggn\") on node \"crc\" DevicePath \"\"" Feb 28 15:14:04 crc kubenswrapper[4897]: I0228 15:14:04.947012 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29538194-w5xr9" event={"ID":"d9d79072-eec3-4137-bf86-5adacaac8ab8","Type":"ContainerDied","Data":"ffb45608fc47b04456f9f127ca96bdb837f29aac8243c91af624855e4eea6a95"} Feb 28 15:14:04 crc kubenswrapper[4897]: I0228 15:14:04.947070 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffb45608fc47b04456f9f127ca96bdb837f29aac8243c91af624855e4eea6a95" Feb 28 15:14:04 crc kubenswrapper[4897]: I0228 15:14:04.947180 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29538194-w5xr9" Feb 28 15:14:05 crc kubenswrapper[4897]: I0228 15:14:05.508582 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29538188-xxktt"] Feb 28 15:14:05 crc kubenswrapper[4897]: I0228 15:14:05.518868 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29538188-xxktt"] Feb 28 15:14:06 crc kubenswrapper[4897]: I0228 15:14:06.477375 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b18cea1a-5cd9-4e95-b89a-6345e4b812f2" path="/var/lib/kubelet/pods/b18cea1a-5cd9-4e95-b89a-6345e4b812f2/volumes" Feb 28 15:14:33 crc kubenswrapper[4897]: I0228 15:14:33.370978 4897 patch_prober.go:28] interesting pod/machine-config-daemon-brq22 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body=
Feb 28 15:14:33 crc kubenswrapper[4897]: I0228 15:14:33.372620 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brq22" podUID="6c4091e4-3a55-4913-81f3-026a1a97c57c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 28 15:14:55 crc kubenswrapper[4897]: I0228 15:14:55.374989 4897 scope.go:117] "RemoveContainer" containerID="b3f2005529d52556768067b4f3313c44a544b9eb4ab0fe78abcfd0511c25f66d"
Feb 28 15:15:00 crc kubenswrapper[4897]: I0228 15:15:00.197205 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29538195-2bjhd"]
Feb 28 15:15:00 crc kubenswrapper[4897]: E0228 15:15:00.198187 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dbe69cc-7089-47e4-9594-729690989192" containerName="extract-utilities"
Feb 28 15:15:00 crc kubenswrapper[4897]: I0228 15:15:00.198202 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dbe69cc-7089-47e4-9594-729690989192" containerName="extract-utilities"
Feb 28 15:15:00 crc kubenswrapper[4897]: E0228 15:15:00.198219 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dbe69cc-7089-47e4-9594-729690989192" containerName="extract-content"
Feb 28 15:15:00 crc kubenswrapper[4897]: I0228 15:15:00.198227 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dbe69cc-7089-47e4-9594-729690989192" containerName="extract-content"
Feb 28 15:15:00 crc kubenswrapper[4897]: E0228 15:15:00.198240 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dbe69cc-7089-47e4-9594-729690989192" containerName="registry-server"
Feb 28 15:15:00 crc kubenswrapper[4897]: I0228 15:15:00.198248 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dbe69cc-7089-47e4-9594-729690989192" containerName="registry-server"
Feb 28 15:15:00 crc kubenswrapper[4897]: E0228 15:15:00.198264 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9d79072-eec3-4137-bf86-5adacaac8ab8" containerName="oc"
Feb 28 15:15:00 crc kubenswrapper[4897]: I0228 15:15:00.198272 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9d79072-eec3-4137-bf86-5adacaac8ab8" containerName="oc"
Feb 28 15:15:00 crc kubenswrapper[4897]: I0228 15:15:00.198559 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dbe69cc-7089-47e4-9594-729690989192" containerName="registry-server"
Feb 28 15:15:00 crc kubenswrapper[4897]: I0228 15:15:00.198587 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9d79072-eec3-4137-bf86-5adacaac8ab8" containerName="oc"
Feb 28 15:15:00 crc kubenswrapper[4897]: I0228 15:15:00.199458 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29538195-2bjhd"
Feb 28 15:15:00 crc kubenswrapper[4897]: I0228 15:15:00.203819 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 28 15:15:00 crc kubenswrapper[4897]: I0228 15:15:00.204165 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 28 15:15:00 crc kubenswrapper[4897]: I0228 15:15:00.214479 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29538195-2bjhd"]
Feb 28 15:15:00 crc kubenswrapper[4897]: I0228 15:15:00.290927 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e340969b-faa0-4989-ba71-788e4f2b3ddc-config-volume\") pod \"collect-profiles-29538195-2bjhd\" (UID: \"e340969b-faa0-4989-ba71-788e4f2b3ddc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538195-2bjhd"
Feb 28 15:15:00 crc kubenswrapper[4897]: I0228 15:15:00.291041 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldf5l\" (UniqueName: \"kubernetes.io/projected/e340969b-faa0-4989-ba71-788e4f2b3ddc-kube-api-access-ldf5l\") pod \"collect-profiles-29538195-2bjhd\" (UID: \"e340969b-faa0-4989-ba71-788e4f2b3ddc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538195-2bjhd"
Feb 28 15:15:00 crc kubenswrapper[4897]: I0228 15:15:00.291175 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e340969b-faa0-4989-ba71-788e4f2b3ddc-secret-volume\") pod \"collect-profiles-29538195-2bjhd\" (UID: \"e340969b-faa0-4989-ba71-788e4f2b3ddc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538195-2bjhd"
Feb 28 15:15:00 crc kubenswrapper[4897]: I0228 15:15:00.392765 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldf5l\" (UniqueName: \"kubernetes.io/projected/e340969b-faa0-4989-ba71-788e4f2b3ddc-kube-api-access-ldf5l\") pod \"collect-profiles-29538195-2bjhd\" (UID: \"e340969b-faa0-4989-ba71-788e4f2b3ddc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538195-2bjhd"
Feb 28 15:15:00 crc kubenswrapper[4897]: I0228 15:15:00.392987 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e340969b-faa0-4989-ba71-788e4f2b3ddc-secret-volume\") pod \"collect-profiles-29538195-2bjhd\" (UID: \"e340969b-faa0-4989-ba71-788e4f2b3ddc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538195-2bjhd"
Feb 28 15:15:00 crc kubenswrapper[4897]: I0228 15:15:00.393080 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e340969b-faa0-4989-ba71-788e4f2b3ddc-config-volume\") pod \"collect-profiles-29538195-2bjhd\" (UID: \"e340969b-faa0-4989-ba71-788e4f2b3ddc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538195-2bjhd"
Feb 28 15:15:00 crc kubenswrapper[4897]: I0228 15:15:00.394447 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e340969b-faa0-4989-ba71-788e4f2b3ddc-config-volume\") pod \"collect-profiles-29538195-2bjhd\" (UID: \"e340969b-faa0-4989-ba71-788e4f2b3ddc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538195-2bjhd"
Feb 28 15:15:00 crc kubenswrapper[4897]: I0228 15:15:00.402498 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e340969b-faa0-4989-ba71-788e4f2b3ddc-secret-volume\") pod \"collect-profiles-29538195-2bjhd\" (UID: \"e340969b-faa0-4989-ba71-788e4f2b3ddc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538195-2bjhd"
Feb 28 15:15:00 crc kubenswrapper[4897]: I0228 15:15:00.415364 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldf5l\" (UniqueName: \"kubernetes.io/projected/e340969b-faa0-4989-ba71-788e4f2b3ddc-kube-api-access-ldf5l\") pod \"collect-profiles-29538195-2bjhd\" (UID: \"e340969b-faa0-4989-ba71-788e4f2b3ddc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29538195-2bjhd"
Feb 28 15:15:00 crc kubenswrapper[4897]: I0228 15:15:00.554028 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29538195-2bjhd"
Feb 28 15:15:01 crc kubenswrapper[4897]: I0228 15:15:01.063433 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29538195-2bjhd"]
Feb 28 15:15:01 crc kubenswrapper[4897]: I0228 15:15:01.619921 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29538195-2bjhd" event={"ID":"e340969b-faa0-4989-ba71-788e4f2b3ddc","Type":"ContainerStarted","Data":"7874c73d0e950417870e5e6a7a1f222be30568d0c8f1873ced49fba8d4cbebaa"}
Feb 28 15:15:01 crc kubenswrapper[4897]: I0228 15:15:01.620384 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29538195-2bjhd" event={"ID":"e340969b-faa0-4989-ba71-788e4f2b3ddc","Type":"ContainerStarted","Data":"46adcbeaec9620ccf62f50adb9a3325e9bc7e54589554bb30a792a22f7893d1d"}
Feb 28 15:15:01 crc kubenswrapper[4897]: I0228 15:15:01.639671 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29538195-2bjhd" podStartSLOduration=1.639649624 podStartE2EDuration="1.639649624s" podCreationTimestamp="2026-02-28 15:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 15:15:01.63597748 +0000 UTC m=+7115.878298167" watchObservedRunningTime="2026-02-28 15:15:01.639649624 +0000 UTC m=+7115.881970301"